is achievable, but strategies to engage must address individualised social needs and circumstances, rather than be superficial and tokenistic (Ratna, Lawrence and Partington, 2016). --- Development and understanding of social cohesion Stage one provided the opportunity for the young people to critically discuss complex issues that affect them on a daily basis. All respondents were able to articulate clearly the impact these issues were having on their everyday lives, but understandably, they were less clear on ways in which these issues might be addressed. This was the task undertaken during stage two of this research in which respondents were encouraged to engage with youth centre practitioners and one another on potential mechanisms and strategies that might be employed to encourage greater social cohesion through a sport initiative. Irrespective of their ability to articulate solutions to the issues discussed in the previous section, the young people were initially very enthusiastic about getting involved in a community project and they showed a great sense of belief that they could make a difference. Mohammed, one of two youth workers consulted during this research said: That is the one thing that they have achieved: that through their own minds they have understood the issues that affect the town. They have a real concern about what is going on and they feel responsibility about how they can change something... They thought they could change the world and this is the attitude that we want young people to have. Mohammed went on to say that a primary consideration for youth worker practitioners was managing the young people's expectations about what could be achieved and what success might realistically look like. The starting point for stage two was developing an understanding of what social cohesion meant for the young people. All participants were able to articulate some sense of what social cohesion meant, i.e., developing harmonious communities. 
Others, however, went further, not only describing what social cohesion meant but also developing ideas about how it might be facilitated and what its potential impacts could be. As Andrew, one of two youth centre managers involved in the project said: For some within the group it has been an opportunity to channel their ideas and passion around cohesion. For others it has been a process where they are learning about cohesion and how it can impact on their lives and communities. The young people spoke about the importance of, not only bringing different groups of people together, but doing so in a sustainable way. Rohaib (South/British Asian, aged 16) used the metaphor of a bridge to express the value of sustainable development: If you think of people joining together to make a bridge we need the foundations there together to keep the trust together to keep the bridge together. Without that foundation the bridge is nothing. We need to build the trust together; it is the foundations that keep the bridge stable. Developing greater levels of social cohesion within the borough is clearly a huge challenge on a number of levels. For many years, causative factors such as ethnic segregation have been allowed to manifest largely unchallenged. Whilst steps have been made to address this issue, there is still a large amount of ethnic separation. For the purposes of this paper we asked participants about the potential roles of sport in contributing towards social cohesion within this divided community. Their views are reflected in the next section. --- The potential role of sport Respondents spoke positively about their own experiences of participating in sport and of the wider role of sport in their lives. While clearly influential on a personal level, respondents questioned the overall capacity of sport to play any meaningful role in working towards greater levels of social cohesion. 
In particular, respondents were quick to observe the limited impact sport can make towards challenging deeply entrenched inequalities and perceptions of difference held by various ethnic groups (see Hylton and Morpeth, 2012). As Matthew (White, aged 17) said: I don't think that sport can probably play any role in bringing people together in the community; well not a right lot anyway. It has the potential to work but I doubt that it will. It is hard to get your views across to people and for them to listen. It is going to be hard for them to change the way they think about certain groups. More positively however, there was evidence that, through studying and participating in sport at the college and through being involved in sport at the youth centre, some respondents identified a number of instances where their experiences had challenged (and in some cases changed) their attitudes and perceptions towards people from different ethnic groups. Reflecting on the changes witnessed since the youth centre opened, Ismail (South/British Asian, aged 17) said: I can remember when this first opened and we came in and that night there was a fight. But things like meetings, trips, posters and interactions, have made the lads get on. There were shit loads of fights, but now we are further down the time line. People are just chilling out and getting along. Similarly, Jamaal (South/British Asian, aged 18) spoke of the positive integrative effects of attending the youth centre and playing in mixed ethnicity football teams: It has changed my opinion. I am not saying that I hated White people but I never used to hang around with them; so I didn't know how it was to be with White people. Right now, when I see my White friends from here I think they are actually really good guys, my opinions have changed a lot. 
This evidence suggests that where there are clearly significant issues within the borough in terms of segregation, the opportunity is still there to instigate an element of meaningful change. But the literature is divided on the transformative potential of sport. Hutchins (2007) for example, discussed how sport can exacerbate difference rather than overcome it, reinforcing group boundaries and intergroup relationships, rather than breaking them down. In a clear statement of his position he proposes that rather than ask how sport can contribute to social cohesion, we should consider how sport can help negotiate the 'inevitability' of cultural conflict and difference. The position adopted in this paper is that, where sport matters in people's lives, it can lead to meaningful change. However, irrespective of its level of importance, change must be co-created, reciprocal and participatory; involving young people from design, through delivery and evaluation. The importance of involvement and ownership was reinforced by Andrew (youth work manager): Anything led by young people has an amazing strength. Too often things are done to people rather than done with. It can enable young people to make a lasting change. The success of any developmental approach that centralises the involvement of young people is highly contingent on their commitment. As we have already suggested, the young people involved in this research were less committed in practice than they were in theory. They spoke enthusiastically about the 'idea' of the sport initiative and made excellent and meaningful suggestions for its implementation, but when it came to the point where they were going to have to deliver their ideas, a number of young people withdrew from the process, leading to the abandonment and ultimate failure of the initiative. 
--- Reflections on the process Their decision to disengage has to be put into context and so it is worth briefly reiterating the type of young people the project involved. In general they were from a low socio-economic background and had a history of disengagement within both education and society. We are conscious of avoiding essentialism and of reinforcing negative stereotypes, but they were, in the truest sense of the phrase, 'hard to engage' and, for a variety of reasons, this was unlikely to have changed through their participation in this initiative. Sohaib (youth worker) believed that the principal barriers facing most of the young people were issues in their day-to-day lives that neither the youth work practitioners nor Meir were ever fully enlightened on: They have faced a challenge with committing to the project; they have faced basic challenges in terms of attendance. A lot of them have failed with this, to be honest, due to the other issues that are going on in their lives. This point should not be understated. We maintain that the notion of co-production is fundamental for many community sport initiatives with young people, but if those young people are not suitably skilled to lead these, any initiative becomes like (reluctantly stated) 'the blind leading the blind'. As Heather (youth centre manager) stressed: We have promoted the idea that the young people are the drivers of the project, we have stood off and hoped that they would take the lead and to take on the responsibility. Further issues were caused by the nature of the intended outcome, i.e., an initiative based around sport. Criticism from those involved within the project emphasised that the initiative was overly prescriptive from the start and that they would have preferred to have greater autonomy over the focus of activities. 
Andrew (youth centre manager) for instance, believed that greater emphasis needed to have been placed on empowering the young people through skill development: What we have asked them to do is a big task. There needed to be skills built in to the programme to identify the needs of the young people. They needed to be asked what they needed to make it work. This comment opens up further criticism over the nature of the intended participatory approach favoured by the research team, and questions whether the project actually realised its participatory ambition. Andrew continued to reflect on the need for greater participation and autonomy: It needs to be a case of them determining the process; it was perhaps too rigid and narrow. I would still use a participatory model but I would be more open and less prescriptive about what direction it could take. For Andrew, because the initiative was not allowed to emerge organically, it was, at its core, prescriptive and therefore, flawed. Furthermore, for Heather (youth work manager), increasing engagement and understanding relies heavily on emphasising positive experiences: Ideally you want the project to develop from a flashpoint. It can come from something that someone has said or from what young people have been speaking about with each other. Then you say 'what shall we do about this?' and you can start the project from there. Andrew (youth centre manager) captured this argument in his suggestion that a more fruitful approach might have been to focus on a greater number of smaller projects aimed at specific communities: If I was to do it again I would look to work with small groups from different communities and get them to plan and develop smaller projects in each other's communities rather than creating something large. Making it bigger does not necessarily make it better or more appropriate for the development of cohesion in the town. 
The suggestion here is that participants must be enabled to develop a sense of civic engagement and critical awareness which goes beyond either sport or community development, emphasising instead, wider socio-political contexts for development. --- Conclusion The purpose of this paper was to reflect on the challenges associated with co-producing a participatory community sport initiative in working towards greater social cohesion in an ethnically segregated borough in North West England. Our starting point was to establish the nature and extent of this division, as experienced by a group of 28 White and South/British Asian young people living in this borough. We have demonstrated through both the primary and secondary data that ethnic divisions do exist within this borough. These divisions are, in large part, due to perceptions that local/regional White identities are under threat. Such perceptions have emerged due to, among other things, significant levels of migration and settlement from people of South Asian descent, the associated promotion and protection of White cultural traditions, direct and indirect racism and the creation of and subsequent racialisation of portions of the borough. It is important to acknowledge however, that diversity and division are not the same thing and that those who work in sport development and campaign for greater social justice, must establish new mechanisms for developing and embracing diversity without reinforcing divisions. The most effective way of doing this is to develop a strategic approach that connects individual development and community development with social change. If implemented effectively, PAR is one such approach. We have discussed already how the initiative proposed at the outset of this paper failed to materialise. The original intention was to deliver a community sport initiative that was developed and designed by young people. 
The complexity of applying PAR methods to developing social cohesion through sport within the borough was clearly underestimated. However, it can be justifiably argued that, while the intended project was a failure, the project did bring about valuable secondary outcomes; primarily through learning opportunities experienced by the participants, researchers and youth practitioners, some of which we have extrapolated here. Criticisms of the project, as reflected upon by the young people and youth practitioners alike, highlight the potential limitations of both sport as a stand-alone entity being used to address highly complex issues such as social cohesion, as well as the application of PAR in this context. We do not dispute these flaws, but we do maintain that there are benefits of using this approach, such as its potential to engage young people, the way it encourages participants to take ownership of an initiative and how it ensures that those who are directly affected by the issues have the opportunity to resolve them independently and through consultation. This paper reinforces the view that there is no silver bullet; the only way to fully extrapolate what is required and, therefore, to instigate meaningful change is to fully understand the needs, wants and desires of those for whom the change is intended. Sport, in the context of this research, does not have the capacity to transform the social system. It does however, have the capacity to instigate change on a micro level; to create a shift in the collective consciousness of community members, which should not be discounted. The original ideas behind the project discussed here were unashamedly utopic - though by no means unrealistic - but were tempered by the reality of application. There are many positives that can be taken from this process; not least, a greater understanding has been realised about how participatory work may be applied in similar contexts in the future. 
This paper is unable to provide solutions to the many complex issues facing residents of this borough in North West England, but it does provide a starting point for resolving them through a desire to adopt participatory methodologies such as PAR, within a social transformative framework. There is clear value in approaching complex problems in an emancipatory way, but this must be supported more widely by other organisations, policy makers etc. The goal of creating a socially inclusive world, which is both necessary and realistic, cannot be solely a matter of the right policy or the right time. If racism and racial inequality in all aspects of life (including sport) are to cease to be of significance, then any analysis needs to be related to broader relations of power in the culture of sport and society (Carrington et al., 2016; Long et al., 2017). The original aims of this project were clearly not met, though ironically, perhaps this paper's greatest strength is the very acknowledgement of that. When reading academic work we assume, in large part, that the data therein reflects project success. After all, this is largely why we seek to publish; to share our findings and wisdom in the hope (and expectation?) that they might be transformational and promote positive impact, whatever that might be in a specific context. Why would we want to showcase 'failed' research? Of course, research will not and does not always succeed in what it sets out to do. As researchers, we ought not to shy away from reflecting on our failures. Indeed, while we are less likely to boast about our failures and our failures certainly do not make their way into University news headlines or get included in REF impact case studies 4, in communicating failure, we are sharing valuable lessons that can be taken and translated into other (hopefully more successful) contexts. --- Notes 1. 
In the case of this paper we use the term borough rather than town because the borough on which this paper is based contains two distinct towns; data pertaining to each town individually is not available. Therefore, it is impossible to refer to one without the other, hence their conflation. 2. We employ the term South Asian to describe individuals and communities with roots on the Indian subcontinent. The term British Asian is used to refer to those British citizens who trace their ancestry back to, or who themselves migrated from, the Indian subcontinent. It is employed as a dynamic category and its application has no firm boundaries. 3. A ward is a local authority area, typically used for electoral purposes. 4. The Research Excellence Framework (REF) was the first exercise to assess the impact of research outside of academia. Impact was defined as "an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia" (HEFCE, 2016). --- Summary Sports are popularly believed to have positive integrative functions and are thought therefore, to be able to galvanise different, and sometimes divided, communities through a shared sporting interest. UK Government and policy rhetoric over the last two decades has consistently emphasised the positive role sport can play in building more cohesive, empowered and active communities. These positive impacts are particularly important for communities with high numbers of young people from disadvantaged backgrounds. The purpose of this paper is to reflect on the challenges associated with co-producing a participatory community sport initiative with 28 young people in working towards greater social cohesion in an ethnically segregated borough in North West England. Although a great deal was learnt from working towards this, the initiative was ultimately unsuccessful as young people, for a variety of reasons, removed themselves from the process. 
A major contribution of this paper is how we reflect on the realities of project failure and how future community sport initiatives might have greater success. In particular, we argue that for sport to make a difference, participants must be enabled to develop a sense of civic engagement and critical awareness which go beyond either sport or community development, emphasising instead, wider socio-political development.
Introduction Feminism is defined in the Oxford Learner's Dictionary as "the belief and aim that women should have the same rights and opportunities as men; the struggle to achieve this aim". According to another online source, Merriam-Webster, feminism is essentially a perspective that explores inequalities and inequities between genders, sexes, sexualities, and gender identities. Historically, feminism has evolved considerably, from speaking specifically about the sexes to addressing gender identities and sexualities. Feminism aims to focus on the inequalities created by the intersectionality of sex, gender, class, race, and sexuality (Day, 2016). To know more about feminism, its history needs to be explored: where and how it started, what its purposes were, and the journey it has taken. Feminism is commonly described as spreading over three waves, but even before these, women were already facing such issues and working to resolve them. There is little definitive evidence of the earliest protests, but it is believed that in the 3rd century BCE, Roman women gathered on the Capitoline Hill to resist Marcus Porcius Cato, who sought to limit women's right to the use of expensive goods. This was only the start, and the limited recorded history offers a few pieces of evidence of women's struggles. In France, in the late 14th and early 15th centuries, the first feminist philosopher, Christine de Pisan, raised her voice against derogatory attitudes towards women's education (Brunell & Burkett, 2002). In the mid-1800s the suffrage movement began, which is considered a major milestone of feminism. In 1848, Lucretia Mott, Elizabeth Cady Stanton and other social activists, mostly women along with a few men, gathered in Seneca Falls, a small town in New York. The Seneca Falls Convention holds great significance in the history of feminism, as it was there that Stanton drew up the 'Declaration of Sentiments'. 
This declaration consisted of 11 resolutions, in which she demanded the most radical right: the right to vote. While white women were raising their voices for the right to vote and to education, there was also the voice of a black woman, Sojourner Truth. Truth spoke out against the difference in treatment between elite-class and lower-class women. Her iconic speech 'Ain't I a Woman?' made clear how white upper-class women were treated and how black women were dealt with (Brunell & Burkett, 2002). Mainstream feminist leaders such as Elizabeth Cady Stanton succeeded in securing some rights, but the right to vote remained out of sight. In 1920 the 19th Amendment was passed, a major success for American feminists. This whole era is considered the first wave of feminism (Brunell & Burkett, 2002). Within the second wave, one category was radical feminists, who aimed to change institutions and society entirely, considering them inherently patriarchal. They saw society and institutions as hierarchical and filled with traditional power relationships, and wanted to make them non-hierarchical and anti-authoritarian. Finally, cultural or difference feminists were the last category; they believed that men and women are naturally different, and that this difference should be celebrated. They considered it condescending to expect women to be more like men (Brunell & Burkett, 2002). Finally, there was a third wave of feminism, which started in the mid-1990s. In this era feminists readopted many things that had been discarded in the second wave as forms of oppression by patriarchy. The notions of 'universal womanhood', the body, gender and sexuality were dismantled. Lipstick, high heels and other feminine products were used proudly, on the grounds that they could express subjectivity rather than being objects of sexual oppression. Women of the third wave stepped up as empowered females in the world. They reclaimed terms such as 'slut' and 'bitch', normalising them to deprive others of their use as weapons (Rampton, 2015). 
In Pakistan, the women's rights movement most visible today is the 'Aurat March'. In their 2012 study entitled Position of Pakistani Women in the 21st Century, Dr Jaweria Shahid and Khalid Manzoor Butt defined feminism as equality for women and freedom from gender discrimination in different spheres of life. Keeping this in mind, one may argue that feminism in Pakistan remains largely unrealised. Ever since independence, women in Pakistan have been struggling against exploitative treatment at the hands of their male counterparts, with the social, financial and political environment making it difficult for them to develop and fight for their rights. There has almost always been some backlash against women who want to empower themselves, be it by studying, working or even choosing a partner for themselves. Research has shown that NGOs and other institutions trying to assist oppressed women were accused of deceiving and 'brainwashing' them. Most of these women internalised their suffering, either out of fear or for lack of resources to turn to, while the relatively affluent, educated upper class simply turned a blind eye, hoping to preserve the status quo (Ovais, 2014). According to the literature, there are some forums, such as politics, where people have used feminism for different purposes, and this can now be seen through advances in media and education. For example, Fatima Jinnah fearlessly encouraged thousands of women to defend their welfare even before the establishment of Pakistan. Soon thereafter, Begum Raana Liaqat Ali Khan established the All Pakistan Women's Association (APWA) in 1949 with the purpose of promoting the moral, social and economic status of women across the country. The Women's Action Forum (WAF) was also established, in September 1981, and lobbied and campaigned in defence of women's rights and independence. 
However, the real wave of feminist struggle emerged in the 1980s, in response to the controversial implementation of the Hudood Ordinances under General Zia-ul-Haq. The law required rape victims to provide four witnesses in order for their claims to be accepted. WAF publicly expressed its opposition and raised public awareness of the unjust punishments imposed under the law. Women from all walks of life participated in the forum. They opposed the government in the media, protested on the streets, carried out educational campaigns in schools and put forward the famous slogan "men, money, mullahs and military". Not surprisingly, feminism was very popular during the two terms of Benazir Bhutto as prime minister (1988-1990 and 1993-1996). During this period, NGOs and focus groups gained considerable influence and urged the government to make changes (Sigol, 2016). Unfortunately, as Afiya Sherbano suggested in her study of the history of feminism in Pakistan (2009), when Nawaz Sharif took office in 1997 the momentum slowed, and women lost ground due to political conservatism and religious revival. In 1997, the Council of Islamic Ideology recommended the mandatory wearing of the burqa, and honour killings reached a new height. After General Musharraf championed the rights of women and encouraged them to participate in the media, sports, and other social and political activities, some of this lost ground was regained. The movement continues to this day, albeit with less intensity than before. Many laws favouring women have been passed, such as the Criminal Law Amendment (2004), the Protection of Women Act and sexual harassment legislation, alongside various condemnations of honour killings and the other vices Pakistani society faces (Ovais, 2014). The literature shows that WAF was a collaborative forum of different organisations and individuals with varying views on religion and tradition. 
The reason for this diversity was the need to bring together the greatest number of people to resist oppressive regimes. Many members were religious and conservative, and deeply rooted in the traditional family system. Talking about the body, sex and freedom of expression had not yet become part of WAF's official public agenda. Ironically, while WAF members avoided public debates about physical and sexual behaviour, the state and religious clergy were not so intimidated (Sigol, 2019). While WAF was not highlighting and discussing the sexual rights of women publicly, young women decided to take matters into their own hands. It was then that the new wave of the women's movement started. After this uprising of women, male-dominated beliefs started to shake. These movements took shape in 2018 as the 'Aurat March', held on 8th March. The marches are still criticised by many because of controversial slogans and their openness about issues that have always been considered private (Sigol, 2019). A great deal of research has been done on feminism internationally, but Pakistan does not have many research studies on the subject. The present study focused on young urban women and their understanding of feminism in the current age of technology. This study will contribute to the Pakistani literature on feminism and the Aurat March. --- Review of Literature The word 'feminism' has acquired many definitions over time and has a long history of women's struggle attached to it. The term 'feminism' was coined by the French philosopher Charles Fourier in 1832 (Henry, 2015). The present research studied various aspects, but mainly the effect of education level and social media usage on the understanding of feminism. It also examined how socioeconomic status affects participation in women's movements and feminist activities. --- The Role of Education Few studies have examined the relationship between education level and improved knowledge of feminism. 
Very few studies relevant to the current research were found in the literature. Firstly, it must be understood that an increase in education level does not guarantee an increase in learning and knowledge. According to the authors of Turning Learning Right Side Up: Putting Education Back on Track, education is more often linked with memorisation than with improving learning and knowledge. Educational institutions work with the aim of improving students' memory rather than their knowledge. Their purpose should not be to turn students into computers or robots by improving their memory; their sole purpose should be to improve knowledge and deliver better learning outcomes. Education and teachers should focus on what humans can do better than machines and work on the learning experience (Ackoff & Greenberg, 2008). The Education World Forum (2011), a conference attended by 75 countries, focused on education for economic success. The conference aimed to improve educational facilities and increase enrolment rates, all in an effort to lower poverty rates; but even after enrolment increased, conditions remained largely unchanged. The reason for this outcome was that improving enrolment rates did not improve learning, skills and knowledge. Hence, an increase in enrolment rates and education level does not necessarily mean that knowledge will also improve (King, 2011). Another study observed that when American colleges increased their student numbers, the quality of knowledge, and even of education, did not improve. Students were spending less than half the time studying that their predecessors did 50 years ago. It was also seen that in later years, when these students entered the workforce, employers complained of a lack of skilled employees; these employees lacked basic problem-solving and writing skills. 
Many students were simply pursuing higher degree levels rather than improving their skills and gaining knowledge (Bok, 2017). To see whether education level has any impact on knowledge of feminism, it must first be established whether education level improves knowledge at all. One study followed 2,200 US college students over four years, using tests designed to assess their analytical skills. According to Arum and Roksa, 45% of students showed no improvement in critical thinking, reasoning and writing after two years of college, and 37% showed no improvement after four years. The study concluded that understanding and educational skills do not improve much with an increased education level (Global Focus, 2017). --- Role of Social Media In this age of computers and technology, life has become easier than ever. Technology is present to help humanity in almost everything: transportation, education, health, finance and many other fields now rely on modern technology. Education and knowledge are no different from other aspects of life in this respect. In earlier times, knowledge was passed on through books, newspapers and pamphlets; paper was the main medium. Now technology has taken its place, and the fastest means of transmission is social media. Social media includes all social platforms, i.e. Facebook, Instagram, Twitter, Pinterest, Snapchat and many more. The question, however, is how much of the knowledge on these platforms is accurate. It also needs to be considered whether this knowledge conveys the same meaning and understanding as intended. Social media is mainly used for communication and knowledge sharing (Almeshal & Jasser, 2017). In a 2017 study on the impact of social media usability and knowledge collecting on the quality of knowledge transfer, Almeshal and Jasser conducted quantitative research on 426 Saudi participants, with a response rate of 70%. 
A t-test and correlation were used as analysis techniques, and a statistically significant impact of social media usage on collected knowledge and its quality was found. The research concluded that social media affects the quality of knowledge and that it can be difficult to rely on (Almeshal & Jasser, 2017). Turning to the main topic of this research, feminism: fortunately, some literature now exists on how social media can alter the image of feminism. As discussed above, social media does not always provide authentic knowledge, but it does help in sharing it. People can support any social cause online regardless of distance, and social media can drive change because it connects the world. Where it has negative impacts, it is doing good as well (Chittal, 2015). A study on the effect of the media's feminist approach on youth in Pakistani culture was conducted by Minhas et al. (2020). A survey was administered online and in person, yielding 150 responses from participants aged 16 to 30 years. Results clearly showed that 73.75% of respondents agreed that Pakistani media is promoting Western feminism, which is against our religion and culture (Minhas, 2020). In 2013, while people were gathering in Texas over an abortion bill, those who could not join started the hashtag #StandWithWendy on Twitter and protested online. Similarly, in 2014, hashtag feminism was trending at the top of Twitter. Another example of social media's power came when Ray Rice, a professional football player, was involved in a domestic violence scandal: many women suffering in bad marriages or relationships shared their own stories of domestic violence under the hashtag #WhyIStayed to show support (Chittal, 2015). Social media has changed many things lately, and the internet is flooded with such examples.
Hashtag activism is a term not widely known in our country, but it is used to pressure companies and politicians into making changes for the better. The following are the top three feminism-related hashtags that have trended on Twitter: • #BringBackOurGirls – reached 5.5 million • #YesAllWomen – reached 3.4 million • #HeForShe – reached 1.7 million (Chittal, 2015). Alongside these positive examples of social media, another hashtag retweeted internationally turned into an online movement: #MeToo, started in 2006 by the activist Tarana Burke, who used the phrase to raise awareness about abuse (Gill & Rahman, 2020). Although women are still underrepresented in the media industry, they are more active than men on social media platforms. In Pakistan, for example, women have more followers on Twitter and Google+ than men; yet even with larger followings, women's tweets are retweeted less because they do not use traditional hashtags (Powell & Moncino, 2018). --- Socio-Economic Status To discuss whether socio-economic status has any impact on participation in social movements, it is first necessary to understand what socio-economic status refers to. Socio-economic status is the combination of education level, occupation, and income, and people of different socio-economic classes experience power differences in many fields (American Psychological Association, n.d.). A 2010 study on protest participation utilized data from American protests of the 1990s. All participants (N = 2,517) had taken part in a protest, and their data were used to test hypotheses about the patterns protesters exhibited. Logistic regression and t-tests were used to analyze the results, which showed that most participants were young, well educated, unmarried, childless, and from higher-income families.
The study also concluded that these participants did not have any major responsibilities and were zealous to participate (Petrie, 2010). Although the question in the spotlight is whether socio-economic status affects participation in social movements, not much literature was found in this area; still, some studies suggest that it does. A study conducted in Hong Kong (2015) surveyed 134 college students, all aged 18 or above and from different institutes. It concluded through regression analysis that parents offer their children only limited support for participating in movements, that parental support is influenced by the family's socio-economic status, and that people of the same status encourage each other to participate in such events (Chan, 2015). Another survey study on civic engagement showed that volunteering in movements is highly influenced by socio-economic status: in the 2000s, 85% of people belonging to the higher socio-economic class tended to join movements, compared with 73% of the lower class. Gaby (2016) also noted that these percentages have increased over time, as in the 1970s 79% of higher-class people volunteered compared with 65% of the lower class. For this study, up to 350 students per school were selected from different schools in the U.S. (Gaby, 2016). A European study highlighted the influence of socio-economic status on participation, showing a clear impact of income; the author compared elections and referendums in the Netherlands and Ireland (Drewer, 2017). There are essentially two theories about people's participation in protests: one says that people with lower socio-economic status (SES) tend to participate more, while the other holds that the higher socio-economic class participates more. Let us first look at the perspective that people with lower SES tend to protest.
When the state is not fulfilling people's needs and they face deprivation of resources and money, they raise their voices against it; this is grievance theory, which applies when material conditions become unbearable. A related theory of relative deprivation states that when people feel unsatisfied and frustrated by their conditions, they are prone to protest (Wu & Chang, 2017). The same dynamics operate in participation in social movements and in what encourages people to join. Turning to what leads the higher class to take part in social events, the theory of social change holds that when people are free of financial problems and no longer worried about them, they are motivated to pursue non-material needs (Wu & Chang, 2017). A study in Taiwan (2017) used data collected through the World Values Survey (WVS) from March to June 2012, with a sample size of 1,238. Regression analysis was run to check whether higher-SES individuals participate in protests more than lower-SES individuals. Statistically, it showed that as SES increases, participation increases by 52.8%, supporting the theory of social change rather than the grievance and relative deprivation theories (Wu & Chang, 2017). --- Theoretical Framework The theory supporting this study is standpoint theory, proposed by Dorothy Smith in 1987. As the name suggests, standpoint theory states that people form their points of view according to their position in society: one person's viewpoint will differ from another's because of their specific social status. This does not mean that we cannot see someone else's point of view; rather, the theory makes three main claims (Smith, 1992): 1. No one can have complete, objective knowledge. 2. No two people have the exact same standpoint. 3.
One must not take one's own standpoint for granted. Smith summarized her theory in very simple terms through these points. It is accepted that nobody has complete knowledge or can be fully objective; being human, everyone differs from everyone else, which means two standpoints can never be exact copies. The third point holds that whatever category of society a person belongs to, they occupy some position in it and must never ignore their own standpoint. --- What is the Impact of Feminism on Young Urban Women? Feminism mainly refers to the equality of women in all spheres of life, and it has a huge impact on women's lives. The current study examined different perspectives to gauge the understanding of feminism and how it is affecting women's lives. Young urban women of Pakistan are being enlightened about their rights and are raising their voices against gender discrimination. Feminism is improving women's condition, as their pleas are finally being heard by society. On average, 53% of the participants, more than half, agreed that feminism has a positive impact on women's lives. Since feminism is a relatively new term for Pakistan compared with the rest of the world, the remaining participants did not agree with the new concept of women's equality in all spheres. Feminism includes many perspectives, and the open-ended questions focused on its main ones. Decision making is an essential part of life, and to study who should hold that power and why, participants were asked who the decision maker of the house should be. 72.12% of participants responded that it should be both partners, 19.69% said that men should make the decisions, and only 6.36% said women; a further 1.81% said it should not depend on gender at all. According to Caprino (2016), men and women have different capabilities and decision-making qualities.
Men tend to take risks whether or not success is assured, while women think things through to reach assured success, whether small or big; having both genders involved in the decision-making process can help make the best of any situation (Caprino, 2016). Figure 01 represents the distribution of responses on who should be the decision maker of the house. Since a woman's identity matters greatly and has a solid impact on her life, the next question in the qualitative part was whether women should change their surname after marriage. 37.27% of participants responded that it should be entirely the woman's choice, 49.39% were not in favour, and 11.81% were in favour of women changing their surname after marriage. This question must also be considered in the light of religion, as Pakistan is an Islamic country. In a video uploaded on YouTube, Mufti Menk (2020) explained that there is no compulsion in Islam to change one's surname after marriage: a woman carries her father's name as her surname because it represents her identity, family background, and orientation, and by changing her surname after marriage she would be leaving that identity behind; this position is supported by the literature (Menk, 2020). The last question of the study asked how these marches are contributing to shaping the identity of young women. 33.36% said positively, 42.72% said negatively, 5.15% said they are not contributing at all, and 10.6% said they are contributing both ways. As mentioned in the discussion above, feminism is not well represented in the Aurat March, and extreme language is being used (Khatri, 2020). Most participants therefore responded that it is affecting them negatively, because it blurs the thin line between assertive and aggressive behaviour and confuses immature minds; with social media added to the marches, identities are being shaped under societal pressure (Julha, 2019).
It was seen that education level did not affect the understanding of feminism. 46% of the participants had a graduate level of education, 43% had an intermediate level, and 7% had a postgraduate level, yet all levels showed varying understandings of feminism. The literature has likewise shown that the quality of knowledge is not determined by education level: one person's education level can be higher than another's while the quality of their knowledge is lower (Global Focus, 2017). --- How Social Media Usage Affects the Understanding of Feminism? Most responses to this question indicated that social media has a positive effect on the understanding of feminism: 39.69% of participants stated that it helps increase awareness of women's rights, alleviate oppression, and enable women to raise their voices and share their opinions. 36.06% of responses described negative effects, explaining that knowledge is uploaded without authentic sources, that people do not do their own research before believing it, and that this can be dangerous for uneducated people whose knowledge is limited. These responses also pointed to unauthorized information, poor representation of ideas, promotion of radicalism, and misleading content (Fitzpatrick, 2018). They also raised a valid point: Pakistan has a patriarchal society, so people tend to oppose women and are inclined towards the negative images presented on social media. 22.42% of respondents stated that the effect depends on the content, the viewer's perspective, the portrayal of images, and naive influencers who do not realize the power they hold. It is true that social media presents many perspectives and kinds of content; the outcome depends on how each party presents information to viewers and how viewers perceive it.
According to the literature, social media does have issues with the authenticity of knowledge, so everything depends on the source of the content, the content itself, and the writer (Ismail & Latif, 2013). Regarding socio-economic status and participation in the Aurat March, respondents who believed that a specific class participates gave reasons such as: the lower class has mobility issues; other classes lack time to participate because they are working to make ends meet; some participate for fame; the lower class has accepted its situation; the march observes International Women's Day; and participants seek to impose their beliefs on everyone. 20.3% of participants did not favour this stance, holding that everyone participates in the Aurat March: participation does not depend on socioeconomic status, the march is not only for a specific class, all women participate to support equality, it also depends on family background, and transgender people participate too. A study has shown that most activists participating in protests belong to the higher socioeconomic class (Tygart & Holt, 1971). The present results likewise showed that a specific socioeconomic class participates in the Aurat March, with respondents explaining that the activists belong to the elite class. Participants were also asked their opinion of the Aurat March: 37.27% gave a positive opinion, 34.24% a negative one, and 25.75% expressed both. Only 15.15% agreed that the Aurat March represents feminism properly. The march was portrayed in the style of Western media and many aspects were not shown; still, respondents noted that it helped women voice their opinions, advocated women's rights, sought to eliminate violence, highlighted women's issues, and gave women a platform to raise their voices. 9.09% of participants had mixed feelings, explaining that it depends on what one sees in the media: the slogans are extreme, but the idea behind the march is good, and with work on its narratives it could be more representative.
One participant also noted that while women undoubtedly face discrimination, this does not mean that every man discriminates against women. The main purpose of the present study was to measure young urban women's understanding of feminism with respect to education level; it also captured their opinions about the purpose of the Aurat March and about social media usage. Data from 330 participants were analyzed through one-way ANOVA and descriptive statistics to explore opinions. --- Hypothesis H1: There will be no significant difference among different education levels and their impact on the understanding of feminism. Findings clearly showed that there was no statistically significant difference among education levels in terms of understanding and knowledge of feminism, F(3, 329) = .988, p = .399. The results of the computed variables showed that level of education does not guarantee quality knowledge about feminism. The literature supports this: in a study of 2,200 US college students over four years, tests were designed to assess the students' analytical skills, and according to Arum and Roksa (2010), 45% of students showed no improvement in critical thinking, reasoning, and writing after two years of college. The study concluded that understanding and educational skills do not improve much with increased education level, which is also supported by existing literature (Global Focus, 2017). --- Conclusions The present study focused on young women's understanding of feminism based on their education level and socio-economic status. It also covered the history of feminism and how it is developing in Pakistan through the Aurat March. Many studies have been done on feminism, but they have lacked an account of how the understanding of feminism is affected by education and socioeconomic status.
This study met its objectives, finding that the majority of urban women have a fair understanding of the concept of feminism. The results clearly showed that education does not determine the quantity or quality of an individual's knowledge, while socioeconomic status does contribute to the understanding of such social phenomena (Manstead, 2018). The current study can contribute to the literature on feminism and the factors influencing its understanding. --- Exploring the Concept of Feminism Among Young Urban Women This study explored the concept of feminism among young urban women and their participation in women's movements on the basis of education, social media usage, and socioeconomic status. --- Objectives • To find out the perceptions of young urban women about feminism based on socioeconomic status, education, and social media influence. • To find out the role of the Aurat March movement in the concept of feminism. • To find out the role of women's socio-economic status in participation in the Aurat March. --- Research Questions 1. What is the impact of feminism on young urban women? 2. How does education level affect the understanding of feminism? 3. How does social media usage affect the understanding of feminism? 4. How does socioeconomic status affect participation in the Aurat March? 5. How is the Aurat March affecting the concept of feminism? --- Hypothesis H1: There will be no significant difference among different education levels and their impact on the understanding of feminism. --- Method The present research explores the understanding of feminism among young urban women, examining the effect of education, social media usage, and socioeconomic status on the understanding of feminism and on participation in the Aurat March. A quantitative survey method was used to conduct this research.
--- Sample A purposive sampling technique was used to collect data for the present study, as it focused on young urban women. All participants were 15 to 24 years old and from the city of Lahore. --- Procedure Objectives were selected for the study, which led to the formulation of the research questions and hypothesis. A semi-structured questionnaire was constructed based on the literature as well as --- Abstract The Aurat March is held every year to highlight the issues of women. The present research was conducted to see how much women know about feminism and about the Aurat March. A quantitative survey design was used, and a semi-structured questionnaire was developed to collect data from 350 young women of Lahore, Pakistan, in the 15-to-24-year age range. Standpoint theory by Dorothy E. Smith was taken as the theoretical framework for interpreting the results. The research covered the concept of feminism, its growth over time, how the Aurat March has played a role in the development of feminism, and how it is contributing to shaping the identity of young women. Content analysis was used for the open-ended questions, and one-way ANOVA along with descriptive statistics was used for the closed-ended ones. Results showed that education plays no role in increasing knowledge or its quality. This study will help fill gaps in the Pakistani literature and serve as a base for future studies.
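As an aside, the one-way ANOVA used for the closed-ended responses can be illustrated with `scipy.stats.f_oneway`. This is a sketch only: the score lists below are invented placeholders for feminism-knowledge scores at four education levels, not the study's data.

```python
# Hedged sketch: invented placeholder scores, not the study's dataset.
from scipy import stats

# Hypothetical knowledge scores grouped by education level
matric       = [12, 14, 11, 15, 13]
intermediate = [13, 12, 14, 15, 12]
graduate     = [14, 13, 12, 15, 14]
postgraduate = [13, 15, 12, 14, 13]

f_stat, p_value = stats.f_oneway(matric, intermediate, graduate, postgraduate)

# A p-value above .05 retains the null hypothesis of equal group means,
# the same decision the study reports with F(3, 329) = .988, p = .399.
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

With real data, each list would hold one group's composite knowledge scores; a non-significant F supports H1, that education level makes no difference.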
INTRODUCTION In recent years, Chile has seen nearly a million migrants enter its borders, coming mainly from Latin America and the Caribbean; factors such as political stability, security, and constant economic growth over recent decades have turned Chile into a pole of attraction for people seeking better employment and development opportunities (Godoy, 2019). This human movement has brought along thousands of children and adolescents, who have been integrated into the Chilean school system and now represent a significant share of municipal school enrollment in the districts where most immigrants live. In Santiago, the capital, the percentage of foreign students enrolled in public schools rose markedly in just three years, from 8.9% to 15.5% in 2017 (Ministerio de Educación, Centro de Estudios, 2018), modifying the cultural and ethnic composition of classrooms. Regarding studies on migrant adolescents in contexts of South-South movement, that is, massive displacement of people from developing countries to others in the same condition but with better economic and human development indices, Chile constitutes a case in point, and studies by Alvites and Jiménez (2011) and Tijoux (2013) have begun to shed light on the problems experienced by migrant students due to the stress of the acculturation process (Berry, 1997; Mera-Lemp et al., 2020). The immigrant paradox theory suggests that this movement may affect school performance (Suárez-Orozco et al., 2009), and the persistent gender differences to the detriment of girls in the Latin American context (Alfaro et al., 2016) may also have consequences for different psychological constructs.
In the case of young foreigners now living in a country other than their own, behavior may be shaped by the sum of environmental, behavioral, and personal factors, in direct interaction with the degree of stress involved in moving to a new geographical place with new customs and values, as they adapt to the reality of a different community (Berry, 1997; Prilleltensky and Prilleltensky, 2007). In Chile, migrant adolescents have shown depression, anxiety, and nostalgia for their place of origin (Villacieros, 2020), which may work against the emotional and sociological resetting required to adapt to a new society (Mera-Lemp et al., 2020). Adolescence, whether in natives or migrants, is traditionally considered a conflictive stage that includes questions and difficulties inherent to development and gender (Oliva et al., 2010). Among the psychological resources related to good psychological adjustment and social integration of adolescents in the school experience are the following constructs: self-concept, self-efficacy, and subjective well-being (Pajares, 1996; Martínez-Antón et al., 2007; Jiménez and Hidalgo, 2016). Self-concept has been widely studied in its multidimensionality (Shavelson et al., 1976; Valentine et al., 2004), since, as Shavelson et al. (1976) indicate, this construct of self-perception is the result of interaction and experience with others on academic, emotional, and social levels, among others. In this sense, it can be assumed that the school experience in a new sociocultural and educational setting puts the self-concept of migrant adolescents into play (Goffman and Guinsberg, 1970), since studies carried out in Spain show that students with low socialization and self-concept obtained low academic performance (León del Barco et al., 2007; Plangger, 2015).
In the adolescent population, academic self-concept is one of the most relevant personal characteristics for explaining, for instance, subjective well-being (Huebner, 1991). The studies by McCullough et al. (2000) concluded that academic self-concept was the main predictor of well-being and that measuring it was a good way to understand the well-being of adolescents. This suggests that the adolescent's self-concept plays a fundamental role in self-assessment, as well as in psychological well-being and the affirmation of one's own identity (Harter, 1998; Luján, 2002). To the best of our knowledge, there are currently no studies in Chile measuring academic self-concept indices in either migrant or native adolescents. It is nonetheless a relevant issue, since authors such as Hay et al. (1998) affirm that high self-concept is positively related to performance, integration, and relationships in the school context, while it correlates negatively with anxiety. In terms of gender differences in academic self-concept, the study by Costa and Tabernero (2012) did not find statistically significant differences; however, Padilla Carmona et al. (2010) showed that girls surpass boys in academic self-concept, while in Chile, studies such as that of Gálvez-Nieto et al. (2017) found no statistically significant differences. Along the same line, one of the most relevant challenges faced by migrant adolescents is adaptation to a school setting different from that of their country of origin, where self-efficacy, understood as an individual's perceived capacity to successfully face situations of daily life (Bandura, 1986), plays a crucial role in the inclusion and interaction of individuals, in this case migrants, who join the new group (Briones et al., 2005). According to Briones et al.
(2005), self-efficacy in the experience of migrant adolescents shows a very positive relationship with the degree of satisfaction with achievement. Likewise, students who report higher levels of social self-efficacy also report greater comfort in environments oriented toward sociocultural interaction and better integration skills. Studies such as that of Juárez-Centeno (2018) indicate that migrant families with a low level of self-efficacy experience higher levels of depression, which affects adolescents' behavior. Studies carried out in Colombia, such as that of Gómez-Garzón (2018) with people who are victims of forced displacement, suggest a positive relationship between self-efficacy and other constructs such as belonging, inclusion, and social well-being. The study of Fan and Mak (1998) in Australia likewise found that migrant students show lower levels of self-efficacy than natives. In Chile, however, there is still no comparative research on the self-efficacy of local adolescents versus migrants. In terms of gender, contributions by Blanco Vega et al. (2012) in the Latin American context suggest that boys tend to have higher indexes of self-efficacy; similar results were obtained by Junge and Dretzke (1995) and Huang (2013). Quality of life, an important motivational factor in migratory processes, has been conceptualized and measured in different ways. One of them is the concept of subjective well-being, which is positioned within the hedonic tradition and serves as an approach to individuals' satisfaction and happiness with their own lives (Diener, 1994; Cummins and Cahill, 2000).
In the context of migration, studies suggest that factors such as time of residence, legal status, size of the social network, and coverage of basic needs (Basabe et al., 2009) are positively related to subjective well-being, while factors such as discrimination (Murillo and Molero, 2012) are negatively related. Along the same lines, studies such as that of Panzeri (2018) point out the importance of post-migration subjective well-being as a valid measure related to future labor productivity, mental health, and social integration, viewed from the otherness and needs of the migrants themselves. Regarding the perception of subjective well-being in native versus migrant populations, Bilbao et al. (2007), Herrero et al. (2012), Muñoz and Alonso (2016), and Hendriks and Bartram (2019) have found that migrants tend to exhibit lower levels; similar results have been found among adolescents in Chile by Alfaro et al. (2016). In this regard, gender differences favoring men in the Ibero-American context could lead to the assumption that the migration process could have a negative effect on girls in terms of some psychological constructs. This is evidenced by studies by Oyanedel et al. (2015) and Alfaro et al. (2016) in Chile, where boys obtained higher scores on this scale. Similar results were obtained in Spain by González-Carrasco et al. (2017), who concluded that the homeostatic system of girls is probably more sensitive to external variations and that there is a relationship between the physical and cognitive changes that girls undergo and their specific pattern of subjective well-being. Studies on subjective well-being in the adolescent population have also been carried out under cross-cultural formulations, such as that of Casas et al.
(2015) with adolescents from Latin-language countries (Spain, Romania, Brazil, and Chile), in which adolescents from the two Latin American countries had the lowest subjective well-being scores. However, studies comparing subjective well-being in migrants and locals are still needed to provide key information to different actors and thus guide decision-making in key sectors, positioning well-being at the center of attention in the development of public policies and as part of strategies to improve quality of life (Lucas and Diener, 2008). Regarding the constructs of self-concept, self-efficacy, and subjective well-being, the literature has reported relations among them (García-Fernández et al., 2016). For example, all three operate at the level of self-perceptions in social, emotional, and behavioral terms (Bandura, 1992; Lent et al., 1997; Casas et al., 2007). Similarly, these three constructs are sensitive to context and are built or modified according to lived experiences and social interactions, so they are not stable but dynamic (Bandura, 1986; Shavelson and Marsh, 1986). In the educational context, self-efficacy is linked to confidence, since students evaluate this capacity when solving problems, while self-concept relates to the perceived personal competence in executing a task (Bong and Skaalvik, 2003); both constructs emerge as personal competencies in adolescence and serve as indicators of positive development (Oliva et al., 2010). Subjective well-being, in turn, is positively related to self-concept and self-efficacy in the educational setting (Gómez et al., 2007; Reyes-Jarquín and Hernández-Pozo, 2011; Malo et al., 2011; Chavarría and Barra, 2014).
As explained, various studies show a positive correlation between subjective well-being and variables such as self-concept and self-efficacy (Huebner, 1991; Harter, 1998; McCullough et al., 2000; Luján, 2002; Gutiérrez and Gonçalves, 2013); however, studies of this nature are still lacking in Chile. In the same line, the literature has reported that self-efficacy plays a mediational role between subjective well-being and other constructs such as meaning in life, life satisfaction, and even personality traits such as extraversion and openness (Krok and Gerymski, 2019). To the best of our knowledge, the relations between self-concept and subjective well-being have been widely described, but no contribution has been made regarding the mediational role of self-efficacy between these two constructs (see Figure 1). For this purpose, a conceptual model of self-efficacy as mediator of the relationship between self-concept and subjective well-being is proposed. Considering the information above, there is a gap in the literature on Chile's school context regarding differences between migrant and native students in the relationship between self-concept, self-efficacy, and subjective well-being. One might think that levels of self-concept and self-efficacy would have an effect on the subjective well-being exhibited by these students, but there is little indication of how this relationship might differ between natives and migrants.
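A mediation model of this kind is commonly estimated with two ordinary least-squares regressions. The following is a rough numerical sketch with simulated data (the variables and effect sizes are hypothetical assumptions, not the study's), where X stands for self-concept, M for self-efficacy, and Y for subjective well-being:

```python
# Hedged sketch with simulated data: X = self-concept, M = self-efficacy,
# Y = subjective well-being. All coefficients below are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 400
x = rng.normal(size=n)                                 # self-concept
m = 0.5 * x + rng.normal(scale=0.8, size=n)            # self-efficacy
y = 0.4 * m + 0.2 * x + rng.normal(scale=0.8, size=n)  # well-being

def ols(predictors, target):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(target)), predictors])
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = ols(x.reshape(-1, 1), m)[1]           # path a: X -> M
coefs = ols(np.column_stack([m, x]), y)
b, direct = coefs[1], coefs[2]            # path b (M -> Y) and direct effect c'
total = ols(x.reshape(-1, 1), y)[1]       # total effect c of X on Y

indirect = a * b                          # mediated (indirect) effect
# In OLS the decomposition c = c' + a*b holds exactly for the same sample.
print(f"total = {total:.3f}, direct = {direct:.3f}, indirect = {indirect:.3f}")
```

A non-zero indirect effect (usually tested with bootstrapped confidence intervals in applied work) is what would indicate that self-efficacy mediates the self-concept/well-being link.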
Facing a new educational context, adapting to new models, new relationships and, especially, a whole new society could put to the test all the cognitive and affective areas that would influence the global satisfaction (Caprara et al., 2006) of migrant students; hence a different configuration of the relationships between self-concept, self-efficacy, and subjective well-being (with respect to natives) could arise, driven not only by migrant status but also by gender. In this respect, works by Oyanedel et al. (2015) and Alfaro et al. (2016) in Chile evidenced gender differences in terms of subjective well-being and satisfaction with life, with males scoring higher on these constructs. Similar results, with statistically significant differences, were obtained by González-Carrasco et al. (2017) in Spain. However, studies comparing migrant adolescents with locals in terms of subjective well-being as well as self-concept, self-efficacy, and gender are, to the best of our knowledge, not available. In this new scenario, where Chile is a recipient of migrants, this study aims to contribute to the investigation of the relationships of academic self-concept and self-efficacy with the subjective well-being of migrant adolescents versus the local population, also including gender differences. The importance lies in the fact that both groups will coexist and form part of the productive and intellectual assets of Chile, where a clear understanding of behavioral aspects of these groups would guide the efforts of central and local governments in improving public policies for the optimal personal development of all the country's adolescents. Along the same lines, this research may serve as an input to improve the school experience for both local and migrant students. The objective of this research is to compare levels of self-efficacy, academic self-concept, and subjective well-being between migrant adolescents and local adolescents. 
This research also aims at exploring gender differences. Additionally, the objective is to analyze how the variables of general self-efficacy and academic self-concept are related, as well as to observe their effect on subjective well-being for each of the study subsamples (migrant and local adolescents). --- MATERIALS AND METHODS --- Sample The present study is quantitative, with a cross-sectional design. The sample was made up of adolescent students in the 7th and 8th grades of the Chilean educational system, from four municipal public educational centers located in the district of Santiago, Metropolitan Region of Chile. Two criteria governed the selection of schools: convenience and percentage of migrant enrollment (not less than 20%). The sample size was calculated assuming a 5% error and a 95% confidence level. The sample comprised 406 students, distributed evenly among the four participating establishments. 56.65% of the students were women and 43.35% were men; 45.81% were in 7th grade and 54.19% in 8th grade; ages ranged from 12 to 16 years, with an average of 13.36 years (SD = 0.96). Regarding the key variable of the study (native or migrant condition), the sample consisted of 55.91% students born in Chile and 44.09% migrants, with similar percentages on the school vulnerability index. Of the migrant students, 28.09% were from Peru, 21.35% from Venezuela, 18.54% from Colombia, and 32.02% from other Latin American countries and the rest of the world. The average residence time of migrant students in Chile was 2.59 years (SD = 1.68). For the purposes of this study, Chilean-born students were considered Chilean, and foreign students with at least 1 year and a maximum of 5 years in Chile were considered migrants. 
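The sample-size criterion stated above (5% error, 95% confidence) matches the standard normal-approximation formula for estimating a proportion. A minimal sketch, assuming the conservative worst case p = 0.5 (the paper does not state its exact calculation):

```python
import math

def sample_size_proportion(margin: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum n to estimate a proportion within +/- margin at ~95% confidence.

    Uses the normal-approximation formula n = z^2 * p * (1 - p) / margin^2,
    with the conservative worst case p = 0.5.
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size_proportion())  # 385
```

Under these assumptions, the minimum is 385 respondents, so the reported n = 406 exceeds the criterion.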
--- Instruments --- AF5 Academic Self-Concept The AF5 scale (García and Musitu, 1999) is an improved version of the AFA Scale (Form A Self-Concept). The AF5 was developed taking self-concept as a multidimensional construct, based on the work of Shavelson and Marsh (1986). The scale was validated in Chile by Riquelme Mella and Riquelme Bravo (2011), showing validity and internal consistency. In the present study it showed high internal consistency (Cronbach's alpha = 0.824). --- General Self-Efficacy Scale The General Self-Efficacy Scale, developed by Schwarzer and Jerusalem (1995), measures individuals' perception of their ability to cope with daily situations under stressful circumstances. In Chile, Cid et al. (2010) demonstrated its internal consistency, obtaining a high Cronbach's alpha coefficient, similar to results obtained in other Spanish-speaking countries. In this study, the observed Cronbach's alpha was 0.859. --- Personal Well-Being Index - School Children 7 (PWI-SC7) The Personal Well-Being Index (PWI) was developed and validated by Cummins and Lau (2005). The instrument was later adapted for child and adolescent populations, generating the seven-question PWI-SC7 Scale, which has been validated in Chile by Alfaro et al. (2013). The instrument uses 11-point response scales, ranging from Strongly Disagree (0) to Strongly Agree (10). In the different applications carried out in Chile, the instrument has shown a good factorial fit (one dimension); in this study it showed high internal consistency, with a Cronbach's alpha of 0.864. 
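The internal consistencies reported for the three scales follow the standard Cronbach's alpha formula, which can be computed from a raw item-score matrix. A minimal sketch; the tiny item matrix in the example is illustrative only, not the study's data:

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly parallel items yield the maximum alpha of 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

Values around 0.82-0.86, as reported for the AF5, General Self-Efficacy, and PWI-SC7 scales, are conventionally read as high internal consistency.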
In addition to these variables, participants were also asked about different sociodemographic variables: age (in years), sex (0 = boy; 1 = girl), country of birth, time spent in Chile if not born in this country, and parents' country of birth. --- Procedure The self-report questionnaire was administered to the students after the corresponding permits had been obtained from the directors of the educational centers, with subsequent authorization to administer it at agreed times. In addition, an informed consent form was given to the students and their tutors. The application took place during adolescents' regular school hours in the 2018 school period (August-November). The material was delivered to the students, the instructions were given, and students were allowed the time they needed to respond. In each application, a responsible teacher and one or more researchers were present in the classroom. Schools in the Metropolitan Region of Chile were contacted by convenience, considering the criteria established for the study (having at least 20% migrant students). The only exclusion criterion was command of the Spanish language: students from non-Spanish-speaking countries who had not yet mastered Spanish well enough to understand the instructions and content of the instrument were left out. --- Analysis of Data First, descriptive analyses were performed for the total scores of each scale (for the total sample and for the native/migrant and boy/girl subsamples). Subsequently, independent-samples t-tests were performed for each of these variables, with the aim of comparing the mean scores obtained for each subsample. 
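The group-comparison procedure, Levene's test to decide whether equal variances can be assumed, followed by an independent-samples t-test, can be sketched with SciPy. The group means, SDs, and sizes below are illustrative only, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
natives = rng.normal(loc=70, scale=12, size=227)    # hypothetical scale scores
migrants = rng.normal(loc=74, scale=12, size=179)

# Levene's test: if p > 0.05, equality of variances is assumed and the pooled
# t-test is used; otherwise Welch's t-test (equal_var=False) is used instead.
_, levene_p = stats.levene(natives, migrants)
t_stat, t_p = stats.ttest_ind(natives, migrants, equal_var=levene_p > 0.05)

print(f"Levene p = {levene_p:.3f}; t = {t_stat:.2f}, p = {t_p:.4f}")
```

This mirrors the reported workflow in which each comparison (natives vs. migrants, girls vs. boys) is preceded by Levene's test on the scale totals.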
Subsequently, two multivariate models of simple mediation were constructed (Hayes, 2018), one for the subsample of natives and another for the subsample of migrants, which assumed students' subjective well-being (PWI-SC7) as the dependent variable, academic self-concept as the independent variable, and general self-efficacy as the mediating variable. In both models, gender (0 = boys; 1 = girls) was considered a control variable. A BCa bootstrapped CI based on 5,000 samples was used to calculate the confidence intervals of all the models. The statistical analyses were carried out with IBM-SPSS v.24 and the modeling tool PROCESS for SPSS v2.10 (Hayes, 2018). --- RESULTS --- Descriptive and Comparative Results Table 1 presents the mean and standard deviation of the scores obtained on the academic self-concept, general self-efficacy, and subjective well-being scales, for the total sample and the subsamples of migrants and natives. Independent-samples t-tests are also presented to establish differences between these groups. For the three comparisons, equality of variances was assumed, based on Levene's test. The differences were statistically significant (p < 0.05) for academic self-concept and general self-efficacy (with the migrant subgroup obtaining the higher scores). No statistically significant differences were observed between these groups for subjective well-being. Table 2 presents the mean and standard deviation of the scores obtained on the academic self-concept, general self-efficacy, and subjective well-being scales, for the total sample and the subsamples of girls and boys. Independent-samples t-tests are also presented to establish differences between these groups. Equality of variances was assumed for general self-efficacy and subjective well-being, while different variances were assumed for academic self-concept (based on Levene's test). 
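The simple mediation models described in the analysis plan (self-concept → self-efficacy → well-being, with gender as a covariate) can be sketched without PROCESS by bootstrapping the indirect effect a·b from two least-squares regressions. A percentile interval is used below for brevity, whereas PROCESS reports bias-corrected (BCa) intervals, and the simulated data are illustrative only:

```python
import numpy as np

def indirect_effect_ci(x, m, y, g, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in a simple
    mediation model (x -> m -> y) with covariate g, fit by least squares."""
    x, m, y, g = (np.asarray(v, dtype=float) for v in (x, m, y, g))
    n = len(y)
    rng = np.random.default_rng(seed)
    ones = np.ones(n)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        # a path: m ~ x + g (coefficient of x)
        Xa = np.column_stack([ones, x[idx], g[idx]])
        a = np.linalg.lstsq(Xa, m[idx], rcond=None)[0][1]
        # b path: y ~ x + m + g (coefficient of m, controlling for x)
        Xb = np.column_stack([ones, x[idx], m[idx], g[idx]])
        b = np.linalg.lstsq(Xb, y[idx], rcond=None)[0][2]
        ab[i] = a * b
    return np.percentile(ab, [2.5, 97.5])

# Illustrative data with a true indirect effect of 0.5 * 0.6 = 0.3
rng = np.random.default_rng(1)
n = 300
g = rng.integers(0, 2, n).astype(float)
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.6 * m + 0.2 * x + rng.normal(size=n)
lo, hi = indirect_effect_ci(x, m, y, g, n_boot=2000)
print(f"95% CI for indirect effect: [{lo:.2f}, {hi:.2f}]")  # excludes 0
```

Mediation is then read off the effect decomposition: the total effect equals the direct effect plus this indirect effect, with "total" mediation indicated when only the indirect effect's interval excludes zero.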
The differences between the mean scores were statistically significant (p < 0.05) only for general self-efficacy, where boys had higher scores than girls. --- Mediation Analyses The results of the simple mediation models are presented below; these considered subjective well-being as the dependent variable, academic self-concept as the independent variable, general self-efficacy as the mediating variable, and gender as a control variable. Model 1 considered the subsample of natives, while model 2 considered the subsample of migrants. The results of the regression analyses that make up mediation model 1 are presented in Table 3, while the general diagram of this model is presented in Figure 2. A partial mediation can be observed in this model, where the total effect (TE: b = 0.41, 95% BCa CI [0.30, 0.52]), the direct effect (DE: b = 0.31, 95% BCa CI [0.20, 0.43]), and the indirect effect (IE: b = 0.10, 95% BCa CI [0.05, 0.17]) of academic self-concept on subjective well-being were statistically significant. The results of the regression analyses that make up mediation model 2 are presented in Table 4, while the general diagram of this model is presented in Figure 3. A total mediation can be observed in this model, where the total effect (TE: b = 0.22, 95% BCa CI [0.09, 0.36]) and the indirect effect (IE: b = 0.13, 95% BCa CI [0.07, 0.21]) of academic self-concept on subjective well-being were statistically significant, while the direct effect was not (DE: b = 0.10, 95% BCa CI [-0.04, 0.24]). --- DISCUSSION Regarding the study variables, statistically significant differences were found in academic self-concept and general self-efficacy, but not in subjective well-being. In relation to the higher scores obtained by migrant students in academic self-concept, the results contradict those found in Spain by León del Barco et al. 
(2007) and Plangger (2015), as well as results obtained in Israel (Ullman and Tatar, 2001) and in Greece (Giavrimis et al., 2003). The explanation for these results could lie in the fact that migrant students come from family environments where the importance of study as a tool for social mobility has been understood, a situation exacerbated in migrant families as a result of arriving in a country in search of opportunities and substantial changes in their socioeconomic situation (Aragonés and Salgado, 2011). The cultural weight that migrant families bring (Shershneva and Basabe, 2012), in terms of intentions of progress and substantial improvements in quality of life, could be generating in their homes a discourse favorable to study and to trust in the capacities of adolescents, reflected in students feeling appreciated by their teachers and working hard in class. Feeling good at an academic level is reflected in the grades obtained by migrant students, where self-concept is built in interaction with teachers and classmates who positively reinforce these attitudes, resulting in a student who feels competent and works to achieve his or her objectives (Gargallo et al., 2011). In relation to general self-efficacy, an important contribution of this study is the differences found in the total score of the scale. The results contradict literature suggesting that migration could show a negative correlation with self-efficacy (Fan and Mak, 1998; Briones et al., 2005; Gómez-Garzón, 2018; Juárez-Centeno, 2018). In this sense, the family environment and somewhat more adverse economic situations could be generating more independent adolescents with a greater sense of responsibility, affecting their perceptions of the capacities they have to solve problems and the range of possibilities they have to face difficulties. 
Immigration can bring with it worldviews other than local ones that could be beneficial in terms of innovation, flexibility, and other soft skills. This is where transculturation comes into play (González, 2009), since this cultural fusion recovers the best of migrants and natives (Carreón et al., 2016), which could be generating positive changes in the Chilean educational system by enrolling more competitive students. In connection with subjective well-being, studies carried out in other countries by Bilbao et al. (2007), Herrero et al. (2012), Muñoz and Alonso (2016), and Hendriks and Bartram (2019) have found that migrants evidenced lower levels of subjective well-being than their native counterparts. Similar results were even found in Chile by Alfaro et al. (2016). The present research found differences favoring natives; however, they were not statistically significant. These results could be attributed to the perception of material well-being, health, achievements, and the future being managed well by both migrant and native students, as a result of the equal integration of both groups into the health and educational systems, with possibilities of personal development through fee-exemption laws in higher education and reforms to the system. For example, in terms of satisfaction with health, the literature has widely documented that migrants arrive in the host country in better health than locals, a situation likely to be maintained over time (Constant et al., 2018; Luthra et al., 2020). 
In the specific case of migrant students, they seem to have been successful in restructuring their network of interpersonal relationships in the host country, with no impact on subjective well-being, despite the literature suggesting that the social capital of migrants in general is lower, that it correlates highly with subjective well-being, and that it can be a predictor of satisfaction with life (Helliwell, 2003). Studies such as those by Martínez García et al. (2001) establish a close relationship between subjective well-being and social support, the latter being of great importance in the situations of high stress that usually accompany migratory experiences (Berry, 1997). Local governments (municipalities) have made remarkable efforts to integrate migrant students, which seem to have resulted in similar subjective perceptions of well-being among native and migrant students. The absence of statistically significant differences between migrants and natives could suggest that the migration process has not been a traumatic experience for these adolescents and that they view their future with optimism. Regarding gender differences, this study found no statistically significant differences in academic self-concept, in line with the contributions of Costa and Tabernero (2012) and Gálvez-Nieto et al. (2017). This may be related to the fact that self-efficacy is related to academic achievement and social adjustment (Trautwein et al., 2006), and both boys and girls would have similar perceptions regarding the achievement of goals. The results obtained are encouraging, since the Government of Chile has highlighted the need to narrow the gender gap in the educational context (Government of Chile, 2013). These results are important because adolescence is a particularly complex period, and the literature has reported that it is a more difficult process for girls. 
Presumably, the results speak of a similar social adjustment in both genders and good socialization, which is especially important given that self-concept is a social product generated through interaction with, and the valuation of, others (García and Musitu, 1999). In terms of the roles rooted in Latin American societies, the male provider and the female housewife, it could be expected that girls would show a lower academic self-concept than boys based on these expectations. The results do not mirror these assumptions; on the contrary, they could indicate that Chilean society is advancing in terms of equality and equitable treatment across genders and is also providing a supportive social context. This situation could also be determined by a good attitude of teachers toward their students regardless of gender, and by the perspectives and aspirations of both girls and boys not being affected by discrimination and machismo. The results on self-efficacy, in turn, mirrored those of Junge and Dretzke (1995), Blanco Vega et al. (2012), and Huang (2013). The differences evidenced could mean that the level of achievements and goals will differ for men, with repercussions, for example, in the choice and realization of the professional careers and ventures they decide on. Likewise, these results could indicate that higher levels of resilience in boys could lead to greater well-being in them compared to girls in the future. The fact that boys feel they have more resources to solve unexpected situations and feel more confident in their abilities could have repercussions in the world of work and, for example, perpetuate salary gaps, by making them feel they "deserve" better positions and salary conditions because they are more self-efficacious. Society must advance in this regard and promote self-efficacy as an engine for achieving goals and equality. 
In terms of subjective well-being, this study found no statistically significant differences regarding gender. These findings are not in line with those of Oyanedel et al. (2015) and Alfaro et al. (2016) in Chile, or with studies in Spain such as that of González-Carrasco et al. (2017). Factors such as satisfaction with material conditions, health, and relationships could be operating at similar levels in both groups. In line with Casas (1998, 2006), this finding highlights the importance of conducting studies on subjective well-being in adolescent populations in developing countries, contrasting genders in order to advance a conception of well-being that goes beyond meeting basic needs and focuses on developing adolescents' potential, since happy adolescents become happy adults. In terms of social support, which is highly related to subjective well-being, girls and boys seem to have been successful in structuring their networks of interpersonal relationships, exhibiting similar social capital, which correlates highly with subjective well-being and can be a predictor of satisfaction with life (Helliwell, 2003). In this line, the size of the social network and the coverage of basic needs (Basabe et al., 2009) would be positively related to subjective well-being. The results suggest that, at least in this sample, social and structural factors such as access to opportunities, expectations, and roles (Stevenson and Wolfers, 2009), which have traditionally favored men in Latin American contexts, may have a window to experience changes reflected in similar subjective well-being indexes for both groups. These changes can also lead to more equal and democratic spaces of study and better opportunities in workplaces. As explained, various studies show a positive correlation between subjective well-being and other variables such as self-concept and self-efficacy (Huebner, 1991; Harter, 1998; McCullough et al., 2000; Luján, 2002; Gutiérrez and Gonçalves, 2013). 
The literature has also suggested that self-efficacy may mediate the relations between subjective well-being and other constructs such as meaning in life, life satisfaction, and self-concept (Krok and Gerymski, 2019). In the case of native-born students, the results, controlled by gender, assume a direct relationship between academic self-concept and subjective well-being; however, general self-efficacy also plays a mediational role. This partial mediation can be explained by the fact that academic self-concept strongly predicts subjective well-being in Chilean adolescents, while self-efficacy exercises a mediational function, since the literature has reported in many studies that higher self-efficacy comes with greater self-concept. In the case of migrant students, self-efficacy would exercise a total mediational role according to the results obtained. It could be assumed that migrant students came to Chile with a more solid academic self-concept, which would therefore not directly predict their subjective well-being. The results obtained could constitute a contribution to the Theory of Achievement Goals outlined a few decades ago (Dweck, 1985; Ames, 1987), which could help us to propose the mediational role of self-efficacy in the relationship between self-concept and subjective well-being. Personal goals, according to Nicholls (1984), would be understood as determining agents of behavior: mental representations of objectives set in an achievement-oriented environment that determine behavior, affectivity, and cognition in different situations. Migrant students face contexts where they put their competences and skills to the test in a setting with new motivations, which Ames (1992) understands as a subjective evaluation of the goal structure emphasized in a given situation, in order to achieve social approval and status in a group. 
The mediational role of self-efficacy would indicate that a more self-efficacious individual would show higher levels of subjective well-being, a situation supported by previous evidence (Lachman and Weaver, 1998; Lang and Heckhausen, 2001) and in which self-efficacy would turn out to be the greatest predictor of subjective well-being (Halisch and Geppert, 2000; Gómez et al., 2007) in native and migrant students. In this line, individuals' self-concept would have the ability to generate relevant changes in their attitudes (Marsh, 2006), so that in terms of achievements it could have effects on subjective well-being, directly in the case of native students and via self-efficacy in the case of migrant students. 
The results obtained in this study should be taken with appropriate caution, since more extensive studies, other information-gathering techniques, and different scales will be required in the future to study self-concept, self-efficacy, and subjective well-being in greater depth, and especially the possible causal relationships between these constructs. New lines of research could emerge from this study, for example on the mediational role of self-efficacy between subjective well-being and other constructs such as meaning of life, self-esteem, and social support. Among the limitations of this study, non-random samples and a cross-sectional design were used. The lack of similar studies in the Chilean context to use as a point of reference was also a limitation in some cases. 
The new inhabitants of Chile have come in search of quality of life, and the importance of studying subjective well-being in adolescents and the constructs that influence it lies in the possibility of generating inputs for the development of public policies arising from the systematic study of the Chilean and migrant populations, so as to provide key information to the relevant actors making decisions that affect minors in Chile. --- DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. --- ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Comité de Ética de la Facultad de Administración y Economía de la Universidad de Santiago de Chile. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. --- AUTHOR CONTRIBUTIONS CC contributed to the conception of the study, was involved in planning, supervised the work, processed the experimental data, performed the analysis, interpreted the data, drafted the manuscript, and designed the figures. AR was involved in planning and supervising the work, processed the experimental data, performed the analysis, drafted the manuscript, and designed the figures. FV contributed to the conception, analysis, and interpretation of data, aided in the sample design and in interpreting the results, worked on the manuscript, and revised it critically. SC contributed to the conception, analysis, and interpretation of data, aided in the sample design and in interpreting the results, worked on the manuscript, and revised it critically. EL-O performed the measurements and sample design, aided in interpreting the results, and worked on the manuscript. JR processed the experimental data, performed the analysis, drafted the manuscript, and designed the figures. All the authors discussed the results and commented on the manuscript. 
--- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. | In the last decade, the migrant population in Chile has substantially increased, where the rates have not only increased in the adult population, but also among children and adolescents, creating a potential for social and cultural development in the educational system. The present work analyzes the relationship between self-concept, self-efficacy, and subjective well-being in native and migrant adolescents in Santiago de Chile. The sample consisted of 406 students, 56.65% women, with an age range that fluctuated between 12 and 16 years, with an average of 13.36 years (SD = 0.96). Student's t-tests were used to compare the average of the constructs evaluated between natives/migrants and boys/girls participants. Subsequently, two multivariate models of simple mediation were constructed, one for natives and another for migrants, which assumed subjective well-being as a dependent variable, academic self-concept as an independent variable and the general self-efficacy as a mediating variable. In both models, gender was considered as a control variable. Results show that migrant students present higher levels of academic self-concept and general self-efficacy than native students. There are no differences with regard to well-being. In the case of gender, differences are observed only for the case of general self-efficacy, where boys present higher levels. On the other hand, a partial mediation is observed for the model of native students and a total mediation for the model of migrant students. The study yielded interesting results regarding the differences in the evaluation of the constructs of selfconcept, self-efficacy, and subjective well-being in both groups. Such data can be used as inputs for the development of public policies for adolescents. |
Background Excessive alcohol consumption among university students has been linked to a range of adverse outcomes, including educational difficulties, psychosocial problems, antisocial behaviours, injuries, risky sexual behaviours and drink driving [1]. In the United Kingdom, alcohol consumption levels amongst university-aged adults increased rapidly during the 1990s [2]. Recent studies suggest that just over half of UK university students 'binge drink' (i.e. consume 5 or more drinks in one sitting) at least once per week [3,4], whilst as many as 80% binge drink at least once a month [4]. One recent study estimated average alcohol consumption at 25 units per week for 1st year male UK undergraduates and 16 for 1st year women [5], significantly above current public health recommendations. Recent UK government policy of increasing the percentage of young people going to university has perhaps had the effect of exposing a larger proportion of the population to this high-risk drinking environment. Alcohol consumption amongst university students has to date proved highly resistant to intervention efforts [6]. One approach, which has shown some promise in experimental studies, is addressing the perceived social norms that are posited to influence alcohol consumption [7]. Perceived norms take the form of descriptive norms, with behaviour modelled through observation of the behaviour of significant others, or injunctive norms, where the individual perceives that their peers expect them to behave in a certain way. The social norms approach, which underpins such interventions, argues that normative perceptions are highly fallible, with students often overestimating real alcohol consumption patterns among peers [8]. Hence, by providing feedback and correcting misperceptions regarding the behaviours and social expectations of peers, alcohol drinking behaviours may be reduced. 
Social norms interventions have typically involved provision of mailed, web-based or face-to-face feedback on individuals' drinking behaviour and how this compares to norms for their peer group, or social marketing campaigns to promote awareness of actual norms. A recent Cochrane review concluded that feedback-based interventions delivered via the internet or face-to-face on a one-to-one basis appeared to reduce student drinking behaviours, though mailed or group feedback were less effective, and findings for social marketing campaigns were equivocal [7]. Whilst demonstrating promise, such interventions have typically been examined in isolation from the contexts in which they operate, and significant questions remain to be addressed regarding how they might be applied in practice. No such studies have taken place in Wales, and the limited number of UK-based studies suffer from substantial weaknesses such as high levels of attrition [9]. Furthermore, universities are complex systems, whose overall ethos, policies and practices may provide a context supportive of change, or of maintaining the status quo [10]. Interventions which aim to achieve long-term change simply by targeting cognitive factors such as normative perceptions, without addressing the characteristics of the setting which support the status quo, are likely to fail in the longer term [11]. Some community-based interventions to reduce alcohol consumption in adolescents have, for example, been shown to be more effective in rural settings than urban settings, where impacts of the intervention are perhaps drowned out by the multitude of pro-alcohol stimuli in the urban environment [12]. In Welsh universities, university-managed accommodation blocks (halls of residence) primarily house students in their first year of attendance, with approximately half of students living in halls during their first year.
Given that first year students are at greatest risk of excessive alcohol consumption [5], halls of residence offer potential as a means of reaching those students most in need of alcohol-related intervention. The proposed research therefore aims to assess the value of a social marketing-led, social norms-based intervention implemented in university halls of residence across four universities in Wales. A survey of first year students was conducted in participating universities in May 2011, in order to establish levels of drinking and the prevalence of alcohol-related consequences, as well as normative perceptions. Findings from the survey fed into the development of materials by an Intervention Steering Group to communicate areas of normative misperception (e.g. the extent to which students overestimated peer drinking volume), to be distributed within halls of residence. All halls of residence participating in the study will experience a university-wide alcohol harm reduction toolkit, with half randomised to additionally receive the social norms intervention. An exploratory cluster randomised design with nested process evaluation will be used to identify appropriate outcome measures and data collection methods, test randomisation processes, assess the extent of contamination across trial arms, and establish recruitment and retention rates and intra-cluster correlations to help inform sample size for any future definitive trial. It will also identify whether the intervention effectively mobilises the underpinning theory and whether this is sufficient to bring about hypothesised responses in terms of awareness, engagement and changed perception of norms. Intervention acceptability and implementation processes will be assessed within the process evaluation.
--- Methods/design --- The intervention The intervention is a social norm marketing campaign, which aims to correct misperceptions regarding the behaviours and social expectations of peers and in so doing influence alcohol consumption. The campaign will be delivered in two phases between October 2011 and May 2012 in intervention halls of residence in four universities and will use a variety of materials encompassing posters, beer mats/coasters, leaflets, meal planners and drinking glasses. The campaign will be implemented by university accommodation staff. Social norm messages were based on the results of a survey of first year university students conducted in study universities in late April/May 2011, which identified discrepancies between perceived norms and actual behaviours. Table 1 highlights the intervention materials and the main social norm messages within them. Universities in the study also receive a toolkit designed to promote institutional responsibility for prevention, to support an audit of current alcohol misuse policies and practices, and to provide advice and guidance on prevention. The toolkit was developed by a National Union of Students (NUS) intervention project officer in consultation with the universities in the study. It was distributed to key university stakeholders in October 2011 with the intention of developing a supportive environment for the intervention. Control halls will be exposed to the toolkit only. The toolkit and social norms intervention were developed collaboratively by Drinkaware, NUS Wales and the Welsh Government following a review of previous interventions and with support from an academic supervisor. Their implementation was facilitated by a dedicated NUS project officer. Given the nature of the intervention, it was not possible to blind participants to condition.
--- Recruitment Six universities that had collaborated on the development of the intervention were approached by the evaluation team to participate in the study, with four agreeing to implement the intervention in the study period. Reasons for non-participation were related to difficulties in implementing the intervention and obtaining university consent within the evaluation timeframe. Informed consent for the study was obtained from directors of student services and halls of residence managers. The universities varied in terms of the number of full-time first year students (from 1100 to 3327) and location, with a mixture of urban and rural settings. --- Inclusion/exclusion criteria The four universities had a total of 51 on-campus, university-owned halls of residence. One female-only hall was excluded due to lack of trial arm balance. All remaining halls (n = 50) were eligible for inclusion and consented to randomisation, although 5 halls in one site were empty during the first phase of the campaign due to renovation. --- Randomisation Blind remote randomisation was used to allocate halls of residence to receive the social norms plus toolkit intervention or the toolkit only. Halls were stratified by institution and allocated alternately within a list ordered by size, with the group allocation determined by one random number within each stratum. --- Measures --- Primary outcome Units consumed per week -daily drinking questionnaire The primary outcome is alcohol consumption in units per week assessed via the Daily Drinking Questionnaire (DDQ) [13]. The measure asks students for details of a typical week rather than exact quantities for the last 7 days, in order to ensure that it reflects habitual drinking. The DDQ has emerged as a favoured measure within RCTs with students due to its brevity, its convergent validity with more laborious drinking measures [13], acceptable internal consistency (Neighbors et al.
2002), good 2-month test-retest reliability for volume and adequate test-retest reliability for frequency [14], and established ability to detect post-intervention changes. Importantly, the measure also provides comparable estimates regardless of whether administered via the internet or as a pen and paper exercise [15]. --- Secondary outcomes Weekly alcohol consumption behaviours -daily drinking questionnaire Responses can also provide a measure of i) number of days per week drinking in a typical week, ii) number of units per sitting and iii) number of heavy drinking episodes per week. Prevalence of higher risk drinking -AUDIT The consumption subscale of the Alcohol Use Disorders Identification Tool (AUDIT-C) [16] provides an additional measure of alcohol consumption, allowing estimation of the prevalence of potentially hazardous drinking in control/intervention halls. The scale includes items on frequency of drinking, volume per drinking occasion and frequency of 'binge drinking' (e.g. 8+/6+ units on one occasion for men/women), each scored on a scale of 0-4. In primary care studies, a total summed score of 4 or above for men, or 3 or above for women, has been shown to optimally identify potentially hazardous drinkers [17]. Alcohol related consequences -Rutgers alcohol problem index Secondary outcomes include the 18-item version of the Rutgers Alcohol Problem Index (RAPI) [18]. The index is a well validated measure of alcohol problems with well established psychometric properties among clinical and general population samples ranging from 12 to 21 years. It is commonly used among general university populations in evaluations of alcohol based interventions. All items are typically summed to provide a single continuous variable for alcohol problems, although the factor structure in the current population will be carefully checked.
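The sex-specific cut-points for the AUDIT consumption subscale translate directly into a small scoring routine. The sketch below is illustrative only (the function name is ours, and it is not a validated implementation of the instrument):

```python
def audit_c_hazardous(item_scores, sex):
    """Flag potentially hazardous drinking from the three AUDIT consumption
    items (each scored 0-4): total score >= 4 for men, >= 3 for women."""
    if len(item_scores) != 3 or not all(0 <= s <= 4 for s in item_scores):
        raise ValueError("expected three item scores, each in the range 0-4")
    total = sum(item_scores)
    cutoff = 4 if sex == "male" else 3
    return total >= cutoff
```

For example, item scores of (2, 1, 0) sum to 3, which falls below the male cut-point but meets the female one, so the same responses classify a woman, but not a man, as a potentially hazardous drinker.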
--- Descriptive norms In order to assess whether the campaign achieves the hypothesised mechanism of changing perceived descriptive norms for drinking, the evaluation requires a measure of perceived descriptive norms. The Drinking Norms Rating Form has been widely used in RCTs and cross-sectional studies (Baer et al. 1991) and involves rewording the DDQ to reference others rather than self, with the advantage that perceived normative behaviour is measured in exactly the same way as own behaviour. --- Injunctive norms Whilst most previous studies have focused on descriptive norms, many psychological models argue that injunctive norms (i.e. perceived social pressure or social approval) are equally important in shaping behaviour. A scale previously used by Neighbors et al. (2008) was therefore included. --- Demographics Measures of gender, age, ethnicity, international/home student status, course studied and place of residence will facilitate an examination of the representativeness of the sample, allow assessment of comparability between groups of students assigned to receive/not receive the social norms intervention and help assess potential contamination between trial arms. --- Acceptability of objective measures Students will also be asked to indicate whether they would be willing to provide hair samples as an objective measure of alcohol consumption, although it will be made clear that this is a hypothetical question, and that we will not be attempting this at any point in the present study. The question is simply included to evaluate the acceptability of this method among university students if we were to seek funding for a larger definitive trial using more objective measurement approaches in the future. --- Data collection At four months after initial implementation of the intervention, measures will be collected via a survey of all 1st year university students, offered in web and paper formats.
They will be recruited via nominated university distribution contacts, who will circulate the link to first year students via email and electronic notice boards between mid-February and the end of March 2012. At least one reminder will be emailed to students during the data collection period. On completion of the questionnaire, data will be captured and processed by a market research company, who will prepare a complete anonymised dataset for analysis. Heads of student services provided consent for the conduct of the survey and students will not be able to complete the survey without completing informed consent tick-boxes. Students will not be asked to provide any identifiable information, other than email addresses, which will be used purely for the purpose of selecting a winner for the £100 prize draw in each university, offered as an incentive for participation. Email addresses will be separated from responses to the web survey and destroyed after the prize draw. In an attempt to boost student responses, residence hall managers will be asked to promote the survey and the prize draw to residents. To compare the efficacy of two data collection approaches, a paper copy of the questionnaire will be distributed to student halls of residence via accommodation managers, inviting students either to complete the paper copy and return it to the research team in a freepost envelope, or to go to the web-page to complete the survey online. Questionnaires completed in paper form will be returned to the research team in freepost envelopes. These will be stored in a locked cabinet until the web survey data-file is received from the survey company, at which point, questionnaires will be retrieved, data entered into the data-file, and questionnaires returned to the cabinet. 
Participants will be offered the opportunity to enter a prize draw, with £100 offered to one winner in each participating university, by supplying a university email address to be kept separately from questionnaire responses. For paper questionnaires, email addresses will be recorded on a detachable sheet at the start of the questionnaire, which will be separated from the questionnaire once received, with the email address entered into a separate spreadsheet and the paper copy destroyed. --- Sample size Assuming a student response rate of 40%, 1600 completed questionnaires will be available for analysis, an average of 32 students per hall. Assuming an intra-cluster correlation of 0.03, fifty halls of residence will therefore provide 80% power to detect a 0.2 standard deviation difference in units of alcohol consumed using a two-tailed alpha of 0.05. Assuming a student response rate of 25%, 1000 completed questionnaires will be available for analysis, an average of 20 students per hall. Under the same intra-cluster correlation of 0.03, fifty halls of residence will provide 80% power to detect a 0.23 standard deviation difference in units of alcohol consumed using a two-tailed alpha of 0.05. It is not anticipated that the effect size will be of this magnitude, and a much larger trial would likely be necessary to detect realistic effect sizes below 0.1 standard deviation. This study is therefore designed as an exploratory trial to assess the value of the intervention and plan a larger scale study if warranted. --- Process evaluation Universities are complex systems, whose ethos, policies and practices may provide a context supportive of change, or of maintaining the status quo [10]. Within evaluations of complex interventions, process evaluation is crucial in order to understand what was implemented, how it was received and, ultimately, how outcomes were produced. A process evaluation will run alongside the implementation of the programme, throughout the 2011/12 academic year.
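The power statements in the sample size section above follow from the standard design-effect adjustment for clustered samples. The sketch below reproduces that arithmetic with a crude normal approximation; it is an illustration of the reasoning, not the study's actual power analysis code:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def cluster_power(n_total, n_clusters, icc, effect_size, z_alpha=1.96):
    """Approximate power of a two-arm cluster trial: deflate the total sample
    by the design effect 1 + (m - 1) * ICC, split it across two equal arms,
    and apply a normal approximation to the two-sample comparison."""
    m = n_total / n_clusters                 # average cluster size
    design_effect = 1 + (m - 1) * icc
    per_arm = (n_total / design_effect) / 2  # effective n per arm
    return norm_cdf(effect_size * sqrt(per_arm / 2) - z_alpha)
```

With 1600 responses across 50 halls and an ICC of 0.03, the design effect is 1 + 31 x 0.03 = 1.93, leaving roughly 830 effective observations; a 0.2 standard deviation difference is then detectable with approximately 80% power, matching the first scenario above, and the 25% response scenario behaves analogously.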
The process evaluation is concerned with 5 core research questions: i. What role does alcohol play in students' social life during the transition to university and throughout university life? ii. How are the toolkit and social norms activities developed and what are their underlying logic models? iii. How are the toolkit and social norms activities implemented? iv. How, for whom and in what circumstances does the toolkit bring about change in university practices? v. How, for whom and in what circumstances does the social norms intervention influence alcohol related beliefs and behaviour? The process evaluation will encompass 1) group interviews with up to twenty 2nd and 3rd year students in each university focusing upon experiences of alcohol throughout student life, 2) visits by a researcher to each intervention residence hall in order to monitor the distribution and placement of materials, 3) group interviews with up to 6 students in 2 case study halls in each university (one receiving and one not receiving the social norms intervention) exploring awareness of and responses to the intervention, and 4) interviews with stakeholders in each university involved in delivering the intervention. In addition, all residence hall wardens will be asked to complete a brief questionnaire to assess changes in practice over time. Permission will also be requested from university representatives to use routine public data gathered during audits forming part of the toolkit. Finally, within the survey described above, to assess intervention reach, students will be asked to indicate whether they had seen the intervention materials in their own hall of residence, or in another student's hall of residence. To assess recall, students who recall seeing any of the norms materials will be asked to identify core messages from a list.
Students will also be asked whether messages within the materials were credible and relevant, and whether they felt that exposure to the materials had influenced their normative perceptions or behaviour. These questions will be identical for students in control and intervention halls, allowing assessment of contamination between trial arms. The survey also includes a number of bespoke items from the intervention survey which informed the social norm intervention, but only where these are linked to specific intervention communications (e.g. some materials focused on round buying behaviour and alternating alcoholic and soft drinks, hence items assessing the prevalence of these behaviours are retained). Hall of residence managers will be asked for their consent for researchers to visit halls to monitor the placement of campaign materials. Prior to group interviews, an information sheet will be provided, with participants offered the opportunity to ask questions before informed consent is obtained. Since part of the process evaluation requires asking different questions of intervention and control premises representatives, the research team members who conduct the process evaluation will be unblinded. --- Analysis In order to assess exposure to intervention materials and contamination between trial arms, the percentages of students within the intervention and control groups reporting having seen each of the intervention materials i) in their own hall of residence and ii) in another student's hall of residence will be examined. Among those students reporting exposure to intervention materials, percentages correctly identifying the messages within them will be calculated for each trial arm. Percentages of students reporting each level of agreement with statements regarding the credibility, relevance and perceived impacts of intervention materials will also be examined for each trial arm.
Whilst the study is unlikely to be sufficiently powered to detect impacts on behaviour, relatively large changes in perceived norms are likely to be necessary to produce even small changes in behaviour, so effects on normative perceptions should be detectable. Hence, regression analyses, with random terms to adjust for clustering at the hall level and fixed terms to adjust for stratification variables, will examine differences between intervention and control participants in terms of normative perceptions of alcohol consumption and alcohol related consequences. Comparisons between trial arms will be conducted on an intention-to-treat basis. Secondary analyses will compare halls on the basis of researcher observations of whether or not materials were placed. To inform the design of a potential large scale definitive trial with sufficient power to detect changes in behaviour, intra-cluster correlations and standard deviations will be calculated for total number of units per week. Response rates will be calculated in each trial arm. The percentage of students reporting willingness to provide hair samples will also be presented, whilst among those students reporting that they would only do so if paid, the percentages indicating each required level of payment will be presented. --- Discussion The need to address high levels of alcohol misuse amongst UK student populations has led to a range of possible preventive approaches, including social marketing campaigns that address misperceptions of social norms. However, the lack of a strong evidence base for UK interventions highlights the need for an exploratory trial phase before large scale intervention implementation and the conduct of any definitive trial. Definitive trials require appropriate outcome measures, cost effective data collection, reliable randomisation processes, an understanding of potential contamination across trial arms and a measure of recruitment and retention rates and intra-cluster correlations to help inform sample size calculations.
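The intra-cluster correlations to be calculated for units per week are conventionally estimated with the one-way ANOVA estimator. The sketch below, run on made-up hall-level data, illustrates the computation rather than the study's analysis code:

```python
def anova_icc(groups):
    """One-way ANOVA estimator of the intra-cluster correlation:
    ICC = (MSB - MSW) / (MSB + (m0 - 1) * MSW), where m0 is the average
    cluster size adjusted for unequal cluster sizes."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (mu - grand_mean) ** 2 for g, mu in zip(groups, means))
    ssw = sum(sum((x - mu) ** 2 for x in g) for g, mu in zip(groups, means))
    msb = ssb / (k - 1)   # between-cluster mean square
    msw = ssw / (n - k)   # within-cluster mean square
    m0 = (n - sum(len(g) ** 2 for g in groups) / n) / (k - 1)
    return (msb - msw) / (msb + (m0 - 1) * msw)
```

When cluster means differ sharply relative to within-cluster spread the estimate approaches 1; when all the variation lies within clusters it falls to zero or slightly below, which is why protocol assumptions such as the 0.03 used here are typically taken from comparable prior trials.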
The current study provides the opportunity to generate such information within the context of an exploratory trial of a university halls-based social norm marketing intervention. It also provides the opportunity to test the application of the theoretical assumptions underlying the social norms approach by measuring the hypothesised pathways that are posited as leading to behaviour change, namely campaign awareness, reception and changes in normative perceptions. The challenges in facilitating such processes with a relatively low intensity intervention informed intervention development and the provision of the supportive environment toolkit, and also led to a relatively large sample size for an exploratory trial, in order to assess such changes in intrapersonal processes. Finally, the study provides an important opportunity to assess intervention acceptability and implementation processes to inform optimum intervention content and delivery in any future trial. --- Authors' contributions SM, GM and LM were actively involved in the development and design of the study and all authors in the drafting of the manuscript. SM is the principal investigator. GM is a co-applicant and responsible for the day-to-day management of the study. LM is a co-applicant and responsible for statistical oversight of the project. AW is responsible for the conduct of the process evaluation. All authors read and approved the final manuscript. --- Competing interests The authors declare that they have no competing interests. | Background: Excessive alcohol consumption amongst university students has received increasing attention. A social norms approach to reducing drinking behaviours has met with some success in the USA. Such an approach is based on the assumption that students' perceptions of the norms of their peers are highly influential, but that these perceptions are often incorrect.
Social norms interventions therefore aim to correct these inaccurate perceptions and, in turn, to change behaviours. However, UK studies are scarce and it is increasingly recognised that social norms interventions need to be supported by socio-ecological approaches that address the wider determinants of behaviour. Objectives: To describe the research design for an exploratory trial examining the acceptability, hypothesised process of change and implementation of a social norm marketing campaign designed to correct misperceptions of normative alcohol use and reduce levels of misuse, implemented alongside a university-wide alcohol harm reduction toolkit. It also assesses the feasibility of a potential large scale effectiveness trial by providing key trial design parameters including randomisation, recruitment and retention, contamination, data collection methods, outcome measures and intra-cluster correlations. Methods/design: The study adopts an exploratory cluster randomised controlled trial design with halls of residence as the unit of allocation, and a nested mixed methods process evaluation. Four Welsh (UK) universities participated in the study, with residence hall managers consenting to implementation of the trial in 50 university-owned, campus-based halls of residence. Consenting halls were randomised to either a phased multi-channel social norm marketing campaign addressing normative discrepancies (n = 25 intervention) or normal practice (n = 25 control). The primary outcome is alcohol consumption (units per week) measured using the Daily Drinking Questionnaire. Secondary outcomes assess frequency of alcohol consumption, higher risk drinking, alcohol related problems and change in perceptions of alcohol-related descriptive and injunctive norms. Data will be collected for all 50 halls at 4 months follow-up through a cross-sectional online and postal survey of approximately 4000 first year students.
The process evaluation will explore the acceptability and implementation of the social norms intervention and toolkit and hypothesised process of change including awareness, receptivity and normative changes. Discussion: Exploratory trials such as this are essential to inform future definitive trials by providing crucial methodological parameters and guidance on designing and implementing optimum interventions. |
Background Prior literature has widely documented that there is a significant association between the propensity of physicians to use Evidence-Based Medicine (EBM) in their practice and the structural characteristics of their professional networks [1,2]. In particular, this stream of research has shown that the network characteristics of professional relationships among clinicians are important predictors in explaining their different orientations towards EBM [2][3][4]. Although EBM has been widely considered as an individual attitude, its actual impact within organizations strongly relies on its pervasiveness and widespread diffusion at the organizational level [5]. If physicians practice EBM only individually, the risk is that, owing to barriers to effective implementation, innovative clinical solutions are not translated "from the bench to the bed" of the patient [6,7]. These difficulties are often due to social constraints and barriers that elite members may establish against non-elite members within organizations [8], as well as the resistance of other clinicians who have a different behavioral orientation [9,10]. Despite its general importance, this topic has seldom been analyzed on empirical grounds in healthcare organizations. The aim of the present paper is to fill this gap by exploring and testing whether physicians' self-reported frequency of EBM adoption is related to the network position they hold in the overall web of collaborative relationships established within the healthcare organizations where they routinely visit and treat their patients. Data regarding a community of hospital physicians staffed in one of the biggest Italian healthcare organizations were collected and used in the present study. Social network analysis was first performed to identify structurally important physicians in the network.
Specifically, we derived a core-periphery structure of the overall inter-physician network, distinguishing the dense cohesive core of the professional network from the sparse, unconnected periphery. Then, a new class of network centrality indicators, collectively called Hubs and Authorities centrality, was employed to capture the structural prominence that physicians exhibited in the network. Finally, we explored whether their self-reported frequency of EBM adoption predicted the degree of coreness and structural importance that individual doctors assume within the observed network. Social networks research has provided ample evidence that individuals' attitudes and other personal characteristics influence the shape of their social networks as well as the position they assume in the overall web of relationships [11][12][13]. On the basis of previous work developed in this field [14], we assume that the propensity towards EBM is a relatively stable individual characteristic of the physician, which in turn influences his/her network position within organizations. We hypothesize that there is an association between physicians' propensity to use EBM and their degree of coreness within organizations, taking other relevant individual and organizational characteristics into consideration. --- Methods --- Research setting and data collection The present observational study was conducted using a questionnaire survey of 329 physicians employed in six hospitals belonging to one of the largest Italian local health authorities (LHAs). In Italy, LHAs aim to promote and protect the health of all resident citizens of a specific territory. The Italian National Health Service (INHS) is currently comprised of 145 LHAs. Based on considerations of efficiency and cost-effectiveness, each LHA may provide direct care through its own facilities or may commission the services to providers accredited by the system, such as independent public and private bodies.
The surveyed LHA serves approximately 800,000 individuals residing in 50 municipalities. The LHA employs around 8,400 people, including technical staff, nurses and physicians, and more than 80,000 hospitalizations occur annually. Hospital activities are carried out according to a matrix organizational model. Although hospital activities are carried out in six hospital facilities, these hospital services are provided by three clinical directorates, which may be considered the health sector equivalent of strategic business units [15]. Clinical directorates are managerially inspired and defined groupings of clinical specialties and support services created specifically for the purposes of resource management, control and accountability. They are intermediate organizational establishments through which defined parts of larger hospitals' health services are managed. The directorates were introduced in Italian healthcare organizations in the 1990s (laws 502/1992 and 229/1999), with the aim of reorienting activities toward healthcare processes [15]. Data were collected using a questionnaire, which was administered from February to November 2007. Participation was voluntary, and respondents were assured that their responses would be confidential and used for research purposes only. Because our study contains no experimental research, and given that no information concerning patients was collected, in accordance with Italian law ethics approval was not necessary. However, all physicians provided informed consent for the survey. The questionnaire consisted of three sections, which contained a total of 17 questions. The first section collected attributive data on clinicians, such as: age, gender, hospital tenure, prior experience in the NHS and managerial role. The second section was designed to collect data on advice network relationships among clinicians.
According to Burt's approach [16], we used an egocentric social-network survey instrument to derive a list of people with whom the respondent had ties. Each physician was asked to name colleagues within and outside their hospital organization with whom they interacted through relationships based on the exchange of advice, and responses were combined in a summary network. Each respondent was asked to characterize tie strength with each nominated peer using a five-point scale. The third section of the questionnaire collected information about clinicians' attitudes towards EBM. It included questions about respondents' perceptions of the availability of information and the possibility of accessing scientific evidence through corporate information-technology support. Responses to the questionnaire were requested within 3 months. Two quarterly recalls were sent to the physicians via email and a final recall asked for a response within 1 month. Ten months after the questionnaire was activated and made available online, almost 90% of the total population (n = 297 physicians) had completed the questionnaire. --- Variables and measures --- Dependent variable Social network analysis was used to derive the position of individual physicians within the surveyed professional network. Using survey relational data, an adjacency (or square) matrix containing information on the interpersonal collaborative ties among clinicians was created [17]. Each row/column listed the physicians surveyed, and intersecting cells represented the frequency (intensity) of interaction between pairs of individuals. After data preparation, we used the continuous Core-Periphery algorithm developed by Borgatti and Everett [18] to compute the degree of coreness of each surveyed physician.
As Borgatti and Everett clarify: "the core periphery model consists of two classes of nodes, namely a cohesive subgraph of the core, in which actors are connected to each other in some maximal sense and a class of actors that are more loosely connected to the cohesive subgraph but lack any maximal cohesion with the core." [18:378] Core-periphery algorithms jointly consider two structural properties of network nodes: the level of centrality that a given actor assumes within the network, and the general level of interconnectedness the actor exhibits with other network nodes. A Network Coreness score was computed and assigned to each sampled physician. A Hubs and Authorities analysis was conducted to complement the core-periphery analysis described above. Hubs and Authorities analysis is a natural generalization of eigenvector centrality analysis, which can effectively identify the structural importance of individual actors in social networks [19,20]. A set of algorithms computes two distinct and heavily interwoven measures, called "hub" and "authority", which reflect the prominence of each actor based on the structural characteristics of his/her network ties. An actor is highly hub-central when he/she points to many good authorities, i.e., has many out-going ties to them. High-authority actors are those who receive ties from many good hubs, i.e., have many incoming ties from them. Kleinberg clarifies [19] that "[t]he authority score of a vertex is proportional to the sum of the hub scores of the vertices on the in-coming ties and the hub score is proportional to the authority scores of the vertices on the out-going ties." Overall, clinicians with high Network Authority scores can be regarded as important actors, since they are both relevant and popular within the network.
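As an illustrative sketch (not the study's UCINET workflow), the hub and authority logic described above can be reproduced with the open-source networkx implementation of Kleinberg's HITS algorithm. The toy advice network and physician labels below are invented for illustration:

```python
# Sketch: hub and authority scores on a hypothetical advice network,
# following Kleinberg's HITS algorithm as described in the text.
import networkx as nx

# A directed tie (a, b) means "a seeks advice from b".
ties = [
    ("dr_a", "dr_c"), ("dr_b", "dr_c"), ("dr_d", "dr_c"),
    ("dr_c", "dr_e"), ("dr_a", "dr_e"), ("dr_e", "dr_a"),
]
G = nx.DiGraph(ties)

# HITS returns two dictionaries: hub scores (many out-going ties to good
# authorities) and authority scores (many in-coming ties from good hubs).
hubs, authorities = nx.hits(G, max_iter=1000, normalized=True)

# dr_c receives ties from three distinct colleagues, so it should emerge
# as the strongest authority in this toy network.
top_authority = max(authorities, key=authorities.get)
print(top_authority)  # → dr_c
```

Here the physician nominated by the most colleagues emerges as the strongest authority, mirroring the intuition above that high-authority clinicians are those who receive ties from many good hubs.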
The UCINET 6.392 software package was used in the present study to perform the analysis of the surveyed professional network [21]. --- Independent variable As in previous research, physicians' attitudes towards EBM (EBM adoption) were investigated by asking how often in the past year they had used scientific evidence published in peer-reviewed biomedical journals to aid their medical practice [2,14,22,23]. The survey questionnaire specifically asked individuals to answer "How often did you use scientific evidence published in peer-reviewed biomedical journals in your medical practice over the last year?". Responses were rated on a 4-point Likert scale structured as follows: "never," "rarely," "sometimes," and "often/very often". --- Control variables A number of other demographic and work-profile variables that might affect the position that physicians occupy within the organizational network were considered and included in the regression models. Some attributive characteristics of each physician were included, such as Age, Gender and years of prior experience within the INHS (Tenure INHS) and within the LHA (Tenure LHA). A dummy variable reflecting managerial responsibility (Managerial Role) was assigned a value of 1 if the physician played a managerial role within the hospital system and 0 otherwise. Given that the geographical distance from other colleagues likely affects the possibility of interaction between them, and thus the position that an individual occupies within the network, a variable named Geographical Proximity was computed as the reciprocal of the average geographical distance (expressed in kilometers) of each sampled physician from their organizational colleagues. Finally, a set of dummy variables reflecting physicians' affiliation to the various LHA hospitals and directorates was entered into the model. --- Results The overall sample is made up of 297 physicians.
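Before turning to the results, the Geographical Proximity control defined above (the reciprocal of a physician's average distance, in kilometers, from organizational colleagues) can be sketched as follows; the distances are invented, since the paper gives only the verbal formula:

```python
# Sketch: the Geographical Proximity control variable, computed as the
# reciprocal of the mean distance (in km) to all organizational colleagues.

def geographical_proximity(distances_km):
    """Reciprocal of the average distance to colleagues; higher = closer."""
    mean_distance = sum(distances_km) / len(distances_km)
    return 1.0 / mean_distance

# A hypothetical physician working 10, 20 and 30 km from three colleagues:
print(geographical_proximity([10.0, 20.0, 30.0]))  # → 0.05
```

Taking the reciprocal turns a distance into a proximity, so larger values of the variable correspond to physicians who are, on average, closer to their colleagues.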
Table 1 shows the main characteristics of sampled individuals. They are, on average, 47 years old (SD 8.01) and are mostly men. Their experience in the INHS is, on average, 16.01 years (SD 8.01), while their experience within the organization averages 10.95 years (SD 7.94). Only 51 physicians are clinical managers. As for EBM adoption, the majority (71.5%) reported adopting EBM frequently, followed by physicians declaring to adopt EBM very frequently (14.49%), occasionally (12.56%) and never (1.45%). Overall, almost 86% of sampled physicians reported adopting EBM frequently or very frequently. Figure 1 illustrates the network of collaborative relationships among sampled physicians. Each circle (node) represents a physician and each link (edge) represents an existing collaborative tie between a pair of nodes. Physicians' locations in Figure 1 were determined using a spring-embedding heuristic, a multidimensional scaling algorithm, with proximity indicating the extent to which two clinicians were connected directly and indirectly through mutual colleagues [21]. Table 2 shows the pairwise correlations among variables. Inspection of the coefficients reveals a strong and positive association between age, tenure in the INHS, and organizational tenure. Tenure in the INHS is, in turn, positively associated with the variable distinguishing whether the clinician has a managerial role or not. The network coreness and network authority variables were moderately associated with both the propensity to adopt EBM and the geographical proximity of physicians, albeit with different signs. Tables 3 and 4 show the OLS (Ordinary Least Squares) regression results. Stata 10 was used to perform the regression analysis. Table 3, in particular, presents three different models that we built to explore the clinician's coreness within the professional network.
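A pairwise correlation table like Table 2 can be produced in a few lines; the sketch below uses pandas on fabricated data (the study itself used Stata 10), with only three of the paper's variables and an assumed age–tenure relationship built in:

```python
# Sketch: a Table 2-style pairwise (Pearson) correlation matrix on
# synthetic data. Variable names mirror the paper; values are fabricated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 297                                       # sample size reported in the paper
age = rng.normal(47, 8, n)                    # mean age 47, SD 8 (as in Table 1)
tenure_inhs = age - rng.normal(31, 3, n)      # older physicians: longer tenure
ebm_adoption = rng.integers(1, 5, n)          # 4-point Likert scale

df = pd.DataFrame({"age": age, "tenure_inhs": tenure_inhs,
                   "ebm_adoption": ebm_adoption})
corr = df.corr()                              # Pearson pairwise correlations
print(corr.round(2))
```

Because tenure is constructed from age here, the age–tenure coefficient comes out strongly positive, reproducing the pattern the text reports for Table 2.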
Model M1 contains only the EBM adoption variable, and it should be considered a null model against which the explanatory power of the subsequent models can be compared. Model M2 includes only control variables. Model M3 is the full model incorporating all explanatory variables. Model M1 in Table 3 shows that there is a significant association between the EBM variable and the network position that physicians hold in their professional network. In particular, a negative association was found between the physicians' attitudes towards EBM and their degree of coreness (β = -0.004; p < 0.05). The regression results also documented that, among all of the structural and characteristic variables included, the coreness of individual physicians in their professional network was associated with Managerial Role, Geographical Proximity and the variables reflecting the clinician's affiliation to hospital structures and directorates. In particular, Managerial Role (β = 0.011; p < 0.05) and Geographical Proximity (β = 0.001; p < 0.01) were positively associated with the coreness of professionals within the organization. Physicians in Department #2 were more likely to exhibit a higher coreness than those in Department #1 (β = 0.020; p < 0.01), which is the baseline category of the model. Physicians in hospital facilities #2 (β = -0.056; p < 0.01), #3 (β = -0.045; p < 0.01), #4 (β = -0.029; p < 0.01), #5 (β = 0.026; p < 0.01) and #6 (β = -0.017; p < 0.10) exhibited a lower coreness score than those working in hospital #1. Compared to Model M2, the full Model M3 adds the measure characterizing the physicians' attitudes towards EBM in clinical practice.
This variable showed a significant negative association (β = -0.006; p < 0.05) with the dependent variable, documenting that there is a negative association between the propensity to use EBM and the coreness that physicians exhibit in the overall professional network. All significant control variables in Model M2 maintained their significance in Model M3. Finally, it is important to note that the inclusion of the EBM adoption variable increased the overall fit of Model M3 over Model M2. Table 4 presents all models exploring the association between the EBM adoption variable and the Network Authority variable. Our model building follows a stepwise approach, similar to that presented above. According to this logic, Model M1 contains only the EBM adoption variable, Model M2 includes only control variables, and Model M3 is the full model that incorporates all explanatory variables. Models M1 and M3 in Table 4 document that a negative and significant association exists between the EBM adoption variable and the structural importance that physicians hold in the collaboration network (β = -0.009; p < 0.05 in M1; β = -0.019, p < 0.05 in M3). Inspection of the parameters corresponding to control variables overall confirms our previous results, documenting a significant association between the network centrality of clinicians and a number of other contingencies, such as their spatial proximity to colleagues (β = -0.001; p < 0.01 in M2; β = -0.001, p < 0.01 in M3), the managerial position they may occupy in the organization (β = 0.020; p < 0.1 in M2; β = 0.023, p < 0.05 in M3), and their belonging to specific hospitals (hospital #5, β = 0.052; p < 0.01 in M2; β = 0.048, p < 0.01 in M3) and departmental arrangements (Department #3, β = -0.047; p < 0.01 in M2; β = -0.047, p < 0.01 in M3).
--- Discussion EBM represents one of the most important paradigms in modern medicine [24][25][26]. Clinicians, and healthcare professionals in general, are increasingly requested to adopt and integrate into their clinical practice the latest available medical knowledge [27]. In this study, we explored how the propensity towards EBM is associated with the position that professionals occupy in the overall network of collaborative ties they create within healthcare organizations. Our findings documented a significant negative association between the physicians' propensity to use EBM and the coreness they exhibit in their organization. Our analysis also indicated that the core is formed by physicians having a significantly lower propensity towards EBM than their peers located in the peripheral part of the network. Supplementary analyses were performed to capture more closely the structural importance of physicians in the professional network through the use of network centrality indicators. Our findings again provided evidence of a negative association between EBM adoption and the network prominence of individual clinicians. Although the homophily of physicians in terms of EBM adoption has been documented elsewhere [23], in this study we show that higher EBM adoption may be associated with the isolation of such groups of professionals. Within organizations, there is a potential risk that professionals exhibiting this kind of behavior are viewed as elitists whose conduct contrasts with the practices routinely adopted within hospitals. As prior studies have shown [28], innovators within healthcare organizations often struggle to change consolidated daily practices. In particular, those who are located in the center of the network are less exposed to novelty and innovative behavior, since their higher interconnectedness with homophilous peers likely increases the risk of being influenced by colleagues [3].
Our findings are also consistent with extant research demonstrating that physicians' acquisition of new knowledge is more likely to occur through personal relations than through explicit guidelines and clinical protocols [2]. Gabbay and le May [29] have shown that physicians often use mindlines instead of guidelines, because of their tendency to discuss clinical matters with colleagues instead of relying on documentation such as articles, meta-analyses and the Cochrane Library. In addition, the superior propensity towards EBM of clinicians forming the periphery might reduce their risk of becoming over-embedded. Our findings have a number of implications. First, hospital executives are encouraged to identify groups of professionals that exhibit potentially virtuous attitudes and behaviors within their organizations. Social network analysis tools and techniques appear useful in this vein. Executives are also encouraged to foster collaboration across groups characterized by different propensities to use EBM in daily practice. The adoption of new organizational arrangements, processes and informal occasions for meeting would all be useful means to achieve this objective. For example, increased collaboration might be achieved through the internal restructuring of hospital organizations; the adoption of specific types of clinical directorates or of interdisciplinary and interprofessional groups is an example in this direction [15,30]. New internal processes concern both organizational and professional streams of activity. Organizational processes, for example, concern the definition of objectives such as budget, quality standards and appropriateness, which may be targeted by administrators in order to encourage collaboration across heterogeneous groups.
Finally, executives have the possibility to support the inclusion of medical leaders within organizations, so that their role might be leveraged to persuade other professionals to collaborate more with EBM users [8]. The policy implications are also strong. Health systems around the world are urged to ensure coordination and integration amongst providers by fostering collaboration between healthcare professionals belonging to different organizations. Policymakers may want to encourage healthcare administrators to implement the above-mentioned actions. In addition, in this context, interorganizational cooperation may be better achieved by identifying EBM users in organizations and then leveraging their higher tendency to cooperate by virtue of their homophily [23]. Such initiatives might, for instance, be directed at fostering better continuity of care across organizations through the formation of interorganizational groups or the definition of innovative clinical pathways. Our findings should be interpreted in light of several limitations. First, the degree of EBM adoption was self-reported in the present study. Although this is not an objective approach to studying physicians' orientation to EBM, it is consistent with extant research on this topic [2,5]. The study design poses another limitation. Given that all data were gathered at the same time, we cannot ascertain whether the collaboration of physicians with colleagues is an antecedent or a consequence of EBM adoption. The cross-sectional design adopted in this research prevents us from determining causality; it documents an association, not a causal link, between EBM utilization and social collaborative relationships. We encourage future longitudinal studies to disentangle the effect of physicians' attitudes towards EBM on their propensity to establish collaborative ties in healthcare organizations.
--- Conclusions Our study documents that the overall network structure is made up of a dense cohesive core of physicians and a periphery made up of less connected clinicians. The social structure of this model comprises a group of tightly connected physicians who interact strongly in order to exchange relevant knowledge, and a large number of less cohesive clinicians who are more likely to be connected amongst themselves than to members of the core part of the network. This result might be interpreted as a marginalization of physicians who are more prone to use EBM in their clinical practice. This social structure may result in a fragmented organization, in which the different habits and characteristics of groups of physicians likely increase the risk of conflicts and barriers to integration within hospital boundaries [31,32]. Social network analysis tools and techniques should be increasingly adopted by policymakers and administrators in order to support the integration and coordination of clinical activities in complex social systems such as healthcare organizations. --- Competing interests The authors declare that they have no competing interests in the present study. --- Authors' contributions DM and AC conceived the study and undertook the data collection. DM and GD contributed to the study design and performed the statistical analysis. All authors contributed to the subsequent manuscript drafts and approved the final manuscript. --- Abstract Background: Extant research suggests that there is a strong social component to Evidence-Based Medicine (EBM) adoption, since professional networks amongst physicians are strongly associated with their attitudes towards EBM. Despite this evidence, it is still unknown whether individual attitudes to use scientific evidence in clinical decision-making influence the position that physicians hold in their professional network.
This paper explores how physicians' attitudes towards EBM are related to the network position they occupy within healthcare organizations. Methods: Data pertain to a sample of Italian physicians, whose professional network relationships, demographics and work-profile characteristics were collected. A social network analysis was performed to capture the structural importance of physicians in the collaboration network, by means of a core-periphery analysis and the computation of network centrality indicators. Then, regression analysis was used to test the association between the network position of individual clinicians and their attitudes towards EBM. Results: Findings documented that the overall network structure is made up of a dense cohesive core of physicians and of less connected clinicians who occupy the periphery. A negative association between the physicians' attitudes towards EBM and the coreness they exhibited in the professional network was also found. Network centrality indicators confirmed these results, documenting a negative association between physicians' propensity to use EBM and their structural importance in the professional network. Conclusions: Attitudes that physicians show towards EBM are related to the part (core or periphery) of the professional network to which they belong, as well as to their structural importance. By identifying virtuous attitudes and behaviors of professionals within their organizations, policymakers and executives may avoid marginalization and stimulate integration and continuity of care, both within and across the boundaries of healthcare providers.
--- Introduction Faced with the fact of religious pluralism, states' options have ranged from persecution or rejection, to acceptance and protection, to disinterest. Something similar happens with the definition of secularism: a negative concept for those who interpret it as a rejection of religion in the public domain, a positive one for those who understand it as a space for the development of more humane societies. In any case, even those who understand it positively have not reached an agreement as to the most appropriate ways of putting it into practice. In this work, starting from a positive conception of secularism in its political meaning, as a sphere of possibility for peaceful and fruitful coexistence between different options, we have focused on one of these matters that need to be put into practice: the management of death; more specifically, burials according to Islamic practices in present-day Spain. Spanish society has undergone a profound process of secularization in recent decades, particularly after the restoration of democracy in 1978, and has experienced a strong change in its religious beliefs (Pérez-Agote and Santiago García 2005; Ruiz Andrés 2017, 2022; Urrutia 2020). According to a WIN/Gallup International (2015) survey, Spain is one of the European countries, after Sweden and the Czech Republic, with the highest percentage of adults who declare themselves atheist or non-religious (Gallup International 2015; Zuckerman and Shook 2017, p. 8). On the other hand, a macro-survey carried out in 2021 by the Statista Global Consumer Survey maintains that 59% of those surveyed in Spain state that they adhere to a creed, a value identical to that registered in other European countries such as Austria, Switzerland, Germany and Denmark (Mena Roa 2022).
Recent surveys conducted by the Centro de Investigaciones Sociológicas (CIS) show that approximately 40% of Spaniards declare themselves agnostic, atheist or non-believers (Centro de Investigaciones Sociológicas 2023, p. 16). The Constitución Española (1978) guarantees the ideological, religious and worship freedom of individuals and communities, declares the non-confessional nature of the state ("No confession shall have state character"; art. 16.3) and establishes that the state should promote "relations of cooperation with the Catholic Church and other confessions" (art. 16.3). In order to develop the Constitution, the state has established cooperation agreements with the religious denominations considered notoriously rooted in Spanish society: Islam, Judaism and Evangelical Christianity. These agreements contain references to worship centres, schools and cemeteries, among others. Even so, from time to time, and mainly among Muslims, issues relating specifically to cemeteries appear in the news, whether reporting positively (Comisión Asesora ADOS 2022; Metroscopia 2011; Rioja Andueza 2022) or complaining (Ibáñez 2023; Observatorio 2022b; Vega Medina 2023) about the situation of Muslim burials. However, a review of the jurisprudence of the high courts of the Autonomous Communities does not reflect any problems in this regard. Death is an inevitable fact to be faced not only by individuals but by societies. No society, secular or otherwise, can escape from it and, in fact, funerary rites and practices have more to do with social contexts than with the deceased themselves. The consideration of, on the one hand, the importance of these funerary rites and practices for any society and, on the other, the apparent contradiction among information about Muslim burials in Spain, has led us to the questions that guide this article: How is the cooperation agreement between the state and the Muslim communities being fulfilled in practice?
In addition, what difficulties are encountered and what answers are given by both sides? The issues related to funerary practices are multifaceted, with emotional, social, political, economic and ecological components that in each case may be more or less relevant; however, as we cannot cover them all, we have focused here on the anthropological and normative aspects. Regarding the methodology followed in this article, it is important to clarify that we have used secondary sources. The analysed documents include sociological and historical research and reports about Muslims in Spain, funerary enterprises' reports, Muslim communities' reports and burial laws both from the state and from the Autonomous Communities. This article first presents a brief theoretical framework on the concept of secularism and, associated with it, the concept of spatial justice, together with an anthropological approach to the social and personal value of funerary practices (Section 2). A second part presents an overview of Muslims in Spain (Section 3). This is followed by a description of current Muslim funerary practices with a brief look at their history (Section 4). After reviewing the Spanish regulations on cemeteries and burials (Section 5), this article ends with some discussions and conclusions (Sections 6 and 7). --- Secularism and Funerary Practices --- Secularism As Berlinerblau points out, there is no clear agreement on what is meant by the term "secularism" (Berlinerblau 2022, pp. 5-9). Among the multiple definitions, in this article we stand with those who interpret it as a system for articulating religious plurality in a neutral state that guarantees freedom of worship to its citizens (Casanova 2009, p. 1051). Speaking in the same positive sense is Habermas (2015, p.
269), for whom "secular" entails a reasonable response to the need for peaceful coexistence in plural societies and has, in his opinion, allowed religious minorities to move from mere tolerance to the recognition of rights. Similarly, the aforementioned Berlinerblau points out the following as a basic definition: "political secularism refers to legally binding actions of the secular state that seek to regulate the relationship between itself and religious citizens, and between religious citizens themselves" (Berlinerblau 2022, p. 5). In his work Life World, Politics and Religion, Habermas devotes the last chapter to religion in the public sphere of post-secular society. In it, he stresses that immigration provokes in the receiving societies the "challenge of a pluralism of ways of life, which goes beyond the challenge of a pluralism of faith currents" (Habermas 2015, p. 268). In other words, the problems of coexistence between people of different religions are exacerbated by being associated with problems for the social integration of people with different cultures. In the context of this discussion, Habermas poses the following question: "How should we understand ourselves as members of a postsecular society, and what should we expect from each other in order to ensure in our states a civil treatment of citizens towards each other even under the conditions of cultural and ideological pluralism?" (Habermas 2015, p. 381). Habermas' proposal has two aspects: on the one hand, he proposes a process of reflection and permanent democratic learning in which citizens, from different positions, translate, justify and debate their proposals. On the other hand, he stresses that the success of these processes depends on the acceptance of common values and that no one position should claim to be the sole guide to the world of life. Applied to religions, this means that no religion can claim to determine all aspects of people's lives and of society (Habermas 2015, p.
81). Without going into definitions, Charles Taylor describes the secular social environment as consisting "... among other things, in the move from a society in which faith in God was unquestioned and, indeed, far from problematic, to a society in which such faith is regarded as one option among others, and often not the easiest to adopt" (Taylor [2007] 2014, p. 22). In other words, religious belief becomes just another belief, which leads to the question of whether or not, "Now that religion is no longer the only social force able to challenge the state should not religion then be allowed to follow the same rules as other members of civil society when they participate in the public sphere?" (Martin 2013, p. 150). The question obviously deserves a moment's pause, because the issue of plurality and how to respond to it is not something to be taken lightly. Indeed, some authors have pointed out problems that can arise from the uncritical assumption of the goodness of plurality. In his short article "Five Confusions About the Moral Relevance of Cultural Diversity", Ernesto Garzón Valdés, after justifying why he considers it a mistake to confuse tolerance with moral relativism and cultural diversity with moral enrichment, argues for the practice of an active tolerance that has no qualms about rejecting the intolerable. He insists on the importance of democracy in respecting what he calls the "preserve" of primary rights that escape any majority (or community, we would add) decision. He adds that the justification of the limits of tolerance must be debated and argued with "universalisable reasons; excluding, when designing social institutions, the appeal to non-transferable personal convictions such as those invoked by religious or ethnic fundamentalists" (Garzón Valdés 1997, p. 5). Plurality only has value if it is able to respect the rights of all people and guarantee them the possibility of satisfying their basic needs.
In the same critical vein, Giacomo Marramao notes that "Western democratic societies today are confronted with the claim to citizenship of culturally differentiated individuals or groups, who, while instrumentally demanding recognition of their rights, refuse to grant universal legitimacy to democratic formalism" (Marramao 1996, p. 91). In order to give an adequate response to this question, Marramao highlights three points: first, that the ethnocentric component of Western universalism, which has fuelled the politics of difference that challenge democratic values, must be acknowledged; secondly, that we must overcome the axiom of the incommensurability of cultures and include the moment of symbolic interaction between cultural contexts (instead of cultural differences); and, thirdly, that we must defend the value of democracy as a "common place of uprootedness" (Marramao 1996, p. 96). If the definitions of and positions on secularity are many, the ways in which these definitions are put into practice are no less so. Berlinerblau (2022, pp. 49-126) considers several basic models of secularity or, as he prefers to call it, "political secularism". The three main models that Berlinerblau mentions are the "separationist framework" (the case of the United States), "laïcité" (France) and the "accommodationist framework" (India). Against this "doctrinal" backdrop, Bhargava proposes shifting the focus from doctrines to the normative practices of states: "Once we do this, we will begin to see secularism differently, as a critical perspective not against religion but against religious homogenization and institutionalized religious domination." (Bhargava 2011, p. 92). Analysis of secular practices in different states reveals, on the one hand, that there are multiple models of secularism in democratic and non-democratic states, as well as in some countries with large Muslim populations, such as India, Senegal and Indonesia (Stepan 2011, p.
115), and on the other hand, it highlights the importance of setting limits on what can be accepted. For example, the state must be sensitive to the moral integrity of religions, liberal and illiberal, but it cannot tolerate any of the four forms of oppression: "interreligious, intrareligious, domination of religious by secular, and domination of secular by religious" (Bhargava 2011, p. 110); neither should religions (Martin 2013, p. 160). The state must also accept that human beings feel connected to transcendent entities, including God, and that this must be visible in individual beliefs as well as in social practices, but "A secular state has its own secular ends" (Bhargava 2011, p. 97). Bader further suggests that the concept of secularism should be dropped from our constitutional language, as it is not only a "complex, polysemic and contested concept" but also a "fuzzy", chameleonic and highly misleading one (Bader 2017, p. 341). Moreover, and more importantly, this author suggests that the principle of constitutional secularism hides from view the tensions among secularism, liberal constitutionalism and democracy. What really matters is not whether the state is secular, but "whether it is decent and/or liberal-democratic" (Bader 2017, p. 340). To conclude, in this article, and very close to Berlinerblau's general definition, even if we are not discussing here any of the definitions, we assume a positive conception of secularism understood as a set of rules that establish a playing field in which citizens of different beliefs, religious or not, can coexist and develop peacefully and in solidarity. Likewise, we assume that this playing field must be established taking into account that the human being is not only an emotional, rational or economic being, but also a being that inhabits and interacts in spaces (and the cemetery is a space designed by the living for the dead). In relation to this aspect, we present the following epigraph.
--- Spatial Justice In recent decades, a concept related to space and urban design has become increasingly important: "spatial justice". This concept can be understood through Henri Lefebvre's theory of space (Lefebvre [1974] 1991) and Edward W. Soja's "spatial justice" proposal (Soja 1996, 1997, 2000, 2010). The starting point of these authors is that space is not a mere external environment or container, a neutral scenario, but a social product, the fruit of certain historical and present relations of production that are materialised in a certain spatio-territorial form. From this perspective, human life is temporal, social and spatial, simultaneously and interactively, and is therefore always engaged in a socio-spatial dialectic. Soja reformulates Lefebvre's approach by incorporating the concept of "spatial justice" (Soja 1996, pp. 53-82). This concept posits that space is involved in generating and sustaining different processes of inequality, injustice, exploitation, racism, sexism and so on. The spaces that are shared reflect the type of society that is being created (Johnson 2008; Soja 2009, 2010; and Harvey [1973] 2009, although Harvey speaks of "territorial injustice", p. 107). Soja classifies the spaces of injustice into exogenous and endogenous geographies. The former are produced by impositions of hierarchical power (unjust exogenous geographies); this would be the case, for example, with apartheid. The latter (unjust endogenous geographies) derive from decisions related to the location of services, infrastructure and projects, and their consequences for spatial distribution, evidenced in, for example, the inequitable distribution of basic urban services such as public transport, clinics and schools (Soja 2000, pp. 197-202; 2010, pp. 31-66). Spatial thinking thus links the quest for spatial justice with the pressures and struggle over what Lefebvre called the "right to the city" (Lefebvre [1968] 1978).
However, these authors do not mention cemeteries. In any case, the right to the city was first defined in 2005 by the World Charter for the Right to the City as "the equitable use of cities within the principles of sustainability, democracy, equity and social justice". Its first article states that "All persons have the Right to the City free of discrimination based on gender, age, health status, income, nationality, ethnicity, migratory condition, or political, religious or sexual orientation" (World Charter for the Right to the City 2005, p. 2). The European Union is also increasingly concerned with making this concept a reality based on the criteria of spatiality, integration and inclusion, in order to contribute to better territorial cohesion (Madanipour et al. 2022). From this perspective, "spatial justice" can also be applied to the spaces designed by the living for the dead: cemeteries. Cemeteries should also be inclusive places that reflect the same inclusiveness sought and projected for the spaces of the living. In a city designed under the concept of "spatial justice", cemeteries should also be places where any citizen (including Muslims) can feel that his or her mortal remains will be duly received without any discrimination. In the following section, we develop some ideas that, in our opinion, justify a secular state paying attention to the funerary practices performed by any of its citizens. --- Funerary Practices The wide variety of mortuary customs and rituals collected in the extensive ethnographic literature (e.g., Azevedo 2008; Barley 1995; Bloch 1994; Bloch and Parry 1989; Douglass 1969; Madariaga 1998; Rojas 2016) shows the importance of death for all human beings.
Beyond the body's disposal, the meanings of these rites can be varied: transcendence/survival (Bauman 1992), regeneration of life and reaffirmation of the social order, as well as the relationship between generations and the legitimation of authority (Bloch and Parry 1989), or a prolonged dialogue about the notion of personhood (Barley 1995). Robert Kastenbaum defined the "death system" as "the interpersonal, socio-cultural and symbolic network through which an individual's relationship to mortality is mediated by his or her society" (Kastenbaum 2001, p. 66). In pre-industrial societies, death had a clearly social, communal dimension. The death of a member of the community disrupted social organisation and highlighted the risks to the survival of the community, requiring a response that reorganised society and averted the danger (Hertz 1990). Such behaviour was widespread well into the 19th century and even into the early decades of the 20th century. However, modern instrumentality has deconstructed mortality, stripping death of meaning and seeing it as a useless leftover of life and as "the Other of modern life" (Bauman 1992, p. 131). Philippe Ariès called "inverted death" or "forbidden death" the characteristic model of 20th-century Western societies: death, once so present, fades away and disappears. The progressive process of individualisation, together with the medicalisation of death, consolidated "social indifference" to the loss of one of the members of the group and the perception that death was more a personal than a social problem. Medical technology became the new instrument for domesticating mortality, replacing religion in this function (Ariès [1977] 1999, [1975] 2000). Today, a large part of the population dies in hospitals.
The hospital is not usually the preferred place to die; however, when the dying patient has not expressed their will about where to die, families generally send them to the hospital for the greater security that it offers (Lima-Rodríguez et al. 2018). Often, the corpse is quickly sent to a mortuary, where those who want to say goodbye look at it through a glass window before it is buried or cremated. The collective support provided by traditional ritual is lost, and the living are left without references for gestures to relieve their grief and symbolically facilitate the deceased taking their place among the dead (Barley 1995, p. 132; Segolene 1998, p. 62). The recent COVID-19 pandemic has shown how hard it has been for many families not to be able to say goodbye to their loved ones or to celebrate the usual funeral rituals (Burrell and Selman 2022; Prieto Carrero et al. 2021). Although funeral rites are being transformed into a public celebration of a private experience, adapted to the individual characteristics of the deceased or of those who remember them, they are far from disappearing (Segolene 1998, pp. 63-67). New technologies can also influence funeral rituals and the way people deal with death. Thus, for example, more and more terminally ill people are sharing their experiences and personal process in blogs (Kemp 2018, pp. 385-86). In largely secularised societies, where personal belief is autonomous from denominational orthodoxy (Rodríguez et al. 2021), funerary practices may be distanced from religious customs even among those who claim to belong to a religion. Thus, Spain, a country which until a few decades ago had a strong Catholic tradition, is the country in the European Union that cremates its dead the most (Palacio 2023), even though the Catholic Church only allowed cremation from 1963 and, from 1997, a funeral liturgy in the presence of cremated remains, which have to be buried.
In fact, in 2005 the average rate of cremation in Spain was 16%, reaching 41% in 2018, and it is estimated that it will reach 60% in 2026 (Díaz Pedraza 2022, p. 88). In short, the multiplicity of funeral rites existing in today's societies, whether associated with a religion or not, traditional or innovative, reflects the variety of communities that make them up, as well as the multiplicity of meanings that people give to their lives. All societies, including the most secularised ones, must manage the treatment of these plural practices and sensibilities surrounding death. --- Sociological Data on the Reality of Muslims in Spain Spain currently displays great religious diversity. As we do not have precise data, because each confession estimates the number of its faithful with different criteria, the following figures must be considered approximate (Dahiri 2022). The Spanish Episcopal Conference estimates that there are 32.6 million Catholics; the Federation of Evangelical Religious Entities of Spain considers that there are 1.7 million Evangelical Christians, 900,000 of whom are migrants; the Union of Islamic Communities of Spain puts the number of Muslims at 2.3 million; and the Federation of Jewish Communities of Spain groups together 40,000 Jews. With each of these confessions, the Spanish state has established "Cooperation Agreements" in compliance with Article 16.3 of the Constitution. In this context, and given that this paper focuses on Islam, this section presents some data on the social situation of Muslims in Spain. Far from being a homogeneous reality, the plurality of Spanish Muslims is manifested in their origins, their languages and the way they live their faith (Casa Árabe-IEAM 2009, p. 12; Moreras 2013, 2017; Planet Contreras 2013, p. 266). There are also Spaniards of Spanish origin who, for various reasons, profess the Muslim religion (Rosón and Tarrés 2013, pp. 249-64).
It is not easy to know the exact number of Muslims in Spain, as the question of religion does not appear in most official surveys. The reports of the Centro de Investigaciones Sociológicas (CIS) only ask whether the person is Catholic, of another denomination (without specifying which one), agnostic or atheist, and the last wave of the Population and Housing Census (2021) will not yield data until 2023, the previous one being from 2011. We can add the reports by Metroscopia, which in its fifth wave collects data from the 2011 Census, and those prepared by the Union of Islamic Communities of Spain (UCIDE) on Muslim citizens in Spain. The latter, as of 31 December 2021, are based on data from the General State Administration and UCIDE's own registers, and include as Muslims the descendants, up to the third generation, of those who came to Spain in the 1950s. The two studies are hardly comparable. Metroscopia's survey has Muslim immigrants as its study universe (excluding Muslims born in Spain) and is an opinion survey; UCIDE's report is based on official data about "all" Muslims. The results are, logically, diverse and are summarised below. --- The Metroscopia Report (Metroscopia 2011) Among the results of the study, three fundamental aspects stand out: religiosity, the desire to integrate and the positive evaluation of Spanish society. Regarding religiosity, 53% of respondents declared themselves to be practising Muslims and 12% non-practising Muslims. However, religion ranks fourth in importance in their lives (88%), behind family (99%), work (97%) and money (92%). The authors underline that respondents favour a secular state that does not give special treatment to any religion, and that their adherence to Islam seems more identitarian than a defence of religious orthodoxy. Regarding integration, 67% feel at ease in Spain, most speak Spanish well, and they say they do not encounter obstacles to the development of their religious beliefs.
Among those who mention an obstacle (10%), most point to the shortage of mosques. There is no mention of cemeteries. Finally, they value Spanish society and institutions and the treatment they generally receive from them. They value equal treatment in health care, equality between men and women and the general standard of living, and consider (93%) that Muslims and Christians make an effort to understand and respect each other. In general, they perceive little negative social reaction to the Muslim religion (perceived by 36%). Despite the good results, the authors question to what extent the data reflect reality or are mediated by what they call the influence of social desirability, i.e., what immigrants think they are expected to answer. --- The Report of the Union of Islamic Communities of Spain (UCIDE) (Observatorio 2022a) Produced in 2021 and much shorter (14 pages), this report does not collect opinions, but rather data from the records of the General State Administration and the Union of Islamic Communities of Spain itself. The collected data refer to the Muslim population in Spain, both immigrant and native, according to different variables such as place of origin and nationality, and the Muslim population in the different Autonomous Communities and, within them, by province. As mentioned above, it considers all descendants, up to the third generation, of those who came to Spain in the 1950s to be Muslims. The report begins by noting that "Maleki and Hanafi (Sunni) rites (sic) are the most widespread in Spain for the practice of Islamic worship" (p. 2). It also gives data on the number of Islamic entities in Spain: "52 Islamic confessional federations (including Comunidad Islámica de España, CIE), 1819 religious communities and 21 confessional associations" (p. 14).
These figures show the internal diversity of the group, although the Islamic Commission of Spain (CIE), the legal entity in charge of monitoring the cooperation agreement with the Spanish state, does not include all of them. On the question of cemeteries, the two reports diverge. In the Metroscopia report, the Muslims interviewed did not report any perceived lack of cemeteries, while UCIDE's report indicates that 95 per cent of communities do not have a cemetery or almacbara. It is not clear which entities are meant by 'Islamic communities': whether each of the entities registered in the Register of Religious Entities or those at the municipal level. This distinction is important because in a municipality there may be several registered entities, whereas public cemeteries exist at a municipal or supra-municipal level. In short, the reality of Muslims in Spain is more complex and plural than might be expected from the fact that the state recognises a single entity, the Islamic Commission of Spain, as having the legitimacy to represent the interests of all Muslims. --- Burials in Islam: Basic Funerary Practices and Legal Considerations --- Basic Funerary Beliefs and Practices It is important to know what the basic funeral practices of Muslims are, since some of them, depending on circumstances, may affect the mortuary policy of Western countries, in our case Spain. A brief review of history shows that these practices were not always the same and varied according to circumstances and cultures. This suggests that, as with mortuary practices in Western countries, Muslim funeral practices may also undergo changes. The Qur'an is not very explicit about how Muslim funerals should be conducted (Campo 2001, p. 263), but it gives indications of the custom of burial in direct contact with the earth. Further information is provided by the sunna, the body of Muhammad's sayings and deeds and his way of proceeding as attested by the ashab, his contemporaries and companions.
From the Qur'an and the sunna emerge a series of funerary guidelines for the Muslim world, which are summarised below. It is essential that the body is washed and buried as quickly as possible, preferably on the day of death, but no earlier than eight hours and no later than twenty-four hours after it. The corpse must be respected because it is to be returned on the Day of Resurrection (yaum al-Qiyama), so embalming and autopsy are not recommended unless strictly necessary, and cremation is prohibited. The corpse is washed by men if it is a male and by women if it is a female (Bennett 1994, p. 108). It is then wrapped from head to foot in white linen, in three pieces if male and five pieces if female (Sakr 1995, p. 62). The corpse is placed on a flat board (Lapidus 1996, p. 154; Sakr 1995, p. 64), in a slightly foreshortened position, with the eyes facing Mecca, the arms outstretched at the sides of the body and the feet pointing south. The characteristics of Muslim cemeteries are austerity and uniformity. The deceased are buried in absolute anonymity, their acquired social status disappearing, in order to emphasise the religious sense of the eschatological afterlife (Martínez Núñez 2011). Consistently, the style of tomb construction is characterised by simplicity and economy of cost. The deceased should be buried in the locality in which they lived or died. The burial consists of a hole in the ground that completely conceals the corpse (Ekpo and Is'haq 2016, p. 62). All Muslims, rich or poor, are buried following the same procedure. It is not permitted to bury the deceased in a coffin unless there is a requirement that must be met in a particular area or country (Ekpo and Is'haq 2016, pp. 61-62). Shared graves are only permitted in times of war or epidemic (Simpson 1995, p. 242). If there are multiple graves, the Muslim graves must be separate from those of non-Muslims.
However, historical documents and some current studies show that burial customs and forms of burial, while maintaining the position facing Mecca and with the body in the ground, have varied over time as well as between social groups and territories. In the case of Spain, historical texts, especially those of Al Tafri (10th century) and Yça de Segovia (15th century), reflect customs that differ from those of today. These texts state that there is no established rule and that whoever knows best should bathe the dead, that the man should bathe his wife and the woman her husband and young boys (Abboud-Haggar 1999, pp. 172-73; Echevarría 2020, pp. 100-1, note 67). On the other hand, the excavations of the Islamic cemeteries in Toledo and other cities of the Islamic period in the Iberian Peninsula show that overpopulation forced communities to bury several persons together in a single tomb (Echevarría 2013, p. 359). They show the reuse of some tombs, even of the Muslim rite, by simply covering the ground again with earth to fulfil the precept of resting on the ground (Echevarría 2020, p. 83). The position of the body, which was originally in strict lateral decubitus, also changed. The tombs of the wealthiest became more conspicuous, and the practice of customs originally rejected by jurists, such as visits to the cemetery (Christys 2009, p. 298; Davoudi 2022, pp. 232-33) and mourners at funerals (Echevarría 2020, pp. 84-85; Halevi 2007, p. 114), has been recorded. Even today, depending on the local practices of the various countries where Islam has become firmly established, differences can still be found. Thus, for example, for mourning, women in North Africa wear white, in the Middle East they wear black, and in Turkey they choose subdued colours (Jonker 1997, p. 160). Generally speaking, a large proportion of middle-aged Muslims living in Europe maintain traditional funeral practices and beliefs about the afterlife (Ahaddour et al.
2017; Kadrouch Outmany 2016, p. 104; Subirats 2014-2015, pp. 58-59). Even so, today there is a "growing individualisation in the religiosity of Muslim communities" (Moreras 2017, p. 32). This development entails redefining rituals that become more an identity question than a strictly religious one and act as active negotiation mechanisms with respect to European societies, as in the Spanish case analysed here. On the other hand, as we will see below, the Muslim legal tradition offers examples of flexibility and adaptability to new circumstances with regard to funerary needs. --- Legal Considerations: The Muslim Principle of Maslahah Mursalah ("Public Interest") The term maslahah designates in Islam that which is in the public interest or welfare. Strictly speaking, maslahah means "utility", but in general terms it denotes a "cause or source of something good or beneficial" (Khadduri 1991, p. 738; Opwis 2005, p. 182; Salvatore 2007; 2009, p. 194). Maslahah is the interest or benefit for which there is neither legitimate supporting evidence in the Islamic sacred sources nor a claim to the contrary ("unrestricted" utilities, i.e., utilities neither enjoined nor excluded by revelation) (Kamali 2003, p. 362; Opwis 2010, pp. 9-13; Vogel 2000, p. 372). Jurists use this concept to mean "general good" or "public interest" (Kayikci 2019, p. 6). It is the principle by which Allah is moved by considerations of utility and universal good (Pareja 1975, p. 226). To put the principle of maslahah mursalah into practice, three conditions are required: (1) It must be a real interest that benefits people or prevents them from harm. (2) It must be in the public interest of the nation as a whole or the majority, not serve personal interests or the interests of a particular group. (3) Provisions based on the general interest are not expressly regulated by the Qur'an, the sunna or the consensus of the scholars (ijma') (Haryati Ibrahim et al. 2022, p. 123). In any case, the maqasid shariah, or principles of shariah, must be respected: religion, life, intellect, lineage and property. Malik ibn Anas (d. 179 A.H./795 A.D.) is credited with being the first jurist to make decisions on this principle (Alias et al. 2021; Esposito 2003, p. 189; Khadduri 1991; Salvatore 2007, p. 156). In Spain it appears in the work of the above-mentioned medieval author Yça de Segovia (Yça Jabir n.d.). Some Qur'anic principles capture the essence of the concept of maslahah, such as those that point out that Allah's message to Muhammad is not intended to be a burden but to offer divine mercy to all humankind, regardless of any barriers (Qur'an 5:6). Only the Shafi'i school does not admit legal opinions based on maslahah, because it holds that there can be no maslahah outside the Shari'a (Kamali 2003, pp. 362-64; Esposito 2003, p. 195; Soufi 2021). Nowadays, this concept has become the subject of increasing interest among jurists who seek legal reforms to meet the needs of modern conditions in Islamic society.
Since, in any case, maslahah implies respecting the five principles of the Shari'a, it might seem that there is an incompatibility between this principle and the secularised democracies of the West. However, these democracies, from a secular and non-denominational perspective, respect the same values mentioned previously: religion, life, intellect, lineage and property. This means that between contemporary Muslim culture and Western culture a certain degree of "reasonableness" can be found (Mangini 2018, p. 20) in order to promote the common good. This degree of "reasonableness" or compatibility can also be seen, as shown below, in relation to the question of cemeteries. --- Funeral Legislation: The Spanish Legal Framework The approval of the current Spanish Constitution (Constitución Española 1978) in 1978 brought about a profound change in the organisation of the Spanish state. With freedom, justice, safety, equality, solidarity and pluralism as prime principles, the Constitution, in Chapter One of its Part VIII, "Territorial Organisation of the State", established that "The State is organised territorially into municipalities, provinces and Autonomous Communities that may be constituted. All these bodies shall enjoy self-government for the management of their respective interests" (Art. 137). Chapter Three of the same Part VIII states that "In the exercise of the right to self-government recognised in Article 2 of the Constitution, bordering provinces with common historic, cultural and economic characteristics, island territories and provinces with historic regional status may accede to self-government and form Autonomous Communities in accord with the provisions contained in this Title and in the respective Statutes" (Art. 143.1). Articles 148 and 149, respectively, fix the competences (powers) that the Autonomous Communities may assume and those that the state holds exclusively.
As a result, Spain is presently territorially decentralised and formed by 17 Autonomous Communities, each one with its Statute (the agreement that establishes the Community's powers), its Government and its Parliament with legislative power. There are also two Autonomous Cities (Ceuta and Melilla, in the north of Africa) where the majority of the population is Muslim. Equally, the Spanish Constitution of 1978 (Constitución Española 1978) guarantees the ideological, religious and worship freedom of individuals and communities, declares the non-confessional nature of the state (Art. 16.1), and establishes "relations of cooperation with the Catholic Church and other confessions" (Art. 16.3). The state has established "cooperation agreements" with some religious denominations (Islam, Judaism and Evangelical Christianity). Law 26/1992 of 10 November (Ley 26/1992), approving the State Cooperation Agreement with the Islamic Commission of Spain, establishes that "Islamic Communities belonging to the Islamic Commission of Spain are recognised as having the right to the concession of plots reserved for Islamic burials in municipal cemeteries, as well as the right to own Islamic cemeteries" (Art. 2.5). Currently, the Autonomous Communities are the bodies that have competence in matters related to the implementation of agreements with religious denominations. Within each Autonomous Community, it is the local councils that have competence in the area of cemeteries. This means that they are obliged to guarantee that burials in their cemeteries are carried out without discrimination on grounds of religion or any other grounds (Article 1 of Ley 49/1978, of 3 November; Article 2.b of the Organic Law on Religious Freedom, Ley Orgánica 7/1980). This is why the territorial associations and federations, Islamic in this case, choose to establish agreements with the Autonomous Community or the municipality in which they reside.
However, the Organic Law on Religious Freedom of 1980 imposes some limits, such as the protection of the rights of others in the exercise of their public freedoms and fundamental rights, or the safeguarding of security and health (Article 3.1). The application of the regulation may present some difficulties, mainly confined to three areas: health, the availability of space in cemeteries, and the "spatial arrangement" within cemeteries, given that the Muslim tradition advocates the separation of Muslim plots from non-Muslim plots. --- Health National and regional legislation establishes that burial in a coffin is compulsory, which is contrary to the traditional Muslim prescription of burial in contact with the ground. Only in the Autonomous Cities of Ceuta and Melilla, where the majority of the population is Muslim, did the regulations allow burial directly in the ground without a coffin. Lately this has been changing. Andalusia updated its Regulation on Mortuary Health Police in 2001 (Decreto 95/2001, of 3 April) to accommodate religious specificities (Moreras and Tarrés 2013, p. 47), requiring a coffin to carry the corpse but exempting its use in the burial itself, as long as the cause of death does not represent a health risk (Art. 21.4). Shortly before the advent of the COVID-19 pandemic, Valencia, Castilla y León (Núñez 2019; Santiago 2019) and Galicia (Álvarez 2019) joined this list of communities that allow coffinless burial. In any case, even before the pandemic, Muslims had already adapted to the regulations prohibiting burial without a coffin by placing soil inside the coffin to allow the corpse to be in contact with the earth. This was seen as "a formula of rapprochement of positions" (Comisión Islámica de España 2019).
This formula became mandatory because of the COVID-19 pandemic, which forced a drastic change in funeral practices worldwide, affecting all religions, which made a great effort to adapt (De León 2020). The technical document "Procedure for the management of dead bodies of COVID-19 cases", published by the Spanish Ministry of Health on 26 May 2020, stated that any burial of a person who died from COVID-19 should be in a coffin. In fact, the president of the Islamic Commission of Spain himself, Riay Tatary, and his wife, who died of COVID-19 in April 2020, were buried in coffins (Cadelo 2021). --- Availability According to the Observatorio Andalusí and the Union of Islamic Communities of Spain, the Muslim population in Spain is, with all reservation, around 2,250,000 (Observatorio 2022a). For that population there are, according to a report by the Islamic Commission of Spain, two private Muslim cemeteries and thirty-five plots for Muslims in municipal cemeteries (Comisión Islámica de España 2020), some of which have already reached their maximum capacity (Vega Medina 2023). On the other hand, the differences between Autonomous Communities are notable: while some still lack plots for Muslim burials (Observatorio 2022b, p. 16; 2022c, p. 27), others are increasing the number of burials (Comisión Asesora ADOS 2023). Most cemeteries require the person to be registered as a local resident in order to access their services. This makes it difficult to find alternatives for those who do not have space in the cemetery of their place of registration. The lack of burial space affects people of all faiths and is one of the biggest challenges facing Spanish cemeteries today. On the other hand, the general tendency of all Spanish Autonomous Administrations is not to create private confessional cemeteries with public funds (Llaquet 2012, p. 79).
In these circumstances, repatriation of the body is still very common among Spanish Muslims, although those who were born in Spain generally choose to be buried in Spain. Repatriation is also very common in other European countries: in the Netherlands the repatriation rate is approximately 90%, in France 80% and in Norway 40-50% (Ahaddour et al. 2017, 2019; Breemer 2021, p. 20; Kadrouch Outmany 2016, p. 104). The reasons for repatriation are varied, including funeral legislation, financial constraints, lack of knowledge of existing possibilities and a sense of belonging to the family and country of origin. Since the 1990s, Spanish Muslim communities have devoted more effort to ensuring the repatriation of their deceased than to obtaining reserved plots. Since the COVID-19 pandemic, this situation has been reversed, forcing municipalities to seek urgent alternatives for the dignified burial of their Muslim fellow citizens (Moreras 2022, p. 79). Indeed, the lack of burial sites is the biggest complaint voiced by Muslim communities (Comisión Islámica de España 2019; Consejo Consultivo de la Unión de Comunidades Islámicas de España 2014; Etxeberría et al. 2007, pp. 168-72; Europa Press Sociedad 2021; Salguero 2021). In any case, in relation to the availability of space for Muslim burials, the situation in Spain is similar to that in other European countries (Ahaddour and Broeckaert 2017; Arab News 2020; Breemer 2021; Gilliat-Ray 2015; Savio 2020; Selby 2014). Despite this, the way in which each country deals with the issue differs depending on the agreements (or lack thereof) with the respective Muslim communities. In this regard, the application of the above-mentioned legal principle of maslahah mursalah has proven to be somewhat effective.
In a densely populated Muslim country such as Malaysia, multi-level construction was permitted in 2015 through a Fatwa. In particular, in the Federal Territory of Kuala Lumpur a Fatwa was issued in 2018 recommending, in both rural villages and densely populated cities, the implementation of multi-level burials to maximise the use of cemeteries, with the condition of preserving the sanctity and honour of the dead (Haryati Ibrahim et al. 2022). This example in a Muslim country reinforces the idea that in countries where Muslim communities are a minority, the principle of maslahah mursalah could be used to solve similar problems (Mawardi 2020). In this sense, although it is not specified that it be by application of the maslahah mursalah, in some cemeteries in Spain, such as those in Valencia and Mallorca, it has been decided to build Muslim burials in the ground, downwards, one on top of the other (Alba 2021). This shows that Muslim communities have made efforts to adapt to the new circumstances created by spatial problems (reduction of burial spaces) or health problems (the COVID-19 pandemic), which have forced them to modify some of their burial practices to some extent. In these adaptation efforts, concern for the common good beyond the religious beliefs of individuals has been fundamental. --- Spatial Arrangement Applying Edward W. Soja's concept of "Spatial Justice" (Soja 1996, 1997, 2000, 2010) to funerary spaces, one might wonder whether it would not also be possible to design in the near future an inclusive type of cemetery, one which would not show great social differences or differences based on economic, ideological or religious motives. One might ask to what extent the parcelling of funerary spaces helps inclusivity or underlines exclusivity in a society that upholds the principle of equality.
From this perspective, if a society with multicultural ghettos is clearly not inclusive compared to one whose spaces are not separated but shared by members of the whole society, it is logical to ask whether something similar might not happen with the spaces shared by the deceased, or rather, by the living dedicated to the deceased. Cemeteries that are heavily partitioned and clearly differentiated can generate a sensation of separation between the faithful and the unfaithful, between "ours" and "the others". However, the realisation of spatial justice can take various forms, and separating plots in a cemetery on religious grounds can also be seen as an example of respect for diversity and as an attempt to integrate those first-generation Muslims, older people from Muslim countries, who generally opt for repatriation of corpses. Equally, it can be seen as a way of integrating those who were forced to migrate for political reasons and asylum claims, and who cannot return. In any case, it seems prudent to avoid "severe segregation"; thus, to establish distinctions between Muslim plots and the other groupings, the use of ornamental and vegetal elements is recommended (Moreras and Tarrés 2013, p. 43). Moreover, parcelling may, in the medium term, become meaningless if, as may happen, second- and third-generation Muslims adopt and assimilate majority practices (Ansari 2007; Balkan 2018; Kapletek 2017). All of this is not incompatible with those proposals that posit the desirability of certain key principles for all cemetery systems: dignified disposition of the body of the deceased, democratic accountability, equal access to funeral services regardless of income, freedom of religious expression and environmental sustainability (Rugg 2022).
In any case, the three aspects presented (health, availability and spatial arrangement), although open to debate and to treatment from different angles, do not seem to pose serious problems of coexistence among citizens of different creeds. --- Discussion At the beginning of this article, we asked ourselves about the positive and problematic aspects of the Spanish model of secularism. We discuss here some issues related to this question; questions that the writing of this article has raised for its authors, and which remain open for future debates and work. First. Spain's experience seems to prove Marramao right when he rejects the axiom of the incommensurability of cultures and proposes to highlight the meeting points between different positions (Marramao 1996). However, although it seems to have achieved a peaceful and respectful integration of religious plurality, the model of the Spanish state in its relationship with religious denominations is not easy to classify within the models pointed out by Berlinerblau, Stepan and Bhargava (separation, laïcité and accommodation). The Spanish state itself, departing from the Spanish Constitution and the agreements established with various confessions, including Muslims, defines this relationship as one of "cooperation". Perhaps it can therefore be framed within what could be called a "cooperation model". The very term "cooperation", at least in the Spanish language, seems more constructive and positive than "accommodation": to cooperate implies "to work together with another or others to achieve a common goal" (Real Academia Española 2022a), while to accommodate implies "to harmonise, to adjust to a norm, to conform..." (Real Academia Española 2022b). This nuance has also been pointed out in some studies critical of the merits of the model and of the widespread use of the term accommodation (Barras et al. 2018; Solanes Corella 2017). Second.
The organisation of the Spanish state combines hierarchical aspects (the constitutional umbrella and the exclusive competences of the state) with territorial decentralisation (competences assumed by each Autonomous Community). The state's relationship with Islamic communities thus brings into contact a complex, hierarchical organisation with a multiplicity of communities that are not only not hierarchically organised but often not even related to one another. The state establishes an agreement with a single interlocutor (the Islamic Commission of Spain), which may make it difficult to reflect all the sensitivities of such a plural and heterogeneous group. Moreover, it is an agreement that the state administration is not responsible for implementing, since competences in this area correspond to the Autonomous Communities and municipalities. This fact can make it difficult to implement agreements in the same way throughout the country. Paradoxically, however, this organisation brings local religious communities closer to the decision-making centres, enabling a more fluid dialogue between communities and the administration. Given that there are different forms of state organisation, to what extent are the ways of relating to the different confessions conditioned by the centralisation or decentralisation of the state, i.e., its territorial organisation? Although it seems to work in Spain, is decentralisation always accompanied by an improvement in state-religious community relations? Third. Spain has opted for a model of dialogue with the different religious denominations. This option for dialogue and cooperation has been accompanied by a positive attitude on the part of the different religions. In our case, Muslim communities have shown their willingness to adapt to new circumstances that have affected their burial practices (coffin burial and problems of burial space).
This adaptation has been consistent with the legal tradition of the principle of maslahah mursalah, which, for the sake of the common good, makes it possible to deal with new situations not contemplated in the foundational texts of Islam. This reinforces the conviction that there can be common ground between secular values and religious beliefs. At the same time, it makes us rethink some of the criticisms of Habermas's proposal of "translating" religious questions into "secular" language (Habermas 2015). Whether the language of translation is "secular" (Habermas 2015), that of "human rights" (Martin 2013) or that of "liberal-democratic principles" (Bader 2017), would it be possible to communicate among those who are different in the absence of a minimal common language? In the language of Garzón Valdés (1997) and Marramao (1996), would communication be possible without an untouchable common place of the rootless? Fourth. In spite of the above, one of the aspects that we consider potentially problematic is the fact that the only subjects of the right to burial according to the Muslim rite are the Islamic communities integrated in the Islamic Commission of Spain. This could pose problems for Muslims belonging to a community that is not part of the Commission. Although it is not publicly stated, it would not be unreasonable to think that the internal plurality existing within Muslim groups could generate some conflicts, and not only within communities but even concerning individuals who might wish to break with their religious tradition or simply change their burial practices without renouncing their faith. The need to respect the principles, values and objectives of a secular society (Bhargava, Garzón Valdés, Martin, etc.) leads us to point out that any Muslim (or any believer of any denomination) should be able to choose the way he or she wishes to be treated when dying. Fifth.
The agreements established by the Spanish state, which have no equivalent in other European countries in the region, despite providing guarantees, may raise doubts or objections as to the possibility of their generalisation and, therefore, their sustainability. In that sense, for example, should agreements be made with all religious denominations, or only with the most representative or established ones? Should the agreements be equal, or is it better not to make any agreements at all? On the other hand, and continuing in the field of limits, can the right that is recognised for a religious community be extended to an individual who is not part of it? For example, if Muslims are allowed to be buried without a coffin, should any citizen not have the same right regardless of their beliefs? Sixth. In addressing the diversity of funerary rituals that affect the management of shared public spaces, this article has underlined the importance of what authors such as Lefebvre and Soja have called "spatial justice". Given the emotional and identity value that funeral rites have for the living, we understand that this concept is also important when rethinking the public space designed for the dead. If the shared space of the living must be fair and inclusive, should the shared space of the dead not be fair and inclusive as well? --- Conclusions As a first and general conclusion, it seems that there is no major incompatibility between Muslim funerary practices and Spanish law, nor between those practices and Spanish society. On the contrary, despite the difficulties in its implementation due to the heterogeneity of the Muslim communities, the Spanish model for managing the religious phenomenon, which calls itself "cooperation", shows that beyond the law, the attitude of the participating groups matters.
It also shows that secularism, practised as dialogue, can be an opportunity for the integration of religious diversity rather than a system of confrontation between the state and cultural or religious groups. Regarding funerary rites, the implementation of the Spanish state's agreement with Muslim communities has encountered difficulties of various kinds, mainly those related to health and available space. However, none of these has led to serious problems of social coexistence. Part of these difficulties can be understood in the context of a wider issue of burial space, which is widespread throughout the country and affects all persons and denominations. Muslim communities have shown a capacity to adapt to these difficulties. Moreover, from their historical legal tradition we highlight the principle of maslahah mursalah, which has allowed Muslim communities throughout history to adapt to new situations not specifically envisaged in the Qur'an or the sunna, while respecting the fundamental principles of Islam. This legislative adaptability in pursuit of the common good has also been effective in its treatment of certain funerary practices. This shows that when the secular framework of the state is open and positively neutral, and religious communities show adaptability in pursuit of the common good, integration and coexistence are possible. --- Data Availability Statement: Not applicable. --- Conflicts of Interest: The authors declare no conflict of interest. --- Abstract Death is not only a universal biological fact; for the individual it is the "event horizon". This fact has important symbolic meanings and complex social consequences. Any society, secular or not, must manage this reality. What response is given to the question of the religious phenomenon in general, and to funerary practices in particular, in a secular society in which individuals with different religious sensibilities coexist?
This article aims to analyse the response given by the Spanish state to the questions raised regarding burials by Muslim communities, the most widespread minority group in Spain as a whole. This response, which would be framed within what could be called a "cooperation model", has encountered some difficulties as a result of the territorial organisation of the Spanish state. Despite this, the willingness to cooperate on the part of both the administrations that make up the state and the Islamic communities has made a situation of stable coexistence possible.
What happens when the party moves home? The effect of the COVID-19 pandemic on U.S. college student alcohol consumption as a function of legal drinking status using longitudinal data

themselves moving back home. In the U.S. alone, 2.2 million adults between the ages of 18 and 25 moved back in with either a parent or grandparent through March and April [4]. The effects of this major disruption to the Spring semester were reflected in many social media posts, online forums, and news outlets. One especially telling article featured in the New York Times titled "'I'm in High School Again': Virus Sends College Students Home to Parents, and Their Rules" documented the challenge of adjusting not only to online classes but also to the unexpected and abrupt transition to living back home [5]. Due to the pandemic, the conclusion of the Spring 2020 semester was in stark contrast to previous years. During a traditional academic year, the conclusion of the Spring semester is associated with enhanced positive mood states [6] as well as a significant increase in alcohol consumption [7]. This is consistent with research demonstrating increased alcohol consumption by college students during positive mood states and periods of celebration [8,9]. However, because of the pandemic, many end of semester celebratory events (Spring Break trips, embedded program trips, graduations, etc.) were either cancelled or moved online. Given the widespread impact of the pandemic on the Spring semester, it is yet unclear how the pandemic related changes may have impacted college student alcohol consumption. Research on the impact of the pandemic on alcohol consumption has produced conflicting reports. While product research has shown an increase in alcohol sales [10], there is also data showing a higher number of respondents reporting either no change or a decrease in consumption (vs. an increase) [11].
Further still, research has found an increase in alcohol consumption by commuter college students (i.e., students less likely to experience a change in living situation as a result of the pandemic [12]). The college environment has been notoriously associated with a culture that endorses alcohol consumption. Research conducted prior to the pandemic has found that greater exposure to college environmental factors (e.g., living on-campus, greater amount of time spent at the university) correlates with increased drinking frequency [13]. The impact of the college environment on increased alcohol consumption is further supported by data showing that college students tend to consume more alcohol than their non-college attending peers [14]. Students who were living in dorms, near college downtown areas, and other campus related accommodations for the Spring 2020 semester may have had to severely adjust their drinking habits when they suddenly found themselves back home. It is worth noting that research has found an impact of legal drinking status on alcohol consumption using college student samples. Students above the legal drinking age (21 years old in the U.S.) demonstrate patterns of drinking that are distinct from their underage peers. Some research has demonstrated that younger students report more frequent alcohol consumption compared to older students [15], however, this pattern of drinking may be dependent on a number of different factors (gender, setting, type of event, etc.). For example, students above the age of 21 have been shown to be more likely to engage in heavy alcohol consumption before going to, and while at, bars [15]. This indicates that students over the age of 21 who were living on campus or near downtown bar areas may be especially impacted by pandemic-related closures. Specifically, they have lost access to preferred establishments due to bar closures and/or having to relocate. 
Because of the complex relationship between legal drinking status, drinking settings, and exposure to college environmental factors, the impact of pandemic-related changes on college students' alcohol consumption is a topic worthy of investigation. This is especially the case as many health experts anticipate additional waves of the virus [16] and the effects may continue into the academic year. The current study sought to investigate the impact of the COVID-19 pandemic on the alcohol consumption habits of college students as a function of legal drinking age. The current study utilized longitudinal data from a large land grant university located in the Northeast U.S. to compare alcohol consumption data from past Spring cohorts to actively enrolled Spring 2020 students across two time points (at the beginning and end of the semester). Analyses were conducted using students who reported living either on-campus or near the downtown area at the beginning of the semester. --- METHODS --- Design Data for this longitudinal cohort study were collected at a large, northeastern U.S. university during the Spring 2019 and 2020 semesters via an online survey (Qualtrics, Provo, UT) as part of a larger ongoing research project examining college student health behaviors and outcomes. Baseline data for the independent cohorts were collected at the start of the semester (~late January) after the add/drop deadline, and follow-up data were collected prior to the end of semester exams (~mid-to-late April). Data collected in the 2019 and 2020 cohorts were considered to be collected during "normal" and "COVID-19" circumstances, respectively. COVID-19 cohort participants experienced a shift to online instruction after Penn State issued a university wide shut down immediately following the 2020 Spring break in mid-March. The Pennsylvania State University Institutional Review Board approved this study.
--- Participants and procedures Undergraduates enrolled in general health and wellness classes were recruited to complete the baseline survey. Data were password protected, and only accessible to research team members. An informed consent statement was presented to students upon opening the survey link. Cookies were used to prevent multiple submissions. --- Measures --- Demographic characteristics Participants in both cohorts self-reported their age, gender, race/ethnicity, sexual orientation, year of study, and living situation at baseline. The Spring 2020 "COVID-19" cohort also reported their living situation at follow-up; however, this measurement focused on specifying the type of housing they resided in (house, single apartment/condo complex, etc.) as well as the zip code. --- Alcohol consumption Alcohol consumption was assessed using the Daily Drinking Questionnaire (DDQ) [17]. The DDQ assesses the quantity and frequency of alcohol use by asking students to estimate the typical number of drinks consumed on each day of the week, averaged over the previous three months. Baseline DDQs for both cohorts were framed for 3 months, while the 2019 and 2020 follow-up DDQs were framed for 3 months and 1 month, respectively. The 2020 framing of the DDQ was altered to exclusively encompass the period following the pandemic-related closure of the university. The DDQ was summed to compute a variable indicative of typical total weekly alcohol consumption (in standard drinks), which was used as the dependent variable in analyses. --- Statistical analyses Analyses were conducted using SPSS Version 26.0 (IBM, Armonk, NY). Data were analyzed using a 2 (cohort group: COVID-19 vs. normal) × 2 (age group: over 21 vs. under 21) × 2 (time: beginning of the semester vs. end of the semester) mixed model ANOVA, with cohort group and age group as between-subjects factors and time as a within-subjects factor.
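Because the within-subjects factor (time) has only two levels, the key three-way cohort × age × time interaction in this mixed ANOVA is statistically equivalent to a cohort × age interaction in a between-subjects ANOVA computed on change scores (end-of-semester minus beginning-of-semester drinks). A minimal sketch of that equivalence, using entirely synthetic data (the cell means below are illustrative assumptions, not the study's data):

```python
# Hedged sketch, NOT the authors' SPSS analysis: with a two-level
# within-subjects factor (time), the three-way cohort x age x time
# interaction of the mixed ANOVA equals the cohort x age interaction in a
# between-subjects ANOVA on change scores (end minus beginning of semester).
# All numbers below are illustrative assumptions, not the study's data.
import random

random.seed(1)

def simulate_cell(mean_change, n=30, sd=3.0):
    """Draw n synthetic change scores (weekly drinks) around a cell mean."""
    return [random.gauss(mean_change, sd) for _ in range(n)]

# Assumed mean changes loosely echoing the direction of the reported effects:
cells = {
    ("normal", "over21"):  simulate_cell(+1.9),
    ("normal", "under21"): simulate_cell(-1.0),
    ("covid",  "over21"):  simulate_cell(-5.2),
    ("covid",  "under21"): simulate_cell(-1.8),
}

def interaction_F(cells):
    """Balanced 2x2 between-subjects ANOVA; F for the A x B interaction."""
    n = len(next(iter(cells.values())))
    grand = sum(sum(v) for v in cells.values()) / (4 * n)
    cm = {k: sum(v) / n for k, v in cells.items()}
    a_mean = {a: (cm[(a, "over21")] + cm[(a, "under21")]) / 2
              for a in ("normal", "covid")}
    b_mean = {b: (cm[("normal", b)] + cm[("covid", b)]) / 2
              for b in ("over21", "under21")}
    # Interaction sum of squares: each cell's deviation from the additive model
    ss_ab = n * sum((cm[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
                    for a in ("normal", "covid") for b in ("over21", "under21"))
    # Within-cell (error) sum of squares
    ss_err = sum((x - cm[k]) ** 2 for k, v in cells.items() for x in v)
    df_err = 4 * (n - 1)
    return ss_ab / (ss_err / df_err), df_err

F_interaction, df_err = interaction_F(cells)
# The critical F(1, 116) at alpha = .05 is about 3.92
print(f"F(1, {df_err}) for cohort x age on change scores = {F_interaction:.1f}")
```

The change-score reformulation is a standard identity for two-level repeated measures; with balanced cells the hand-computed F for the cohort × age term matches what a mixed-model package would report for the three-way interaction.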
The main model was analyzed using only students reporting living either on-campus or in the downtown area at time 1. An alternative model was analyzed using only off-campus students to confirm the unique effect on students living near or on-campus. --- RESULTS --- Participant characteristics The majority of participants were women, non-Hispanic White, and heterosexual. Participant characteristics are displayed in Table 1 and attrition rates are displayed in Table 2. --- Main model There was a significant 2 (cohort group: COVID-19 vs. normal) × 2 (age group: above 21 vs. under 21) × 2 (time: beginning of the semester vs. end of the semester) interaction, F(1, 227) = 14.198, p < .001, η² = 0.059. There was a significant main effect for all three factors (cohort group, age, and time), as well as a significant two-way interaction between time and cohort (Table 3). At the onset of a typical Spring semester, students over the age of 21 consumed more alcohol (M = 10.74, SD = 7.79) than their underage peers (M = 5.09, SD = 6.82), with the discrepancy increasing slightly at the end of the semester (M = 12.63, SD = 8.97; M = 4.13, SD = 5.13, respectively). At the onset of the Spring 2020 semester, students over the legal age consumed more alcohol (M = 10.37, SD = 7.08) compared to students under 21 (M = 4.58, SD = 6.29); however, by the conclusion of the semester, there was a marked decrease in the discrepancy between the two age groups (M = 5.20, SD = 6.15; M = 2.75, SD = 6.60, respectively). This change was largely driven by a drastic drop in consumption by students of legal drinking age (Fig. 1). It should be noted that when gender was included as an additional factor the overall model was not significant, F(1, 223) = 0.46, p = .50; there was not a significant main effect for gender (p = .44); and gender did not interact with age group (p = .67) or cohort (p = .19), but did interact with time (p = .02).
Given the lack of strong statistical support that gender played an important role in the relationships of interest, it was excluded from the main model. It should be noted that within the factorial ANOVA model that included gender, the three-way interaction of cohort, age, and time remained significant (p < .001). --- Model assessing off-campus students In order to fully investigate the impact of living situation, the model was computed using only students who reported living off campus outside of the downtown area. The model was non-significant, F(1, 62) = 0.487, p = .488, providing further support for the enhanced impact of the pandemic on students living on-campus or near campus within the downtown area. --- DISCUSSION The current study investigated the impact of the global pandemic on alcohol consumption as a function of legal drinking status. Longitudinal data were used to compare Spring 2020 students to Spring 2019 students, with data collected at the beginning and end of the semester. Analyses specifically focused on students who reported living on-campus or near campus in the downtown area. Results found a significant three-way interaction between all factors (cohort group, age group, and time of the semester). During the pre-pandemic Spring semester, students over 21 years (i.e., of legal drinking age) consumed more alcohol than their underage counterparts, with the difference between the groups increasing slightly by the end of the semester. The start of the Spring 2020 semester was similar, with students over the age of 21 consuming more alcohol than underage students. However, by the end of the Spring 2020 semester (which was severely impacted by the pandemic), the alcohol consumption of students over the age of 21 dropped to a level that was similar to their underage peers. There are several potential explanations for this pattern of results. One is that students lost access to their preferred drinking establishments.
During March and April 2020, 43 state governors, including the (Pennsylvania) governor, issued stay-at-home orders which included the shut-down of non-essential businesses, in this case bars and restaurants [18]. It is common for large universities, like the one in the current study, to have downtown areas with bars that are frequented by students. Previous research has found that students tend to consume heavily at these establishments, especially students over the legal drinking age [15]. Therefore, the loss of access to these traditional consummatory settings may have played a role in the change in alcohol consumption behavior. Students over the legal drinking age could have purchased alcohol to maintain pre-pandemic consumption habits. However, given the marked decrease in reported consumption, it appears that this was not the case. This leads to a second potential explanation: change in living situation. Almost all on-campus students had to vacate university affiliated housing, with national data suggesting that millions of students moved back in with parents or grandparents as a result [4]. Many students may have found that their college-level drinking habits were not endorsed within their family homes and adjusted accordingly. There are also additional explanations outside of the scope of the current study, such as a decrease in peer pressure [19], changes in caloric intake and physical activity [20], and a loss of financial opportunities [21]. These, and others, should be explored in future research. It is worth noting that a potential implication of this effect is that living with family and away from collegiate drinking establishments (as compared to living on campus) during the academic year may serve as a protective factor against overconsumption. This may be an unanticipated benefit of the current global pandemic and is in line with previous research that has identified living at home as a protective factor against dangerous alcohol-related behaviors [22].
Additional research has found that college students living in their familial home report a greater influence of their parents' beliefs on their decisions in general [23]. Research has also found that the protective effects of parental involvement extend to many risk-related behaviors associated with the collegiate environment [24]. Further support for the protective effects of parental involvement can be found in research demonstrating that these effects begin before the transition to college and may persist even if the child leaves home for college [25]. Identifying protective factors against harmful drinking in college student populations is especially important given that college students are a high-risk demographic for dangerous overconsumption [26]. However, in continuing to look toward the future, researchers and college administrators should closely monitor student alcohol consumption behaviors when full student bodies are welcomed back to campus and downtown drinking establishments resume business. Although the current study did find a decrease in alcohol consumption among legal drinkers compared to past Spring semesters, recent research has found an increase in alcohol consumption among commuter college students [12]. This may be due to demographic differences between campuses, in particular student living situations. Students living in campus affiliated accommodation or near college campuses tend to consume more alcohol compared to commuter students [27]. It appears as though alcohol consumption increased for students who were not required to relocate [12], while students living in campus-affiliated accommodation had to adjust their drinking habits upon moving back home. A limitation of the current study is that current living situation was only measured at the first time point, although at follow-up (i.e., time 2) the COVID cohort was asked to report on their housing type (house, single apartment/condo complex, etc.). Future studies should assess living situation at each time point.
This would allow for additional investigation into the impact of living situation changes throughout the academic year on alcohol consumption. An additional limitation that warrants discussion concerns the influence of gender within the current study. Longstanding research spanning cultures has consistently demonstrated gender differences in alcohol consumption, with men reporting higher drinking frequencies and quantities compared to women [29]. However, the current study did not find statistical evidence for an impact of gender on the interaction between cohort, time, and age. One potential explanation for this pattern of results is that the current study did not have enough statistical power to detect the effect within the four-factor model. Another explanation could be that the impact of COVID-19 on alcohol consumption was so severe for both men and women that existing gender differences were no longer apparent. Future research should explore further the impact of COVID-19 on the alcohol consumption habits of male versus female college students. Related to this limitation are the patterns noted in attrition rates (see Table 2). The attrition rate for men (66.3%) was higher than that of women (51.5%) within the Spring 2020 cohort (i.e., the COVID affected group). It should be noted, however, that these attrition rates were lower than the attrition rates for men (74.4%) and women (68.5%) in the Spring 2019 cohort. The lower attrition rate among the COVID group may be due to students having more time to complete the follow-up survey due to changes in academic, employment, and leisure activities as a result of the pandemic. However, differences in attrition rates based on age were negligible within each cohort. Attrition represents one of the major concerns in conducting longitudinal studies as it can potentially impact the generalizability of the study.
Although there are slight differences reported by gender and cohort, it is common for attrition rates between 30% and 70% to be reported in longitudinal studies [30]. However, results should still be interpreted with caution as the differences in attrition rates may imply a bias in relation to gender and cohort. Additional research is needed to further assess attrition in the time of COVID-19 related data collection. In conclusion, the findings suggest that the effect of the pandemic on U.S. college students' alcohol consumption may depend on a number of factors including legal drinking status and living situation. As the COVID-19 pandemic continues to unfold, researchers should continue to monitor the impact it has on college students. This is especially the case with the varied approaches to course delivery (virtual, face-to-face with social distancing, hybrid, etc.) and anticipated future waves of the virus [16]. --- Data Availability Deidentified data from this study are not available in a public archive. Deidentified data from this study will be made available by emailing the corresponding author. Analytic Code Availability: There is no analytic code associated with this study. Materials Availability: Materials used to conduct the study are not publicly available. --- Conflicts of Interest: The authors declare that they have no conflict of interest. Human Rights: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The Penn State University Institutional Review Board approved this study. This article does not contain any studies with animals performed by any of the authors. Informed Consent: Informed consent was obtained from all participants included in the study. --- Transparency Statement Study Registration: The study was not formally registered.
Analytic Plan Preregistration: The analysis plan was not formally preregistered. | Colleges, local public servants, and health officials should closely monitor student alcohol consumption as traditional establishments and campuses change policies due to the pandemic (e.g., closing and re-opening). |
Background The purpose of informed consent is to protect patients' autonomy. In the West, under the principle of informed consent, doctors are required to provide patients with adequate information and respect their decisions [1,2]. The patient's family members may take part in the treatment process. However, on the doctor's side, the patient's autonomy and rights are always the primary consideration. They will not disclose a patient's condition to family members without the patient's consent. It is even less likely for doctors to bypass patients and prioritize informing the family members of their condition. Only in special circumstances, such as when a patient loses the ability to make decisions, will the doctors then inform the family members and act on their decisions [3,4]. In China, however, it is customary for doctors to inform both the patient and their families of the patient's condition, even if the patient is capable of making decisions. This usually happens when the family members accompany the patient or when the patient's condition is complex and severe [5,6]. In some special cases, doctors will even give priority to informing family members of the condition and letting them communicate with the patient [7,8]. In other cases, family members believe that disclosing the true condition will cause significant psychological harm to the patient, and they may request that doctors cooperate with the family in "deceiving" the patient. Most of the time, Chinese doctors will accede to the family's request [9][10][11]. Some scholars claim that family members play such an important role in contemporary Chinese medical practice because of the influence of the traditional Confucian culture [12][13][14]. Confucian culture tends to view the family as the basic unit of society, as opposed to the current Western society, which views the individual as the basic unit of society. Major decisions regarding personal well-being are often made collectively by family members. 
This family-oriented ideology has shaped a unique "doctor-family-patient" model of the physician-patient relationship, in which physicians are no longer dealing solely with the patient, but also with the patient's family. Countries and regions influenced by Confucian culture, such as Japan, South Korea, and Hong Kong, have similar situations in which family members participate in medical decision-making [15][16][17][18]. Additionally, scholars have analyzed this physician-patient relationship model from economic, medical insurance policy, and educational perspectives to explain its realistic basis [5,19]. Research on this "doctor-family-patient" model of the physician-patient relationship has primarily focused on three areas: (1) investigating the attitudes and reactions of patients and their families toward family involvement in informed consent and medical decision-making [10,19,20]; (2) analyzing the underlying reasons for this model [5,12,19,21]; and (3) evaluating the advantages and disadvantages of this model [22][23][24][25]. However, existing research lacks empirical studies on the physician population, neglecting the views of physicians on this doctor-patient relationship model and the potential challenges it may pose for them. On the one hand, Chinese physicians are educated and trained to protect the patient's privacy, fulfill their duty of informed consent, and respect the patient's autonomy, with such requirements reflected in relevant laws [26,27]. On the other hand, Chinese physicians appear to comprehend and accept the model of the physician-patient relationship in which family members play an important role in informed consent. This implies that Chinese physicians may encounter ethical dilemmas when conducting informed consent. As mentioned above, in some cases, family members may request that physicians conceal the patient's condition from the patient. 
Although doctors are allowed to inform family members rather than patients of their conditions in some special situations [26,27], there is no explicit legal provision allowing doctors to deliberately conceal the patient's condition from the patient at the request of family members. Therefore, what should a doctor do when the family members' demands conflict with the patient's right to be fully informed? In addition, when a patient explicitly states that they do not want their family members to be involved, how should a doctor decide between the patient's privacy and the family members' requests for the patient's information? And when a family member's decision does not seem to fit the patient's best interests, what should a doctor do? These ethical dilemmas, and the challenges they cause for physicians, are the focus of our research. As mentioned above, Chinese physicians are required to conduct informed consent to respect the patient's autonomy. However, such requirements emerged gradually at the turn of the 21st century under relevant legal and ethical principles. At the legal level, the provision of informed consent in surgical procedures was first included in the Medical Practitioners Law of the People's Republic of China in 1998 [28]. In terms of medical ethics education, mainstream textbooks gradually began to include informed consent as an ethical principle in the early 21st century. Medical ethics education is a compulsory course in all medical universities in China, usually scheduled for undergraduate medical students in their junior or senior year. Thus, the moral distress felt by doctors who received medical education and entered clinical work after 2000, i.e., those in the under-35 age group, may be more pronounced than that felt by senior doctors. 
At the same time, these young doctors will become the backbone of the medical field in a decade or two, which means that their attitudes toward the "doctor-family-patient" relationship model will reflect the attitude of China's medical community toward this model in the next few decades, as well as their responses to the corresponding ethical dilemmas. Therefore, our study targets young doctors (under the age of 35) to investigate their attitudes and reactions to the above ethical dilemmas and the reasons behind their responses. This study is the first large-scale study of doctors' attitudes and reactions to the "doctor-family-patient" model of the physician-patient relationship in China in the past decade, and the first to be conducted among young doctors under the age of 35. China has multiple, distinct medical education pathways that can last from 5 to 11 years [29]. Students normally start the 5-year Bachelor of Medicine (MB) degree at age 18. After the MB program, they are qualified to take the Medical Licensing Examination and to pursue a 3-year Master of Medicine (MM) degree. In fact, most medical students are able to obtain their medical license at the age of 23, before starting the MM program. The MM program usually includes 3 years of general training, which is required before physicians can be solely responsible for patients. In our study, all participants had completed their MB program, and hence most of them had obtained their license. However, not all participants could independently manage patients, as some were still at the MM level. Even these participants, though, had played an assistant role in physician-patient communication under the supervision of senior physicians. Thus, all participants would already have experienced this model of the physician-patient relationship. --- Methods --- Study design This study was conducted from June 11, 2022, to September 20, 2022. 
Data were collected through an online survey using a snowball sampling method. The target population was doctors of all grades with clinical experience in various departments of 3A hospitals, the highest regular tier of hospital in the Chinese healthcare system. --- Questionnaires The questionnaire was developed by consulting the literature, including theoretical discussions of the "doctor-family-patient" model of the physician-patient relationship and some qualitative studies [5,14,19,30]. In addition, a couple of young physicians were interviewed as a pre-study; these interviews also contributed to the content of the questionnaire. The questionnaire consisted of 31 multiple-choice questions and took approximately 10 min to complete. The questions covered four major parts: the participants' basic information, the fulfillment of the obligation to fully inform, who will be informed, and ethical dilemmas in decision-making (Supplementary 1). Ten questions relate to the content of this article, and their results are included in this paper. The other questions will be presented and discussed in a subsequent article, The requirements of fully informing and the reaction to patient's decisions: a questionnaire study on Chinese doctors (in preparation). --- Ethical considerations Participants' identifiable personal information was withheld from the questionnaire results data. The first page of the questionnaire stated the purpose of the study, its use, and the contact information of the person in charge. Informed consent was obtained from all participants (Supplementary 2). The study was also approved by the Ethics Committee of Nankai University (NKUIRB2022095). --- Data analysis Data were imported into IBM SPSS version 25 for statistical analysis. Descriptive statistics were performed for each variable separately. 
Cross-tabulation and Pearson's chi-squared test were used to analyze the differences between types of patients for categorical variables, and a p-value < 0.05 was considered statistically significant. --- Results We obtained a total of 421 data sets, of which 368 met the age requirements for this study. The participants included 155 males (42.1%) and 213 females (57.9%). The minimum age was 21 years, the maximum age was 35 years, and the average age was 27.6 years. In terms of education level, 118 participants (32.1%) held a bachelor's degree, 217 participants (59%) held a master's degree, and 33 participants (8.9%) held a doctoral degree. Finally, their professional titles were distributed as follows: 171 participants (46.5%) were resident physicians, 126 participants (34.2%) were attending physicians, 70 participants (19%) were associate chief physicians, and 1 participant (0.3%) was a chief physician (Table 1). Our data (Table 2, Q1, Fig. 1) showed that only 20 doctors (5.40%) stated "informing the patient alone is sufficient" when it comes to informing adult patients of their serious conditions. 254 doctors (69.0%) stated that "unless the patient explicitly expresses a desire for their family members to remain uninformed, the family members will be informed", while 35 doctors (9.5%) stated that "even if the patient explicitly expresses a desire for their family members to remain uninformed, the family members will be informed". The remaining 59 doctors (16.1%) would "inform the family members first and let them inform the patient". When facing elderly patients (60 and over [31]) with decision-making capacity, the situation was significantly different (Table 2, Q1, Fig. 1; Table 3). Only 14 doctors (3.8%) stated that "informing the patient alone is sufficient". 
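The study ran this comparison in SPSS; as an illustrative sketch only, the Pearson chi-squared statistic for the Q1 responses (adult vs. elderly patients, options A-D, using the counts reported in Table 2) can be recomputed in pure Python. The function and variable names here are ours, not the paper's.

```python
# Illustrative recomputation of the Methods' cross-tabulation / Pearson
# chi-squared comparison, using the Q1 counts reported in the Results.

def chi2_statistic(table):
    """Pearson chi-squared statistic for a 2-D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: adult (non-elderly) vs. elderly patients; columns: options A-D
# ("patient alone", "family unless refused", "family even if refused",
#  "family first"), counts from Table 2, Q1.
observed = [
    [20, 254, 35, 59],    # adult patients (n = 368)
    [14, 146, 100, 108],  # elderly patients (n = 368)
]

chi2 = chi2_statistic(observed)
df = (len(observed) - 1) * (len(observed[0]) - 1)  # df = 3

# The critical value for alpha = 0.05 at df = 3 is 7.815, so the statistic
# far exceeds it, consistent with the significant group difference reported.
print(f"chi2 = {chi2:.2f}, df = {df}")
```

The same table passed to `scipy.stats.chi2_contingency` would additionally return the exact p-value; the hand-rolled version above is kept dependency-free for illustration.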
Of those surveyed, 146 doctors (39.7%) chose to "inform the patient's children unless the patient explicitly expresses a desire for their children to remain uninformed", while 100 doctors (27.2%) stated that "even if the patient explicitly expresses a desire for their family members to remain uninformed, the children will be informed". Of respondents, 108 doctors (29.3%) would "inform the patient's children first and let them inform the patient". In general, many respondents who chose A ("inform the patient alone is sufficient") or B ("inform the patient's children unless the patient explicitly expresses a desire for their children to remain uninformed") in the first question chose C ("even if the patient explicitly expresses a desire for their family members to remain uninformed, the children will be informed") or D ("inform the patient's children first and let them inform the patient") in the second question (Table 3). By contrast, most respondents who chose C and D in the first question made the same selection (80% and 76.27%) in the second question (Table 3). When asked about the primary reason for ensuring that family members are informed about the medical condition of the adult but not elderly patients (Table 2, Q2), 144 doctors (41.4%) believed that "major medical conditions can have an impact on the whole family, so families also have the right to know", 139 doctors (39.9%) chose "informing family members to let them to discuss with patients can help patients make better decisions", and 62 doctors (17.8%) chose "avoiding medical disputes and preventing family members from holding doctors accountable on the grounds of not being informed". The proportion of doctors who cited the reasons for ensuring that family members are informed about the medical condition of elderly patients is slightly different (Table 2, Q2). 
Of those doctors, 180 (50.8%) believed that "informing adult children and involving them in medical decision-making can help patients to make better decisions", 128 (36.2%) considered the overall impact on the patient's family, and 44 (12.4%) chose "avoiding medical disputes and preventing family members from holding doctors accountable on the grounds of not being informed". We further analyzed the data on doctors who chose "avoiding medical disputes and preventing family members from holding doctors accountable on the grounds of not being informed" (Table 4). We found that, among doctors who would inform family members even when an adult (but not elderly) patient explicitly stated that they did not want their family members informed, a larger proportion (31.4%) cited "avoiding medical disputes and preventing family members from holding doctors accountable on the grounds of not being informed" than among doctors who chose other options. However, there was no significant difference in this proportion when it came to elderly patients (13.0%). When family members asked doctors to conceal the patient's medical condition "for the best interests of patients", 270 doctors (73.4%) chose to "respect the views of the family and cooperate with them in concealing the condition from the patient", while 73 doctors (19.8%) explicitly refused the suggestion and advised the family that this violated professional ethics. In addition, 21 doctors (5.7%) tended to make situation-specific analyses, and 4 doctors (1.1%) would report to their superiors and follow their instructions (Table 2, Q3, Fig. 2). When faced with elderly patients who have decision-making capacity, the attitude of doctors did not significantly change. 
Of the respondents, 293 doctors (79.6%) chose to cooperate with adult children in concealing the patient's medical condition, 55 doctors (14.9%) explicitly refused, and 18 doctors (4.9%) based the decision on the situation, while 2 doctors (0.6%) followed the instructions of their superiors (Table 2, Q3, Fig. 2). --- Discussion Firstly, Chinese doctors pay extra attention to informing the patient's family, which may not be in the patient's best interests. In contrast to previous studies, our study reflects for the first time the balance that doctors strike between the patient's right to know and their family members' right to know. When faced with an adult but not elderly patient, most doctors (69%) would regard the family's right to know as equally important as the patient's right to know when informing of a severe medical condition. One-quarter of the respondents give extra weight to the family's right to know, with some believing that it takes priority over the patient's right to know (16%) and some even thinking that it is more important than the patient's right to privacy (9.5%). Only 5.4% of respondents believe that it is only necessary to ensure the patient's right to know. It can be seen that Chinese doctors place a high priority on keeping patients' families informed, but not exclusively for the best interests of the patient. According to the existing literature, patients, especially elderly patients, would not refuse to let their family members be involved in their medical decision-making [5,19,32]. Three reasons are most often mentioned: the family-oriented tradition in China, the patient's ability to understand the information, and the patient's economic situation. Generally, both patients and their family members believe that this model, in which the family has the patient's information and takes part in the patient's medical decision-making, fits the patient's best interests. However, how physicians perceive this model has been only slightly covered by previous studies. 
Our study shows that only about 40% of participants' primary reason for ensuring that family members are informed is the interests of the patient; about 41% cite the overall impact on the patient's family; and about 18% cite self-protection considerations. Although the interests of the patient and the patient's family are often consistent, factors other than the patient's interests should not, under the requirements of medical professional ethics, be the primary consideration for doctors. As mentioned above, influenced by the family-oriented culture, family members play an important role in medical activities in China, which creates a so-called "doctor-family-patient" model of the physician-patient relationship. However, this can be morally accepted and defended because it is considered the way to maximize the patient's interests in the Chinese cultural context. As doctors, the emphasis on the family's right to know should be for promoting the patient's best interests, rather than for weakening the consideration of the patient's interests. In addition, exemption considerations were particularly evident among respondents who felt that the family's right to know was more important than the patient's right to privacy. Our data suggest that the exclusion of liability has become an important reason for doctors to value family members' right to information, which is contrary to their professional ethics. Although the subjective reasons for this phenomenon lie in the lack of professional ethics of physicians who have failed to consistently prioritize the patient's interests, objective factors such as tense doctor-patient relationships cannot be ignored. Over the past decade, there have been unrelenting incidents of violence against doctors [33,34]. The Lancet published two editorials, in 2010 and 2020, calling for concern about the personal safety of Chinese doctors [35,36]. 
Tensions in the doctor-patient relationship have led to a decline in trust between doctors and patients, and doctors have had to consider how to avoid getting themselves into potential disputes when facing patients and their families. This is particularly evident in extreme cases, such as the case in which doctors, unable to obtain the husband's consent, delayed performing a cesarean section on a woman who then died in labor [37]. In that case, the pregnant woman was advised by her physician to have a C-section immediately. However, her husband insisted on a natural birth and refused to sign the informed consent form. The physician did not perform the C-section, as the family's informed consent was necessary for the operation, and the pregnant woman died in labor. Fear of complaints from the patient's husband was likely the main reason the physician delayed the operation. Secondly, our study reveals for the first time that Chinese doctors treat adult (but not elderly) patients and elderly patients differently when it comes to informing family members. Our data showed that, compared to adult but not elderly patients, Chinese physicians tend to place greater emphasis on the family's right to know (27.2% vs. 9.5%) with elderly patients, even if they have the capacity for decision-making, and consider it more beneficial to the patient (50.8% vs. 39.9%). The reasons for this difference in treatment mainly stem from two factors: the emphasis on filial obligation in traditional Chinese culture and the consideration of the education level of the elderly. First, influenced by traditional Confucian culture, Chinese society places great importance on the obligation of children to support and care for their parents [14]. Adult children accompanying their parents to medical appointments is often seen as a sign of a harmonious parent-child relationship and filial piety. 
Elderly patients are also often pleased to see their children show their concern by being informed and involved in decision-making. In such cases, doctors are more likely to be convinced that the children represent the best interests of their parents and are therefore more inclined to ensure that the children are informed in the interest of their elderly patients. In addition, the elderly population in China has a lower level of education. According to the 7th National Population Census of China in 2020, only 13.9% of the population aged 60 and above had a high school education or above [38]. The level of education significantly limits elderly patients' understanding of their medical condition, especially when medical terms are involved in the physician's explanation [19]. In such cases, ensuring that family members are informed is more beneficial for the patient's decision-making and subsequent treatment. Thirdly, when family members request that doctors withhold information from patients "in the best interest of the patient", the majority (over 70%) of participants choose to comply with the request, although this may cause them distress. The practice of benevolently withholding information from patients is not uncommon in medical history [39]. However, since the 1950s, with the shift in the doctor-patient relationship and the emphasis on patient autonomy, this practice has been criticized as paternalism and has gradually been replaced by informed consent [40,41]. In current medical practice, the professional ethics of doctors in many countries explicitly prohibit doctors from withholding information from patients [42,43]. In Western countries, it is not common for families to request that doctors withhold information from patients, and it is difficult to imagine doctors agreeing to comply with such requests. 
For example, Anne Lapine and her colleagues reported an interesting case in which the wife, daughter, and son-in-law of a Chinese patient requested that an American-born physician withhold information regarding a terminal diagnosis. The family felt that if the patient was told he had cancer, then his spirit would be broken. The Ethics Committee invited an American-born Chinese physician as a guest consultant. The consultant confirmed that it is common in Chinese culture for the family to request physicians to withhold a terminal diagnosis to protect the patient's feelings [44]. However, even in China, complying with family members' requests to withhold information from patients may cause distress to doctors. In our pre-study, the interviewees mentioned this distress, and it is also shown in the existing literature [45][46][47]. The family's requests put the physician in a dilemma. On the one hand, withholding information from patients not only violates professional ethics but also violates relevant laws. Current professional ethics and laws all require doctors to fulfill their obligation to inform patients fully and respect patients' autonomy [26,27]. Although in some special cases doctors are allowed to inform family members rather than patients themselves, no law or ethical rule explicitly states that doctors can withhold information from patients based on family members' requests [26,27]. On the other hand, although most doctors acquiesce to the power of family members to make decisions for patients and indicate their willingness to comply with family members' requests to withhold information from patients, it can be imagined that some of these doctors do so for the sake of exemption rather than the best interests of the patient. For doctors who consider the best interests of the patient, deciding whether to comply with family members' requests to withhold information from patients is not an easy task. 
It requires the doctor to consider the patient's condition, the patient's mental capacity, the patient's rapport with the family, and the impact of withholding information on subsequent treatment. This is probably why some respondents stated that it depends on the specific situation. --- Limitations Our study has some limitations and deficiencies. First, our participants were all recruited from 3A hospitals, which have a relatively high concentration of medical resources and a relatively high volume of patients. However, since 3A hospitals make up only a small proportion of hospitals in China, their situation cannot show the whole picture of physician-patient relationship models. Future studies should also focus on hospitals or medical institutions of lower levels. Differences in management systems, patient volume, and types of doctor-patient relationships between hospitals of different levels may lead to variations in results. Second, although snowball sampling was the most appropriate and efficient way to collect data under the COVID-19 prevention and control requirements in China last year, it has unavoidable disadvantages as a non-probability sampling method, such as sample selection bias, limited generalizability, and difficulty in estimating sampling error. Third, we made our sample as representative of young doctors as possible on the basis of publicly available data. However, our sample size was not large enough, and factors such as age, department, and region were not completely balanced, which may have resulted in some bias. --- Conclusions and practical implications Our study is the first large-scale study of young doctors' (under the age of 35) attitudes and reactions to the "doctor-family-patient" model of the physician-patient relationship in China. 
It shows how doctors weigh the patient's right to know against the family members' right to know, as well as the patient's right to privacy against the family's requests for information. It also shows doctors' responses to requests from family members to withhold information from patients and the reasons behind such responses. Our study reflects the potential moral distress caused by this model. In terms of practical implications, it is necessary to increase clinical ethics training to promote doctors' professionalism, considering that some doctors are not motivated by the best interests of the patient. In addition, the ethical dilemmas faced by doctors cannot be ignored. It could perhaps be helpful to have different rules of informed consent for different types of populations, i.e., adult and elderly patients. Furthermore, communication techniques such as the SPIKES protocol [48] and CST [49] may be useful aids for doctors when communicating with patients and their families. --- Data availability The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. --- Supplementary Information The online version contains supplementary material available at https://doi.org/10.1186/s12910-023-00999-6. --- Supplementary Material 1 --- Supplementary Material 2 --- Author contributions Hanhui Xu and Mengci Yuan conceived and designed the project together. Hanhui Xu wrote the paper and reviewed the references. Mengci Yuan acquired and analysed the data. --- Declarations Ethics approval and consent to participate All methods were carried out in accordance with relevant guidelines and regulations and all experimental protocols were approved by the Ethics Committee of Nankai University (NKUIRB2022095). A documented consent form was provided on the questionnaire's first page and informed consent was obtained from all participants in our study. --- Consent for publication Not applicable. 
--- Competing interests The authors declare no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | Background Based on the principle of informed consent, doctors are required to fully inform patients and respect their medical decisions. In China, however, family members usually play a special role in the patient's informed consent, which creates a unique "doctor-family-patient" model of the physician-patient relationship. Our study targets young doctors to investigate the ethical dilemmas they may encounter in such a model, as well as their attitudes to the family roles in informed consent. Methods A questionnaire was developed including general demographic characteristics, the fulfillment of the obligation to fully inform, who will be informed, and the ethical dilemmas in decision-making. We recruited a total of 421 doctors to complete this questionnaire, of which 368 met the age requirements for this study. Cross tabulation and Pearson's chi-squared test were used to analyze the differences between types of patients for categorical variables, and a p-value < 0.05 was considered statistically significant.Our data shows that only 20 doctors (5.40%) stated "informing the patient alone is sufficient" when it comes to informing patients of their serious conditions. The rest of the participants would ensure that the family was informed. When facing elderly patients with decision-making capacity, the data was statistically different (3.8%; P < 0.001) The primary reason for ensuring that family members be informed differs among the participants. In addition, when family members asked doctors to conceal the patient's medical condition for the best interests of patients, 270 doctors (73.4%) would agree and cooperate with the family. 
A similar proportion (79.6%) would do so when it comes to elderly patients. Conclusions: (1) Chinese doctors pay extra attention to informing the patient's family, which may not be in the patient's best interests. (2) Chinese doctors treat adult (but not elderly) patients and elderly patients differently when it comes to informing family members. (3) When family members request that doctors withhold information from patients "in the best interest of the patient", the majority choose to comply with the request, although this may cause them distress. |
INTRODUCTION Randomized Controlled Trials (RCTs) have long been heralded as the "gold standard" for measuring the effectiveness of an intervention, due to their ability to reduce bias and show cause-effect relationships. In this article we will briefly summarize the evidence base for the effectiveness of complex mental health interventions in prison settings, while also identifying the recurrent issues. We will then focus predominantly on our experience of conducting prison-based RCTs and ask the question: are prison RCTs of complex interventions a Sisyphean task? To date, there have been a surprising number of systematic reviews of interventions for prisoners/forensic populations. These reviews have assessed the evidence base in a number of different ways, for example discrete sub-populations [e.g., adolescent offenders (1,8), female offenders (2,6,12)]; offense types [e.g., violent offenses (4,19)]; specific interventions [e.g., psychotherapy (3,9,11)]; or the impact on specific outcomes [e.g., health outcomes, violent behavior or reoffending (10,12,14,21)], with many having a broad inclusion of primary study designs (9,12,13). Of relevance here are two reviews (17,21). The first reviewed RCTs of a range of psychological therapies for prisoners with mental health problems (17). Across 37 identified studies, the authors found a medium effect size for psychological therapies (0.50, 95% confidence interval [0.34, 0.66]); however, effects did not appear to be sustained over time. Where trials had used a fidelity measure, these were associated with lower effect sizes. The authors also undertook a qualitative analysis of the difficulties of conducting RCTs in prisons. The issues included: • Post-treatment follow-up - high rates of release, rapid turnover of prisoners, and short duration of stay leading to difficulties with initial recruitment and loss to follow-up. 
• Institutional constraints - constraints on the scheduling of sessions, "lock-downs," and high attrition rates partly due to scheduling changes and inmate infractions. • Small sample sizes. • Contamination of treatment and control conditions due to the closed communal setting of the prison. • Not being able to blind the participants to intervention/treatment as usual; and • Reliance on self-report measures. The second review examined RCTs of psychological interventions delivered during incarceration but focused solely on recidivism as the outcome (21). Across 29 RCTs, psychological interventions were associated with reduced reoffending (OR 0.72, 95% CI 0.56-0.92), but after excluding smaller studies there was no significant reduction in recidivism (OR 0.87, 95% CI 0.68-1.11). The number of studies was not large, which the authors suggested supports the view that there are significant challenges in doing high-quality research in prisons. Also, many of the studies had a risk of bias, mainly around randomization, intervention deviations and difficulties associated with masking staff and participants to the assigned intervention. In this context we will now reflect on our own experiences of conducting two prison-based RCTs: Critical Time Intervention (22,23) and Engager (24,25). Both studies started with a pilot trial followed by a full RCT. Both interventions were through-the-gate interventions, with baseline assessments completed in prison and then follow-up after release from prison. The two studies are described below and in Table 1. --- CRITICAL TIME INTERVENTION (CTI) CTI is an intensive form of mental health case management, operational at times of transition between prison and community and designed for people with severe and enduring mental illness. CTI case managers, routinely mental health nurses, psychologists, or social workers, provided direct care where and when needed, for a limited time period.
They began their involvement while the individual was still in prison. For sentenced prisoners, this started 4 weeks before release. For remand prisoners, or those with unpredictable dates of release, the intervention started as soon as the person was known to the prison mental health team. The holistic intervention involves working with the individual and their families (where possible), as well as active liaison and joint working with relevant prison and community services. Five key areas are prioritized: (1) psychiatric treatment and medication management, (2) money management, (3) substance abuse treatment, (4) housing crisis management and (5) life-skills training. CTI is not prescriptive; it responds to the needs of each individual and thus looks slightly different for each person, but still within the five-priority-area framework. The intervention includes four phases. Phase 1 is conducted while the person is in prison and requires the development of a tailor-made discharge package based on a comprehensive assessment of the individual's needs. Phases 2 and 3 focus on intensive support post-release and then handing over primary responsibility to community services, and phase 4 fully transitions care to community services to provide long-term support. The aim is that phases 2-4 are completed within 6 weeks of release from prison. We conducted a multicentre, parallel-group randomized controlled trial across eight English prisons (originally planned for three sites, but additional sites had to be added, as discussed below), with follow-up at 6 weeks and 6 and 12 months post-release. A sample of 150 male prisoners was included, with eligibility criteria of being: convicted or remanded; cared for by prison mental health teams; diagnosed with severe mental illness; and with a discharge date within 6 months of the point of recruitment. Of these 150, 72 were randomized to the intervention and 78 were randomized to the usual release planning provided by the prison.
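As a hedged aside, the kind of between-arm comparison such a trial reports (a difference in engagement proportions with a 95% confidence interval) can be sketched in Python. This is not the trial's actual analysis code, and the counts below are hypothetical; a real analysis would typically use adjusted models rather than this unadjusted normal-approximation (Wald) sketch.

```python
from math import sqrt

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference between two proportions.

    x1/n1: events/total in the intervention arm
    x2/n2: events/total in the control arm
    (Illustrative only; assumes large-sample normal approximation.)
    """
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # Standard error of the difference between independent proportions
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts, loosely shaped like arms of 72 and 78 participants
diff, (lo, hi) = prop_diff_ci(38, 72, 21, 78)
print(f"difference = {diff:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval for the difference that excludes zero corresponds to a statistically significant between-arm difference at the 5% level.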
Engagement with community mental health teams at 6 weeks was 53% for the intervention group compared with 27% for the control group [95% confidence interval (CI) 0.13 to 0.78; p = 0.012]. At 6 months' follow-up, intervention participants showed continued engagement with teams compared with control participants (95% CI 0.12 to 0.89; p = 0.029); there were no significant differences at 12 months (23). --- ENGAGER The Engager intervention is designed to engage individuals with common mental health problems in the development of a pathway of care for release and resettlement in the community. It is a manualised, person-centered intervention aiming to address mental health needs as well as to support wider issues including accommodation, education, social relationships, and money management. The intervention is delivered in prison between 4 and 16 weeks pre-release and for up to 20 weeks post-release. Experienced support workers and a supervisor with experience of psychological therapy deliver Engager. The practitioner and participant develop a shared understanding of the participant's needs and goals, recognizing the links between emotion, thinking, behavior and social outcomes. A plan is developed, based on agreed goals, and including liaison with relevant agencies and the participant's social networks. A mentalisation-informed approach underpins all elements of the intervention. Use of existing practitioner skills is also key to intervention delivery. We conducted a two-group parallel randomized superiority trial in three prisons. Men serving a prison sentence of 2 years or less were individually allocated 1:1 to either the intervention (Engager plus usual care) or the control (usual care alone) group. The primary outcome was the Clinical Outcomes in Routine Evaluation Outcome Measure (CORE-OM) (26), six months after release. A total of 280 men were randomized (25). --- OUR PERSPECTIVE - WHAT WORKS?
Intervention allocation in CTI and Engager was at the individual level, and so our perspective here focuses on this type of design. However, there are several alternative designs such as cluster, preference and benchmarking controlled trials [we refer the reader to (27,28)]. Overall, we agree with the reviews (17,21) that prison RCTs are possible. In both studies participant engagement was positive, with high levels of consent and enthusiasm for the interventions, as well as for being involved in the research process. However, the unique prison context can make standard trial procedures and standard assessments of study quality more difficult to achieve. --- Pilot Trials In both our studies we undertook pilot trials. For CTI (22) the focus of the pilot was very much about testing whether the intervention could produce an outcome, while the Engager pilot trial (24) explicitly examined trial design and recruitment, building on earlier feasibility work (29), but importantly also had an embedded realist-informed formative process evaluation, which focused on how the intervention was working (30). Both pilot trials provided invaluable knowledge and supported the development of relationships with the recruitment sites. On reflection, had the CTI pilot (22) formally tested recruitment and eligibility rates, then perhaps we could have better predicted the slow recruitment rates faced and avoided the need to add so many other sites. Slow recruitment was due to a complex interplay of lengthy delays in approval and other operational delays, such as a change in healthcare providers, which meant that men became ineligible to take part due to not being released within the study period. The difference between these two pilot studies also reflects the fast pace of change we have seen in our understanding of intervention development and testing, and the improved guidance on feasibility and pilot trials (31).
The UK Medical Research Council (MRC) published a framework on developing and evaluating complex interventions in 2000 (32); it was revised in 2006 (33) and has very recently been updated again in 2021 (34) - clear evidence of this fast pace. In addition, our theoretical understanding of acceptability, often a key outcome in feasibility and pilot trials, has advanced with the work of Sekhon (35); using this framework may have added significant depth to our understanding of the anticipated and experienced acceptability from the perspective of the intervention deliverers and recipients. --- Blinding Single-, double- and triple-blinding are commonly used in RCTs. A single-blind study blinds the participant to which trial arm they have been assigned. A double-blind study blinds both the participants and the researchers to allocation. Triple-blinding involves blinding the participants, researchers, and statistician. The review above (17) highlighted that blinding was problematic. Blinding participants where the intervention is a psychological therapy and/or person-facing is difficult, if not impossible. In CTI (23), we were able to blind both the researcher and the statistician. We were able to blind the researcher to allocation as there was no face-to-face contact with the participants after baseline data collection, which took place before participants were randomized. In Engager, we were only able to blind the statistician. In our Engager pilot trial (24) we tested and reported on our attempts to blind the researchers, but the researchers were unblinded very quickly. Due to the frequent contact the researchers had with participants, participants were keen to share their experiences with the researchers, and/or the researchers saw the participants with the Engager practitioners due to the closed confines of the prison.
We considered a range of workable solutions to maintain blinding, such as using a paper-based self-complete outcome measure for participants, but decided against this due to literacy problems and the likely increase in incomplete data. In the main Engager trial (25) the researchers knew trial arm allocation; this was a positive in that it allowed for the continued building of rapport between the researcher and participant to facilitate follow-up rates, but it may have diluted the relationship-building effects of the intervention. Both studies could have considered adaptations to their design to allow recruitment to each arm to be staggered, but this lengthens the overall study time and cost. --- Outcome Measures How we measure outcomes in forensic populations is notoriously complicated, and this is the reason why there is little agreement about which outcomes to use (36). Forensic settings and forensic populations are diverse. For example, settings can include police custody, prison, probation services in the community and secure forensic hospitals. Even within the same setting there is diversity; for example, secure forensic hospitals have different security levels and different provider organizations. Services may also be viewed as having diverse goals, including clinical, legal and public safety. In addition, forensic populations may have multiple and varied problems, for example personality disorder, mental illness, learning disability, substance abuse and offending behavior, with many co-occurring, leading to many combinations of potentially relevant outcomes. To confound this further, there are also different types of outcomes. Objective outcome measures include outcomes such as rehospitalisation, reoffending and death, and are usually obtained from administrative datasets. In our CTI study (23) our primary outcome was based on information collected from participants' electronic health records.
While on the surface this would seem to avoid the limitations associated with self-report data (e.g., social desirability, honesty, introspective ability, the latent nature of the measures, missing data), it was not without shortcomings. The data was only as good as the quality of the written records, and at times this was poor, something highlighted by other researchers (37). We also planned to supplement this with information from UK health registries; however, due to accessibility issues, likely data quality problems and an inability to join data from different registries, we were unable to progress this. A recent systematic review of 160 RCTs accessing routinely collected health data found that only a very small proportion were UK RCTs (about 3%) and highlighted issues with access, quality and a lack of joined-up thinking between the registries and the regulatory authorities (38). In both CTI and Engager we had planned to obtain offending data, but faced similar issues to the health data in terms of protracted approval processes. Over recent years there has been an explosion in the number of subjective outcome measures available. There have been a number of reviews (36,39,40) of outcome measures in forensic settings, identifying a large number of questionnaire-based instruments that focus mainly on risk and clinical symptoms while neglecting quality of life, functional outcomes and patient involvement. In the most recent review, a total of 435 measures were identified. Of the 10 most frequently used, half of the instruments were primarily focused on risk. Only one instrument, the Camberwell Assessment of Need: Forensic Version (CANFOR) (41), had adequate evidence for its development and content validity. In our Engager trial (25), outcome data was primarily subjective, and significant work went into deciding which outcomes to use, with the aim of selecting a set of outcome measures that captured the most important areas of the Engager intervention.
We adopted a four-stage approach involving: a single-round Delphi survey to identify the most important outcome domains; a focused review of the literature; testing of these measures in the target population to assess acceptability and the psychometric viability of the measures; and a consensus panel meeting to select the primary outcome measure for the trial and key secondary outcome measures. In addition, we actively sought the input of our Peer Research Group (42) throughout this process. After the four stages, the CORE-OM (26) and CANFOR (41) both received the same number of votes to be the primary outcome measure. We opted for the CORE-OM (26) as the primary outcome measure: it had marginally superior psychometric properties and could be administered in a highly scripted fashion that would reduce researcher bias, whereas some CANFOR items were of little relevance to a prison population and there were issues with the CANFOR's ability to demonstrate change over time (43,44). There is also some criticism of the reliability of the scoring system for the CANFOR. We had considered using outcomes based on practitioner records; however, it quickly became clear that these were not recorded in a sufficiently consistent way to merit inclusion. They were not undertaken at set time points, were often subjective in terms of focus, and suffered from missing data. Ultimately, even after going through this process of selecting the primary outcome, we found problems with the CORE-OM. The before and after changes for individuals did not match the journey of rehabilitation and recovery detailed in the in-depth process evaluation (45), where we found that the intervention was more effective when practitioners developed an in-depth understanding of the participant. It may therefore not be sensitive enough to detect the small, unpredictable steps in recovery resulting from the intervention for individuals with lifelong experiences of adversity.
It also highlights the problems of reducing very complex interventions down to just one outcome; it may be that we simply do not have adequate outcome measures to test such complex interventions. We tried to use the PSYCHLOPS (46) questionnaire, an idiographic measure designed to detect changes in person-specific problems, but the prison environment rendered it unworkable because, once released, individuals' problems were almost entirely different. --- Intervention Fidelity One of the reviews highlighted above showed that studies including a measure of fidelity were associated with lower effect sizes (17). Intervention fidelity, like outcomes, is a complex area with a lack of agreement about the appropriate indicators of fidelity and how these should be measured (47,48). It is argued that any assessment of fidelity should look at the intervention designer, provider and recipient levels (49). However, it is likely that the delivery of an intervention as complex, person-centered and flexible to the individual as CTI or Engager will be harder to evaluate than simpler "one dose fits all" designs. In CTI, fidelity was assessed using an adapted version of the fidelity scale used in the Critical Time Intervention - Task Shifting study (50) at eight time points over the course of the trial. However, a more reliable and detailed way to assess fidelity would have been for the CTI manager to complete a checklist per participant against the core CTI principles. This would have allowed more detailed analysis of what each participant received, mapped against their needs. There was variation in fidelity to the intervention across the different CTI managers.
In Engager, fidelity was assessed by creating an intervention delivery timeline depicting practitioner and supervisor start and end dates, instances of training sessions, research team-Engager supervisor supervision, and periods of prison "lockdown" where practitioners were unable to access the prison sites to deliver the intervention. Practitioners and supervisors also kept records of contacts in the form of daily activity logs (documenting time spent with participants, or activities related to participants, e.g., arranging appointments and liaison with other services) and recorded session case notes (documenting intervention delivered and received). We recognized, however, that this only measures superficial aspects of fidelity (reach and dose) and not the multiple mechanisms designed to be at play in such a complex intervention (30). There is, however, little published regarding fidelity in complex behavioral interventions, and more fidelity results need to be published (51). --- Process Evaluation The biggest difference between CTI and Engager was the complexity and depth of the qualitative components. In CTI, we undertook a nested qualitative study. At that time even this was relatively unheard of in RCTs (52). Jump forward five years and we were undertaking one of the most in-depth process evaluations for complex health interventions (30,45). Even after the publication of the MRC guidance in 2000 and 2006, process evaluations have often been small qualitative add-ons to trials and of little importance to the main trial findings, although more recent guidance emphasizes the importance of detailed analysis (53).
The parallel mixed-method process evaluation in Engager not only provided evidence of breadth and depth, from multiple perspectives, about what was delivered to participants, but also allowed us to focus in on how team dynamics and underlying beliefs and values affected implementation, and to propose what might be done to support practitioners further to optimize delivery. Documenting suboptimal implementation was important for trial result interpretation and the development of future practice. The use of realist-informed methods allowed us to interrogate the intervention mechanisms by assessing whether delivering the specified intervention components produced the hypothesized outcomes. This gave us insight into how the intervention can have a sustained effect when delivered well. We showed how consistent delivery across time could lead to several mechanisms being activated, often repeatedly, to achieve incremental but sustainable change (25,45). It also allowed us to examine more deeply what "meaningful change" meant for the intervention participants in ways that standard outcome measures cannot assess. --- DISCUSSION --- Is conducting RCTs of complex interventions in prisons a Sisyphean task? No, far from it. In our experience they can be conducted, are a key tool in developing evidence-informed practice and, for some interventions, provide the best approach to test effectiveness. But there is also a need for flexibility so that we are not unduly limited by a specific set of perspectives. For us there are some key must-dos. Pilot and/or feasibility trials help minimize risks to the main trial, e.g., by testing recruitment and follow-up rates and by developing effective relationships with the prisons so that they see the value of research. A robust process evaluation is key for understanding what was delivered but, more importantly, how it was delivered and how it produces change; how interventions work has often received little attention in prison research.
Areas where we need to improve include our understanding of how best to assess fidelity and our choice of outcome measures: should these be user-led, standardized or bespoke, or should we use a combination? We also need to work to improve access to routinely collected data; other European countries, such as the Nordic countries, are much more advanced here. We also need to work with the prison system to ensure they see the value in supporting independent, external research, to reduce protracted approvals. We must not get overly fixated on some traditional aspects of rigor. Alongside flexible, adaptive RCTs we also propose the development of rigorous methods for evaluating the impact of interventions in non-randomized studies, e.g., pre-post implementation studies. Before-after health or quality-of-life questionnaire data can be examined alongside processes of care, economic data and in-depth qualitative process evaluation analyses. Where novel interventions are adopted as treatment as usual, there is a place for robust service evaluations of routinely collected data, where research ethics approval would not be required. It was Fyodor Dostoyevsky who said: "The degree of civilization in a society is revealed by entering its prisons", and therefore we continue to undertake prison research, despite some of its challenges. We strive to reduce health inequalities and drive up quality healthcare for a group of people who are significantly disadvantaged and vulnerable (54-57), so that we can live in a more civilized society. --- DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author. --- AUTHOR CONTRIBUTIONS CL drafted the first version of the article. SL, JS, CH, SR-B, CQ, RB, and JS revised the article. All authors accepted the final version of the article. All authors contributed to the article and approved the submitted version.
--- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | Randomized Controlled Trials (RCTs) are the "gold standard" for measuring the effectiveness of an intervention. However, they have their limitations and are especially complex in prison settings. Several systematic reviews have highlighted some of the issues, including institutional constraints (e.g., "lock-downs"), follow-up, contamination of allocation conditions and a reliance on self-report measures. In this article, we reflect on our experiences and describe two RCTs. People in prison are a significantly disadvantaged and vulnerable group; ensuring equitable and effective interventions is key to reducing inequality and promoting positive outcomes. We ask: are RCTs of complex interventions in prisons a Sisyphean task? We certainly don't think so, but we propose that current accepted practice and research designs may be limiting our understanding and ability to test complex interventions in the real-world context of prisons. RCTs will always have their place, but designs need to be flexible and adaptive, with the development of other rigorous methods for evaluating the impact of interventions, e.g., non-randomized studies, including pre-post implementation studies. With robust research we can deliver quality evidence-based healthcare in prisons - after all, the degree of civilization in a society is revealed by entering its prisons. |
Introduction Alcohol consumption remains a major public health problem, contributing to myriad preventable conditions and increasing the risk of violence. Management of the adverse outcomes of alcohol is estimated to cost Australia $66.8 billion a year [1]. As such, Australia's National Alcohol Strategy 2019-2028 aims for a 10% reduction in harmful population-level alcohol consumption [2]. However, reduction messages need to account for the fact that alcohol consumption at risky levels is socially acceptable, particularly for heavy-drinking sub-populations, including women in midlife aged 45-64 years (herein defined as 'midlife women') [3,4], despite current Australian Guidelines, which recommend drinking no more than two standard drinks on any day to reduce the lifetime risk of harm from alcohol-related disease or injury, and no more than four standard drinks per occasion to reduce short-term acute harms. In 2019, the heaviest-drinking 10% of the Australian population accounted for 54.1% of all alcohol consumed [5]. Central to this paper, midlife women in particular are at a higher risk of lifetime harm than other sub-populations of Australians, and while men consume more alcohol than women (at a population level), alcohol causes more physiological harm to women because of differences in metabolism compared to men [6]. Patterns of alcohol consumption at this age (45-64 years) have been trending up (while drinking among other sub-population groups, such as young Australians, is trending down). Midlife women are thus a target group identified within the National Alcohol Strategy as warranting urgent intervention.
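The two guideline thresholds cited above (no more than two standard drinks on any day for lifetime risk, and no more than four per occasion for acute risk) can be expressed as a small sketch. The function name and structure are illustrative assumptions, not part of any official tool.

```python
def exceeds_guidelines(avg_drinks_per_day, max_per_occasion,
                       daily_limit=2, occasion_limit=4):
    """Check intake (in Australian standard drinks, 10 g ethanol each)
    against the two guideline thresholds cited in the text.
    Illustrative only; not a clinical screening instrument."""
    return {
        # lifetime risk: more than two standard drinks on any day
        "lifetime_risk": avg_drinks_per_day > daily_limit,
        # acute risk: more than four standard drinks per occasion
        "acute_risk": max_per_occasion > occasion_limit,
    }

# e.g. three drinks most days, six on a typical occasion
print(exceeds_guidelines(3, 6))
```

A sketch like this only restates the thresholds; the paper's point is precisely that drinkers exceeding them often do not perceive their consumption as risky.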
In the 2020s, women are drinking more alcohol than previous generations of women in this stage of life, and more than any other age group currently. The reasons for this lie in research indicating that consumption provides some women with a form of stress relief [7] and self-care [8][9][10] that is socially acceptable [11,12] and can function as a tool to promote wellness, sometimes within a limited range of resources; women's options diminish as they experience more social disadvantage [13]. Australian women's reasons for alcohol consumption are also differentiated on the basis of social class [13][14][15]. Of particular relevance to this study is that affluent women have more agentic 'relationships' with alcohol, whereas women living with less privilege tend to have less control over alcohol-related decisions. This prompts a question about how differences in social class translate into women's capabilities for alcohol reduction: strategies to reduce consumption might be less readily accessible to women from lower social classes than to women with more resources. Urgent action is required to identify socially acceptable alcohol reduction options for heavy-drinking midlife Australian women. The emerging evidence of widespread curiosity among middle-aged women about sobriety and alcohol reduction highlights a messaging tactic that may provide a new public health campaign strategy for harm reduction. Herein we engage with the notion of sober curiosity; that is, for some, a socially acceptable movement challenging the idea that specific social contexts require alcohol. While the reasons for midlife women's alcohol consumption are well explored, there is a gap in knowledge about options, like sober curiosity, that might enable sustained reductions in consumption among this heavy-drinking sub-population.
Further, given the complex structural factors that may make it difficult (and perhaps even impossible) for some midlife women to contemplate reducing alcohol consumption, it is important to explore how these movements and 'options' may vary with social class. Nonetheless, exploring the feasibility of reducing midlife women's alcohol consumption while the concept of 'sober curiosity' is in the public spotlight offers new opportunities for public health, in contrast to typical approaches that use instruction on limiting alcohol intake as a guiding principle. The study reported here engages with a paradigm shift in which the normative social environment supports moderate or no intake. In this paper we present our findings and explore factors that impact women's preparedness to reduce drinking, including what women anticipate gaining or losing from consuming less alcohol or not drinking at all, and important considerations in designing messages that reinforce sober curiosity. --- The 'Sober Curious' Movement and Reducing Alcohol Consumption The sober curious movement has developed progressively over the past decade and a half but has recently become highly prominent among social media influencers (mostly women and young people) who pitch not drinking, or drinking in moderation (i.e., framed as drinking 'mindfully'), as pleasurable and beneficial. Part of the 'curiosity' of the movement entails exploring the idea that social contexts generally associated with alcohol use can be enjoyable without it, thus challenging social norms that position drinking as the 'default' in many social settings. Sober curiosity differs from complete abstinence ('quitting alcohol'); the latter is supported by popular and well-known organised periods of non-drinking such as 'Dry July' or 'FebFast', which are usually short-term (lasting one month) and validated through a philanthropic pursuit such as raising money for a cause.
Sober curiosity is geared toward moderating consumption and decreasing risky drinking practices in a manner that is sustainable over the long term, encouraging an ongoing 'questioning' of drinking alcohol and a decision to reflect on reasons for drinking relative to alcohol-related health risks. Whilst a period of sober curiosity may ultimately lead to a decision to stop drinking completely, the emphasis is primarily on reflection, and subsequent change, rather than necessarily on complete abstinence. The movement is a shift away from binary conceptualisations of 'normal' versus 'problem' drinking that might associate particular drinking practices with addiction and advocate full abstinence only (as evidenced through 12-step programs such as Alcoholics Anonymous) [16]. Various other approaches to reducing alcohol consumption are possible, including restrictions on availability, for example through price control measures. However, with the exception of legislation policing driving under the influence of alcohol (which has been highly efficacious in reducing some alcohol-related harms), restrictions are often not implemented or enforced for political reasons and have limited feasibility [17]. Education measures such as public information campaigns and warning labels also vary in effectiveness [18]. Persuading behaviour change requires that the individual recognises their alcohol consumption as a problem. However, the vast majority (87%) of Australian drinkers consider themselves 'responsible drinkers', even though 68% of Australian drinkers consume 11 or more standard drinks on a 'typical occasion' [19]. In this way, the majority of people who are drinking over the recommended limits do not regard their consumption as risky or problematic, which leads to alcohol reduction interventions being resisted or even going unnoticed [20].
Efforts to reduce risks from alcohol intake often compete with the seemingly valuable aspects of alcohol consumption, a value base upon which industry capitalises [21]. The sober curious movement, and thus sober curiosity, does not obscure or ignore women's reasons for consuming alcohol; rather, it encourages reflection on those reasons. In our previous work we have explored how alcohol functions as a resource in women's lives and have identified that these 'uses' of alcohol 'compete with' public health risk messaging [14]. Research on sober curiosity is relatively new. Promoting the idea of reducing alcohol consumption through more 'mindful' drinking is accompanied by an expanding market of alcohol-free beverages, 'dry' drinking venues, and licensed bars offering alcohol-free options, and the increased visibility of these through their endorsement at popular leisure events (e.g., the arts, sport). Sober curiosity not only represents a shift in Australia's 'culture of intoxication' [22], but is also being appropriated by the alcohol industry. In Australia between 2016 and 2019 the proportion of ex-drinkers increased from 7.6% to 8.9% [23]. Sales of no- or 'zero'-alcohol products (less than 0.5% alcohol by volume) increased by 83% in the 12 months following Australia's initial COVID-19 lockdown periods in 2020 [24]. Since then, there has been a substantial increase in the supply of alcohol-free wines, beers and spirits (mainly gin), linked to the sober curious movement. Globally, sales of zero-alcohol products are surging and are predicted to increase by 24% in Australia by 2024 [24]. There is an abundance of research on alcohol reduction from the perspectives of alcoholism and dependency, on legislative or guideline-based approaches, and on alcohol refusal, but research on reduction toward moderate consumption as part of a global movement toward wellness, which was prominent during the COVID-19 pandemic, is a rapidly emerging area of interest [25,26].
Research on sober curiosity is relatively new. Ours is the first Australian study to our knowledge that reports empirical data on midlife women's sober curiosity. In Australia and other high-income countries, it is young adults who appear to be driving alcohol-free lifestyles and drinking declines [27][28][29]. For example, Australian research notes a decline in drinking amongst younger people that is attributed, at least in part, to increasing pressure to value and prioritise 'healthy' choices and lifestyles and to be 'successful' and 'productive' [30,31]. Similarly, in the UK, research suggests young people feel increasing pressure to 'hustle' and achieve success in an increasingly anxious and uncertain social context, leaving little time for the pursuit of pleasure or hedonism that drinking alcohol may offer [32]. There is also evidence that moderate drinking or not drinking is becoming increasingly socially acceptable for young people, moving beyond the stigma or judgement that might traditionally be associated with alcohol refusal [33,34]. However, women are over-represented in programs promoting temporary periods of abstinence, such as FebFast (a month of sobriety) or Hello Sunday Morning (an online program where people commit to a period of abstinence and communicate with others about their experiences) [35]. Similarly, platforms and spaces for expressions of sober curiosity-including social media accounts, contemporary 'quit lit' and new online communities-are mostly run by and used by women [36]. Participation in online sobriety communities fosters an inclusive space for like-minded individuals, offering emotional and social support to others attempting to reduce or cease their drinking [37,38]; this support may be salient for women who may feel under-represented in more 'traditional' recovery communities [39]. Studies evaluating participation in these programs have shown that being newly sober provides opportunities for doing identity work [40,41].
Such findings are echoed in other research with those new to sobriety; for example, research with recently sober women living in the UK suggests sobriety is an opportunity to reclaim control and agency over one's life and present a more 'authentic' self [42], particularly for midlife women [43]. Within this research, sober curiosity is framed as a flexible and positive 'lifestyle choice'; a decision to reduce drinking is promoted as beneficial for everyone and linked to wellness, authenticity, personal growth and improvements of the mind and body [34]. Such conceptualisations move away from the medicalised language of addiction and do not target a specific subset of the population or advocate complete, lifelong abstinence. This previous research points to the desirable aspects of the expanding sober curious movement for women. However, against a backdrop of 'healthism' [44], that is, the increasing moral imperative to take responsibility for one's own health [30,42], women's preparedness for sober curiosity-and ability to engage with notions of wellness more widely-continues to be overwhelmingly shaped by social class, and this is not always considered. This paper addresses this theoretical gap, contextualising women's reflections on their drinking practices against the groundswell of a burgeoning wellness industry [45]. --- Materials and Methods Interviews were conducted in February and March 2022 by BL with 27 Australian midlife women (aged 45-64) living in Adelaide, Melbourne and Sydney, and with representatives of 2 women's advocacy groups. BL was a 39-year-old woman with experience conducting qualitative interviews with midlife women on the topic of alcohol consumption, using techniques of 'empathic neutrality' and those akin to life histories, which are suggested to increase validity by privileging women's own subjective meanings [46].
--- Sampling Women who consumed alcohol but expressed interest in exploring reducing alcohol (having a 'sober curiosity') were recruited through a targeted Facebook advertisement that asked 'are you sober curious?', which ran for 2 weeks from its release in February 2022. This coincided with 'FebFast', and the advertisement ran while FebFast was being advertised online. Initially, 30 women were recruited, but one became unable to participate after contracting SARS-CoV-2, another withdrew due to a significant life event, and one was lost to follow-up after completing the social class survey, resulting in 27 participants. In addition to individual interviews, two interviews were conducted with representatives from women's advocacy groups to speak on behalf of women who are single mothers and women who live in poverty; this was because of limitations in access to women living with such experiences. To explore the notion of sober curiosity as it relates to class, we sampled for women with access to different levels and compositions of several forms of capital-economic, but also social and cultural resources, per Bourdieu's sociological model of class [47]. To measure women's social class positionings, we operationalised a novel sociological approach recently validated in the UK [48], in Australia [49] and in our previous study [13,14]. This approach extends beyond simple economic, employment and educational markers and has contemporary relevance to the nuances of social class divisions and consumer behaviour, extending to the social and cultural dimensions that shape life chances and alcohol-related outcomes. We have provided detail elsewhere about the survey tool we adapted from Sheppard and Biddle's 2015 survey of Australians' social class, and have shown its value for seeing social class in data on women's alcohol consumption behaviours [8].
The survey tool measures social class across three domains: economic capital was measured as income, property and assets; social capital was measured by social contacts and the occupational prestige of women's social networks; and cultural capital was measured by the level of women's participation in various cultural activities. Economic capital was measured by combining responses to the questions: what is your annual income before tax or anything else is taken out? (responses were indicated by income brackets provided); what would you say is the approximate value of the property owned or mortgaged by you; and roughly how much do you have in savings? (<$20,000; $20,000 to <$40,000; $40,000 to <$60,000; $60,000 to <$80,000; $80,000 to <$100,000; $100,000 to <$150,000; and $150,000 or more). Social capital was measured by totalling the number of a range of known occupations within the respondent's social contacts (i.e., yes = 1) and the average prestige of those occupations. Occupational prestige was assigned using the Australian Socioeconomic Index 2006-a validated index of occupational prestige-for the following occupations: secretary, nurse, teacher, cleaner, university lecturer, artist, electrician, office manager, solicitor, farm worker, chief executive, software designer, call centre worker, and postal worker. Cultural capital was measured by a count of "highbrow" and "emerging" cultural activities (where 1 = yes), per Bourdieu's description of cultural tastes.
Respondents selected activities they had engaged in within the 12 months prior to completing the survey from a list of cultural activities including: seen plays or gone to the theatre, watched ballet or dance, gone to the opera, gone to museums or galleries, listened to jazz, listened to classical music (classified as "highbrow"), and listened to rock and/or indie music, attended gigs, played video games, watched sports, exercised or gone to the gym, used Facebook or Twitter, done arts and crafts, socialised at home, listened to rap music (classified as "emerging"). Figure 1 shows the five social classes that resulted, which have been collapsed into three classes on the basis of compositions of more or less capital for the purposes of presenting findings: working, middle and affluent.
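For readers who want a concrete sense of how the three-domain scoring operates, the measurement described above can be sketched in code. This is an illustrative reconstruction only: the actual survey instrument, bracket encodings, weightings and class cut-offs are not reproduced in this paper, so every function name and combination rule below is a hypothetical simplification, not the authors' procedure.

```python
# Hypothetical sketch of the three-domain capital scoring described above.
# All encodings and combination rules are illustrative assumptions.

HIGHBROW = {"theatre", "ballet", "opera", "museums", "jazz", "classical"}
EMERGING = {"rock_indie", "gigs", "video_games", "sports", "gym",
            "social_media", "arts_crafts", "socialising_at_home", "rap"}

def economic_capital(income_bracket: int, property_value: float,
                     savings_bracket: int) -> float:
    """Combine income, property and savings into one score.
    A simple sum of ordinal brackets; the real weighting is unknown."""
    return income_bracket + property_value / 100_000 + savings_bracket

def social_capital(known_occupations: dict[str, bool],
                   prestige: dict[str, float]) -> tuple[int, float]:
    """Count occupations known in the respondent's network (yes = 1)
    and average their prestige scores (per an index such as the
    Australian Socioeconomic Index 2006)."""
    known = [occ for occ, yes in known_occupations.items() if yes]
    count = len(known)
    avg = sum(prestige[o] for o in known) / count if count else 0.0
    return count, avg

def cultural_capital(activities: set[str]) -> tuple[int, int]:
    """Count 'highbrow' and 'emerging' activities reported for the
    12 months prior to the survey."""
    return len(activities & HIGHBROW), len(activities & EMERGING)
```

Under this sketch, each respondent's scores would then be situated against the sample-wide distributions to derive the five classes shown in Figure 1 (collapsed into working, middle and affluent); the actual clustering procedure is not specified here.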
Women varied in their employment arrangements (full-time, part-time, unemployed, on a pension, retired); their experiences of COVID-19 countermeasures, per state-based differences (women living in Melbourne experienced far longer periods of lockdown in the 12 months preceding the interview than women in other states-one of the world's longest lockdowns-and this may have been relevant to their perceptions of alcohol and health risks [50][51][52]); and their living arrangements (alone, with others including children and/or a partner). Some women's living arrangements changed as a result of COVID-19 lockdowns, including through relationship breakdown. We recruited women who were alcohol drinkers at the time of the interview and were interested in moderating their consumption or abstaining-i.e., interested in the sober curious movement. Participants self-identified as 'occasional/light', 'moderate' or 'heavy' drinkers-there was some variation here, as the main phenomenon of interest was the practices and processes associated with reducing or moderating alcohol consumption (irrespective of the participants' initial level of consumption or relationship with alcohol). Self-report data was utilised because it captures women's self-perceived levels of alcohol consumption, which was fit for purpose for our study about women's perceptions of the possibilities for alcohol reduction that in turn influence their preparedness for sober curiosity.
Our sample was mainly Anglo-Saxon, although several European migrants participated, including one woman from Germany and one from Italy. We did not purposively recruit Aboriginal and Torres Strait Islander women nor women from South Asian or Middle Eastern ethnicities, as we acknowledge the specific historical, racial, religious and ongoing health and social inequities that would require an intersectional study. --- Interview Questions and Approach The interviews were open-ended, lasted on average 60 min and were focused on understanding the contexts that make midlife women interested in sober curiosity or willing to consider reducing alcohol consumption, by exploring the factors (practices, intentions, motivations, tools, social networks) that might enable alcohol reductions and allow women to consider reducing consumption. We aimed to explore how and why women's drinking practices develop, why they persist and how these influences shape continuation or change in consumption patterns. Women's perceptions about any changes that would enable reductions and what they anticipated gaining or losing from reducing alcohol consumption were explored, as were their perceptions of structural enablers and constraints on reduced intake for women 'like them'. Interviews were conducted and recorded via Zoom, transcribed using Otter.ai and refined by author BL. Only two women did not want to conduct the interview on Zoom (both in working-class positions, who expressed discomfort and distrust with online technology); accordingly, these interviews were conducted over the telephone and recorded via a digital recorder. We did not perceive a difference in rapport or the depth of information collected between the two mediums for data collection because we followed women's personal preferences. All participants received a $30 shopping voucher to recompense the resources they spent on participating.
--- Ethics Approval Ethics approval was provided by the Flinders University Human Research Ethics Committee. Consent to conduct and record the interview was sought verbally, provided by all women and documented via the interview recording. Pseudonyms are used here to present findings. --- Data Analysis Data were managed using QSR NVivo version 13 data analysis software. Analysis followed a rigorous method of pre-coding, conceptual and thematic categorisation and then theoretical categorisation [53]. Inductive coding was free-hand, paying attention to significant concepts. NVivo 13 was then used for conceptual and thematic categorisation. Using a combination of inductive and deductive logic [54], an initial coding framework comprising open coding and emerging ideas from literature and media reports on sober curiosity guided coding of all transcripts. Authors BL and PRW discussed themes to check for agreement in coding. Author PRW read data summaries developed by BL, and his expertise in social theories of risk and public health was utilised to expand theoretical interpretations and identify areas to improve explanatory rigour-particularly concerning women's motivations and intentions to reduce consumption, deductively inferring social class differences. Interpretive discussions also took place with author EN, who has undertaken research on sober curiosity in the UK. The coding framework was refined accordingly and applied to all transcripts. NVivo 13 was used to generate matrix coding queries across social class attributes and concepts/themes concerning sober curiosity; classed patterning was identified and directed researchers to key excerpts for closer reading. For example, classed patterns were obvious in the coding organised under concepts such as 'future expectations', 'health consciousness', 'class identity', 'sustainable change', 'normalisation of drinking' and across various positive/negative emotions that were described in the context of not drinking alcohol.
--- Findings and Discussion Two overarching themes relevant to public health understandings of women's willingness and capability to reduce alcohol consumption emerged through the process of thematic categorisation, capturing the factors that impact women's sober curiosity: women's perceptions of the possibilities available to them to not drink alcohol (or drink less), and the circumstances required for them to feel prepared to reduce alcohol consumption. Accordingly, findings are organised by social class advantage and disadvantage. The intent is to clearly distinguish social class-based patterns and differences in women's sober curiosity. This allows us to readily identify inroads for public health approaches to alcohol reduction messaging segmented by levels of advantage and disadvantage in women's resources and their life chances. --- Affluent Women's Preparedness to Reduce Alcohol Consumption: Desiring Self-Regulation through Sober Curiosity Affluent women in our study have given sober curiosity extensive thought, to the extent that several have self-imposed rules governing their consumption levels. For example, Rosie, who works in a full-time professional role that she describes as 'taxing' and has therefore limited her mid-week drinking, says: "I had all of these rules about alcohol. And for the most part very successfully, and certainly people's perception would have been that I was very successful in controlling my alcohol consumption [....] but what I actually felt was that all of these rules meant that alcohol was looming large in my life in a way that just didn't make any sense to me, like all of the rules actually meant that I was thinking about alcohol all the time." Affluent women's preparedness to reduce alcohol consumption seems to stem from a desire to self-regulate as part of a perceived need for self-control and health consciousness.
Women in our study refer to 'desirable' levels of consumption, using the drinking occasion or setting as a barometer for gauging what is 'too much'. For example, Gloria comments on her impetus to reduce consumption as a "realisation I'm above my weekly target" and that "it's not even 'special occasion' [type of] drinking". Her sober curiosity is an effort to "eliminate midweek drinking". Sonia expresses dismay at being successful in life but 'unsuccessful' at reducing alcohol consumption: "I suppose one of the things that I find really interesting about being a person who has an alcohol problem is that I am a kind of a high achiever at a ridiculous rate. So, I find it really hard that this is the one thing I can't solve" and says: "I suppose it just frustrates me [... ] I think the best me probably would be better without alcohol or to be able to manage it in a more appropriate manner". Sonia explains she is "not the only one". She elaborates that her girlfriends are "exactly the same" as her in terms of their alcohol consumption patterns and perceptions of alcohol-related risks, and remarks upon the "strong strings the alcohol has over them [all]", adding she has "self-determination in most ways, except that one". Sober curiosity appears to be inhibited by the perceived utility of alcohol consumption. For some it serves as emotional self-management, facilitating "winding down" and managing the strains on their time. For example, Penny explains: "it's [drinking] just about the fluidity of the moment and (being) able to move with it". Affluent women in our study mostly occupy full-time paid work roles in professional careers and some describe being at the "peak of my career"; they said they feel their work is demanding and rewarding and that it is integral to their personal and social class identity.
Affluent women use words like "hectic", "exhausted" and "achievement" frequently, and often in unison, as they rationalise alcohol consumption; as Penny explains, it "takes the edge off" when you've been "on the rails all week". For Gloria, sober curiosity competes with the value of alcohol and the ritualistic aspects of a drink when "switching off between work and home" and creating "headspace from work or demarcating relaxation time". She remarks that sober curiosity would be possible if she "switched the narrative slightly to be away from the default position of the glass of wine to an alternative". Beneath the seemingly simple desire for self-regulation, it seems Gloria, and affluent women like her, are cognisant of more complex social expectations regarding respectable alcohol-related behaviours; she describes feeling negative emotions when expectations are unfulfilled: "myself and probably quite a lot of my friends who are very competent, educated, high functioning people, that kind of shame does [factor in], I feel really ashamed that I can't get more on top of my drinking, because I feel like I'm pretty on top of most things in my life. But I do have... this sense of shame and that's the one thing I just can't seem to really get on top of. I feel a lot of shame about it, which is interesting... it gives me far more than it takes so it's really hard to... and it's all I've thought about doing [drinking]." Part of this shame would be relieved through alcohol reduction, which most of the affluent women explain would allow them better interactions and relationships with other people. Several of the affluent women who are mothers, such as Bronwyn, describe complexities with parenting teenagers or young adult children and concerns about "increasingly anxious young people", and feel that reducing alcohol would allow for good role modelling to their older children and relief from the burden of worry about them.
However, they also describe feeling "peer pressure" to drink when away on holiday with girlfriends, which as Bronwyn explains "leads to excessive consumption" and, as Gloria comments, leads to moments of feeling "regretful I crossed the line". Our analysis suggests reducing alcohol offered women a chance to cope with shame that they had failed in proper self-management at work or in their role as a mother. In some instances, the shame seemed to manifest in women trying to manage the contradictions and tensions that 'too much' alcohol consumption has with their identity as a 'good mother' [55,56]. For example, Bronwyn comments: "I wouldn't tell most of my friends how much I actually drink"; whereas Elizabeth describes a tolerance, not necessarily encouragement, of excessive drinking amongst her friends: "we drink a lot and it's very accepted". Both instances point to the possibility for alcohol reduction. For affluent women, the general need for self-preservation was part of the value of drinking alcohol in the first place; as Ellen explains when she considers what she would lose if she reduced alcohol: "you [would] lose the ability to hide and push things back", the "ability to be able to shut myself down and relax". Affluent women also spoke of feeling that their struggles were invisible: Elizabeth explains "we need recognition and to be recognised", referring to a lack of recognition for her achievement of juggling multiple competing family responsibilities and career success in a role where gender bias exists. The role of alcohol as a stand-in acknowledgment did not give women visibility but fulfilled their need to cope, and this is evident in Sonia's comment: "it's also because they're insanely busy. But then there is the sort of high functioning anxiety that comes with that busyness and that giving to everyone... " and "I have a lot of girlfriends who are about my age. So, in their 60s [...
] we're not really terribly alone in the sense that a lot of people who are drinking about a bottle of wine a day who really shouldn't be they're intelligent, capable and wonderful women who, for whatever reason, have been self-medicating [by drinking alcohol]". In such instances, affluent women like Ellen describe a flipside of reducing alcohol as adding to her existing mental load: "it would cause unnecessary strain to completely abstain which requires management and would induce more stress." Contemplating reducing alcohol clearly can feel like a lot of 'work' for affluent women when managing the 'mental load' in their lives, which is one of the reasons they gave for consuming alcohol in the first instance. For some affluent women the benefit of alcohol is felt so acutely it is perhaps impossible to see drinking as a problem; such as Penny, who says "the motivation hasn't been strong enough to lean that way" and "I don't see alcohol as a problem". The general sense was that giving up alcohol would need to result in a tangible, noticeable and worthwhile outcome for women. Bronwyn remarks that the benefits of alcohol reduction would have to be "really dramatic" in order to make the strain required to abstain "worth it". Affluent women perceive the idea of going without alcohol and completely abstaining as "confronting" (Ellen), and sober curiosity appeals because they could retain a sense of agency and control, which seems to feel gentler: "you don't have to give up all together, you can try and cut back" (Ellen). Affluent women want assistance to achieve "sustainable long-term change" (Rosie) and "a good quality of life as long as possible" (Gloria). Others feel dissonance with the religious and philosophical stance of programs like AA, or simply cannot 'see themselves' as represented within or suited to a program (an online app or social media account).
A peer-support network emulating current programs that have been successful for other population groups, but purposed for midlife women, may be suitable. Sonia suggests: "I wonder whether something like the [name of alcohol sobriety program], but geared toward women in midlife, and their reasons for consumption, the supportive platform like that might be a way forward because from the women I speak to, they are looking on social media. And I know I wasn't sure I do. Are you on Instagram? Everything almost everything they've got is on social media.... [it could be a] very effective platform for us." Bronwyn's narrative shows agreement with Sonia, and she recommends adaptations to the goals of the program to suit affluent midlife women's lived experiences: "I talk about this a lot with my friends, a bunch of extremely busy educated women with sort of still crazy lives... we talk about it quite a lot.... quite a lot of my friends are in the same situation they realise they're probably drinking too much... dependent on alcohol. Use it as a bit of a coping mechanism in life. I think sometimes then when you go down the [mentions alcohol sobriety program]... I just actually got so sick of reading these stories of women who say how their whole lives are transformed when they stopped drinking. I just stopped reading the articles halfway through, because I'm like 'no, that's not me', I don't want to totally stop drinking and alcohol isn't destroying my life, but I am probably drinking too much alcohol. I like this idea [sober curiosity] more around actually being curious, and thinking about maybe reducing, moderating, being very mindful that you're drinking, but not saying, my goal is to never drink again" (Bronwyn). Most affluent women in our sample felt it would be "appealing and desirable to go somewhere you can be a part of the normal" (Gloria), warranting the assimilation and normalisation of reduced consumption within women's social and leisure spaces.
--- Middle-Class Women's Preparedness to Reduce Alcohol: Sober Curiosity as Civility and Respectability Within our data, sober curiosity seems the most possible for middle-class women, for a multitude of reasons. These include women's concerns with the sustainability of drinking patterns (specifically an increase in the frequency of consumption) they had established during COVID-19 lockdowns and a realisation that the stresses that lead to drinking alcohol in order to cope have not gone away because COVID-19 continues to impact their lives: "COVID is going on for so long that [drinking a lot] is not sustainable" (Kelly). Sober curiosity arose through experiences that resulted in feeling like alcohol takes more than it gives-"alcohol as 'giving' is fiction" (Angie)-and feeling boredom with daily living and looking for a personal challenge: "the drinking has become mundane, not drinking is a new challenge" (Mandy). Some women link sober curiosity to the lifecourse and feeling "ready for a new phase of life" (Kelly), evident in statements such as "I am developing an awareness of my own mortality" (Sonia) and descriptions of non-drinking as "identity work that people are experimenting with" (Heather). The middle-class was the only social class group where women mention the lifecourse from a physiological perspective, and women's sober curiosity is encouraged when they feel reduced inflammation or hormonal dysregulation by not drinking alcohol: "the menopausal symptoms are better without alcohol" (Pamela) and "menopause and alcohol is a really bloody hard combination" (Raven). Civility and notions of respectability are key themes that emerged through our analysis of middle-class women's preparedness for alcohol reduction.
Compared to affluent and working-class women, middle-class women speak about sober curiosity with undertones reminiscent of neoliberalism, particularly prominent in their individual responsibilisation for drinking to 'excess'
and therefore, for making reductions. Several middle-class women's narratives suggest reducing alcohol is a sign of personal strength and resilience; for example, Alison says: "having a drink is nice to do when you're pretty wound up and weak" and Heather expresses feeling "more disciplined and structured" and that moderate drinking is "a standard [I] want to set". Even where social influences and the cultural acceptability of alcohol consumption are acknowledged, middle-class women link limitations in the possibility of sober curiosity to personal motivation. For example, when asked about the factors that make reducing alcohol (im)possible, Kelly responds: "Tough to know. I mean, some of it is socialising. I know that, because I've got friends who drink, family... they'll drink. So some of that will be that. So, who you are socialising with what you're doing.
So that'll be fun, you know, some friends particularly probably. Family somewhat... Cause I'll just be drinking. I think 'I'll just have one more' and I'm enjoying that taste and then you don't stop. So they're probably the main things... almost feels like willpower, really". Some middle-class women's logic for sober curiosity represents dutiful ideals and, in several instances, censure of 'irresponsible' behaviour. For example, Alison feels herself personally responsible for "breaking the chink in the chain of alcohol" and to "establish new patterns for ourselves" when she comments on Australia's heavy drinking culture. Alison says she feels "affronted by the (drinking) culture and participating in that"; she explains that this influences her preparedness to reduce alcohol consumption. Concerningly, Angie describes this personal responsibilisation for reductions as "laborious" and feels it results in women 'sneaking' alcohol and then justifying it as a valid reward; she describes feelings of guilt and sadness surrounding her 'failure' to moderate alcohol. Ruth explains her sober curiosity in the context of the social popularity of heavy drinking amongst her peer network (both close friends and more distanced peers such as colleagues and school or sport parents): she is willing to be "going against the grain". She is familiar with the alcohol guidelines for 'moderate' consumption in order to reduce health risk and remarks "that's what I will stick to", adding that "I'm not afraid not to be cool"; she connects this to personal strength and remarks "I'm strong enough [to reduce alcohol]". Her preparedness for sober curiosity seems motivated by personal responsibility to reduce drinking, and she uses words such as "destroy" and "sabotage" to describe drinking to excess and wants to avoid those negative experiences 'for and by herself'.
Ruth does acknowledge that mental health options are unavailable or under-utilised by women she knows and that alcohol's role as 'self-care or self-medication' makes sober curiosity less possible for them: "The women that are kind of using it as their medicine of choice, rather than getting mental health plans or whatever... they will fight you tooth and nail: 'No, no, nothing wrong with me' you know, like, 'No, no', they will not take mental health medication or seek the help or maybe it's too hard to get the help. So, they get out now just go get a drink. Alcohol works... that's like a chemist for them". Alison explains that for her, sober curiosity is possible because of "a social network that allows it". For other middle-class women such as Pamela, reductions are only possible by "flying under the radar and not making a scene of it"; among her social network she feels people hassle non-drinkers because they want their own drinking normalised. Certainly, middle-class women observe heavy-drinking norms among women like them; for example, Mandy comments on the culture of drinking among mums that she feels 'glamourises alcohol'. Joanne comments that it was difficult for her to feel part of her social network without drinking: "I often think afterwards, I didn't have the same feeling of having been fulsomely in the social situation when I've been not drinking... I have found that it hasn't felt the same kind of authentic socialising." Alison feels that "focusing on alcohol would expose a personal failing of not coping", potentially conveying to others that you are not an 'authentic' midlife woman and mother who fits the 'right' stereotype. On a similar theme of acceptability and expectations, Nancy remarks "my partner is resentful if I don't drink" because drinking is something she feels they do together and that symbolises to her partner that she is relaxed.
It seems alcohol reduction and interest in sober curiosity would each be more possible for middle-class women if their use of alcohol as a stand-in for absent support was taken seriously rather than joked about within a socially accepted culture of drinking, which women express 'plays down' the seriousness of their emotions and their emotional needs in relation to alcohol. Nancy says she is seeking options that are "wholly relaxing", not just "momentary": "alcohol is very momentary", and there is an absence of alternatives. Raven describes feeling she is living in an "invisible age" in terms of appropriate support: she feels there are suitable and purposeful mental health services for young people, and suggests that a group chat or mental health prevention forum would be good for midlife women so she "wouldn't feel so alone". Alongside this, alcohol is so readily available, she says: "it can easily get into the house", and she spoke about having used alcohol home delivery services. Raven feels alcohol relieves some of the burden of her caring role when she "couldn't leave [her] parents"; she describes feeling "hypervigilant" and says she was drinking in order to cope with the feeling of constant burden from caring. Our findings also suggest middle-class women have the affordances of resources to participate in periods of intermittent sobriety or 'fasts' (e.g., FebFast, Dry July and Sober October). Middle-class woman Nancy feels this offers a defined period of time (and thereby a sense of control and possibility), where no one questions alcohol abstinence and the philanthropic pursuit attached to fasts is considered a noble thing.
--- Working-Class Women's Preparedness to Reduce Alcohol Consumption: Complexities and Impossibilities for Sober Curiosity

Many working-class women in our sample describe feeling 'scared' of what life without alcohol would be like; quite a distinct difference from the narratives we heard from more affluent women, who describe reduction as difficult to achieve but not impossible. For working-class women, reducing alcohol seems particularly complicated, and our analysis reveals the breadth of the value of drinking for such women, a value that extends from the intoxication of the drinking occasion into recovering the day after. When we consider working-class women's preparedness for sober curiosity, we realise the deep layers of oppression that need to be peeled away before alcohol reduction can become a possibility for women living with considerable disadvantage, let alone something they feel prepared to do. For example, Barbara describes feeling a "deep seated loneliness" and explains she lives alone in a caravan park where she is unable to house a pet for company. The possibility of reducing consumption hinges on her finding happiness and confidence outside of alcohol: "I just want happiness and to be able to just go and do things and not need a drink to make me happy and outgoing". Alcohol consumption for each of the working-class women seems a key form of enjoyment (sometimes the main or only form), as Celeste remarks: "alcohol is an only form of fun or interest... something to look forward to", adding "we don't get many other opportunities to feel carefree". Sober curiosity would reduce working-class women's chances of reprieve from their hard lives, and would require presence when what is desired and needed is absence, or distance, from the difficulties of life, as Barbara explains: "I'm drinking just to take me away from everything" and "I'm drinking to numb negative thinking and horrible stuff".
This poses a crucial barrier to preparedness and stifles working-class women's possibilities for sober curiosity. The working-class women in our sample describe worrying about feeling exposed by having nothing to do, and this worry manifests in alcohol consumption, as Barbara explains: "I get lonely and I get bored. I lost a job back a year and a bit ago because I turned up at work smelling of alcohol and I have had more jobs since.... I drink alcohol like water and I was getting to the stage where I was drinking to combat the after effects of drinking that has been hair of the dog for [my] hangover". The possibility of sober curiosity seems non-existent in this continual and 'necessary' engagement with alcohol. Recovering from the night of drinking and 'nursing a hangover' seems to give some working-class women (without paid employment) something to do; something to manage in the absence of another plausible way to occupy time. Certainly, the hangover is a valid, if not sometimes revered, experience in Australia's alcohol-saturated society (it tells of a 'big night' spent drinking), perhaps more valid than having nothing to do, as Mary explains: "at least you have a hangover". Perhaps a hangover offers working-class women a distraction from the bleakness of life and its 'daily grinds'; certainly, Helen expresses concern about herself and women like her: "what would life look like if we weren't hungover". Mary explains: "I worry about what I would do with my time if I didn't have a hangover as an excuse" and "the alcohol numbs and the hangover provides a distraction". Having a hangover is spoken about in ways we interpreted as a productive means of demonstrating agency and regaining control: "actively engaging in disentangling yourself from real life". It seems that an extension of the numbing effect of inebriation is the dull-headedness of the hangover, and both are coping strategies.
It follows that limits on the possibility of sober curiosity among working-class women were also limits on their preparedness for alcohol reduction. Unlike affluent women, who could identify avenues of support for more moderate drinking, albeit not directed at their age group, working-class women cannot conjure up such support; as Celeste explains: "I think that's one of the things is you get scared about doing this, because you think I don't want to give up alcohol for the rest of my life. And I think it's [...] an either/or: either you drink or you give up completely. That's where I want to find that medium. Where do you talk to likeminded people who actually want to achieve that? Is there a group that you can actually say, 'This is what I want to achieve?'". The alcohol reduction programs that affluent women feel could be tailored to them cost money, and this precludes participation among less well-resourced women: "we need to make free the ones that sell you stuff and say the perk is reducing your money [spent on alcohol]" (Barbara). Some of the working-class women we interviewed describe feeling surveilled once they had searched for internet sources of alcohol reduction support, and prefer the idea of phone consultations. Celeste says: "I don't think I've necessarily found something that is just about trying to change my drinking a little bit [...] what I find interesting is obviously, everything's monitored, so you might search for that [reducing alcohol] and then all of a sudden, I'm getting all these adverts for basically how to deal with being an alcoholic and it's like, no, no, I'm not saying I'm an alcoholic. I certainly have a relationship with alcohol I don't need all that self-help stuff. So, I've not quite found anything yet that's supportive of women without it being we've all got a problem here sort of thing".
An advocate speaking on behalf of women in poverty delineates the layers of complexity that working-class women living on very low incomes (if not in poverty) experience in accessing suitable support for mental wellness in order to feel prepared to reduce alcohol consumption: "to get counselling, depending upon your age, you might need to go to a doctor and have a mental health plan and then you have X amount of sessions, and it's never free. It's just at a reduced cost. Then if you know of some community services, you need to actually do a whole lot of introductory work before you can even access that service. So it's a lot of cost and emotional work, and a willingness to put yourself out there before you can find any sort of level of support." She also sheds light on the limitations on social participation and the need for peer support opportunities that acknowledge the limitations working-class women face: "the ability to share with your peers... when you're engaged in a professional network, or you have a nice social network, you can bounce around what's happening to your life in a trusted supportive space with like others. But if you're isolated, or you're trying to pretend in a group... you start to isolate yourself away". She adds: "there's a whole lot of reasons why isolation, based on money is a real factor" and that we need to "create a way for people [women] to speak and make sure that there are different sorts of support networks for them". Another advocate reminds us that working-class women are also often single parents, which means that if they spend time engaged in self-care, their full-time 'position' as domestic labourer and carer needs to be 'backfilled'. Having a drink of alcohol as a stand-in support requires no engagement in any of these resource-intensive and emotionally intensive processes, and so this needs to be factored into reduction possibilities for working-class women.
The timing of delivery of risk reduction messaging is also crucial in the current economic climate: Australian women on low incomes are dealing with inflation in the cost of living and, if they have borrowed money, rising interest rates. As one of the women's advocates remarks: "why the hell would you give up [drinking] now?"

--- Class-Segmented Approaches to Alcohol Reduction for Women

Social class differentiated the factors that influenced women's preparedness for sober curiosity and, most critically, the contextual and social class frames for their drinking. More affluent and middle-class women discussed a desire for self-regulation and 'proof of willpower' as motivations for sober curiosity; most often this individual goal was nestled within a deeper social context comprising gendered social norms, where they felt personal responsibility to control the direction of their lives even when circumstances were outside their control. Our analysis suggests drinking alcohol is a means of managing social expectations, or of coping, for which women internalised responsibility; drinking was therefore something they felt they 'should' personally manage. For example, women with more advantage felt that alcohol consumption, rather than moderate or non-drinking options, is normalised in their social contexts (both face-to-face and online Zoom drinking sessions) and comprises practices from which they had positive experiences and gained social and cultural (social class position-affirming) capital. For women with less advantage, who consumed alcohol to manage difficult lives and negative emotions, and for whom alcohol consumption was less embroiled in socialisation and more a part of daily coping, preparedness for sober curiosity was particularly limited.
Extrapolating from our findings, below are options for alcohol reduction targeting the social and cultural contexts of consumption, that is, the things that lead midlife women to drink or to feel prepared to reduce drinking, rather than women's individual drinking routines or habits, segmented by levels of disadvantage. The options draw attention to and extend from the social factors and contexts that shape women's possibilities for sober curiosity, and most arise from midlife women's own ideas put forward during the interviews.

--- Drinking Culture and Social Expectations

The social settings women occupy that typically feature alcohol contain various avenues for supporting sober curiosity. Affluent and middle-class women, who recognised aspects of social acceptability and socialisation in their social class responses to alcohol reduction possibilities, noticed a general absence of non-alcohol options in settings where consuming alcohol is normalised. An obvious avenue for change is creating women's social events or themed activities that do not involve alcohol. Currently available options are often alcohol-themed, sponsored by the alcohol industry or partnered with alcohol products. Increasing the visibility of alcohol-free events and alternatives to alcohol in media marketing to women would support preparedness by demonstrating the social acceptability of sober curiosity for midlife women. This would directly respond to women who noticed a general absence of models of midlife women non-drinkers, and who commented that sober curious social media influencers in the same phase of life would help to normalise non-drinking among midlife women.

--- Increasing Support

Common to all the women in our study was an absence of support for mental wellbeing, though for different reasons on the basis of social class.
For affluent and middle-class women, feelings of stigma and shame exacerbated mental instability and complicated their feelings about admitting they were not coping with the struggles of multiple demands, and this reduced preparedness for alcohol reduction. Our findings suggest that for affluent women, preparedness would be supported by improving the availability of accessible support services, such as online forums (women commented on feeling time poor), and tailoring them to be sites where women can debrief with other women and seek solace. Our findings differ from previous research [20] in revealing that some affluent women do see their alcohol consumption as a 'problem', and this allows an openness to sober curiosity. According to the affluent women in our study, traditional sobriety programs designed to assist 'problem' drinking require a complete overhaul; they felt the sober curious movement is a 'softer' and more appropriate option. It would increase their feelings of social engagement and connectedness, allowing them to have open dialogue with other women about the struggles that result in alcohol consumption, in turn increasing their preparedness for alcohol reduction. For middle-class women, making mental health care more available and accessible, and particularly reducing the 'mental load' created by campaigns that 'responsibilise' women to 'consider their drinking', is critical. The normalisation of heavy consumption, and women hiding their consumption levels beneath dark comedy, continues to occur, and perhaps this is why women talk about the tensions they feel between drinking in order to be social and reducing alcohol consumption in order to manage health risks. For middle-class women, social media has a use-value for sober curiosity and could feature influencers as 'peers' who are reducing alcohol and are in the middle phase of life (not young women).
For less privileged women, suitable support groups where women are not tasked with quitting in the absence of other structural or emotional supports might be helpful. Interventions should be designed to reduce feelings of surveillance and be mindful not to exclude women because they are 'unable to invest in themselves'. Women on low incomes may be resource-poor in particular ways that make stopping, or even reducing, drinking feel difficult to imagine or contemplate. In formulating reduction options, consideration of the resources participation requires is warranted: trust, safety, literacy and the confidence to explain what they are feeling and needing, and to feel heard and supported. Women's shame about failing to embody the 'proper' level of constraint and self-control over their alcohol consumption (gendered shame) is exacerbated for working-class women by the shame that seemed to result from their stigmatised social class position.

--- Limitations and Areas for Research Extension

We cannot be sure how differences in women's life circumstances alongside their social class might increase their preparedness for sober curiosity; several women spoke about distancing themselves from parental heavy alcohol intake and one woman mentioned smoking cannabis as her substitute for alcohol. The absence of further exploration of how women's sober curiosity is shaped by factors in addition to social class is a limitation of this study, but seems a relevant point of inquiry for future studies.

--- Conclusions

This study represents innovation in public health research, exploring how current trends in popular wellness culture toward 'sober curiosity', normalising non-drinking or lighter drinking, and increased health consciousness could increase women's preparedness to reduce alcohol consumption.
It offers insight into how we can drive public health change effectively, and with sustained impact, to reduce population-level alcohol harms by reducing alcohol consumption among midlife women. Our findings reinstate the importance of recognising social class in public health disease prevention, validating that the socially determined factors which shape daily living also shape health outcomes, and that this results in inequities for women in the lowest class positions. In this case, the inequities are unequal opportunities to reduce alcohol and to reduce alcohol-related risks. By exploring modifiable alcohol consumption practices through women's social class rather than focusing on individual consumption patterns (which is proving ineffectual), we have provided ideas for structurally improving the conditions that would allow all women to feel prepared for sober curiosity. If translated into tailored social class-based options to support midlife women's sober curiosity, we can plan health interventions that are realistic within women's life contexts and therefore more likely to have a meaningful impact, and to progress toward achieving equity in the reduction of population-level alcohol harms.

--- Data Availability Statement: Data summaries can be provided upon reasonable request from the authors.

--- Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

--- Abstract

Background: Urgent action is required to identify socially acceptable alcohol reduction options for heavy-drinking midlife Australian women. This study represents innovation in public health research to explore how current trends in popular wellness culture toward 'sober curiosity' (i.e., an interest in what reducing alcohol consumption would or could be like) and normalising non-drinking could increase women's preparedness to reduce alcohol consumption.
Methods: Qualitative interviews were undertaken with 27 midlife Australian women (aged 45-64) living in Adelaide, Melbourne and Sydney, in different social class groups (working-, middle- and affluent-class), to explore their perceptions of sober curiosity. Results: Women were unequally distributed across social classes and, accordingly, the social class analysis considered proportionally the volume of data at particular codes. Regardless, social class patterns in women's preparedness to reduce alcohol consumption were generated through data analysis. Affluent women's preparedness to reduce alcohol consumption stemmed from a desire for self-regulation and to retain control; middle-class women's preparedness to reduce alcohol was part of performing civility and respectability; and working-class women's preparedness to reduce alcohol was highly challenging. Options are provided for alcohol reduction targeting the social contexts of consumption (the things that lead midlife women to feel prepared to reduce drinking) according to levels of disadvantage. Conclusion: Our findings reinstate the importance of recognising social class in public health disease prevention, validating that socially determined factors which shape daily living also shape health outcomes, and that this results in inequities in the opportunities for women in the lowest class positions to reduce alcohol and related risks.
the south (13.6 million) or from the north to the north (55.1 million). At present, most international migrants are of working age and live in Europe, Asia and North America (Figure 1). Apart from international migrants, an astonishing 740 million people are estimated to have migrated internally within their origin country. 1 Migration is as old as humankind. People have always moved in search of better living conditions for themselves and their loved ones, or to escape dramatic situations in their homeland. These two major drivers were the fundamentals of the 'push and pull' theory first proposed by Lee in 1966, 3 encompassing economic, environmental, social and political factors pushing the individual out of the homeland and attracting him/her towards the destination country. Lee's theory has the merit of being one of the first to try to identify, in a modern and scientific way, the drivers of such a complex phenomenon after Ravenstein first addressed them in Scotland in 1885. 4 The main elements of the 'push and pull' theory will also be considered in this article for didactic purposes, but the author recognizes that in the current global world reality is certainly much more complex and faceted, involving local national realities and macro-level causes as well as meso-level and micro-level causes, related respectively to the link of the individual to his/her ethnic or religious group and to the personal characteristics of the individual. 5 (Figure 2) Recently, the 'pull-push plus' theory has also been proposed, which considers predisposing, proximate, precipitating and mediating drivers of migration. 6 Regardless of the theoretical framework adopted, the topic addressed by this article is difficult because sound scientific data are scarce, and the existing literature is mainly qualitative and often presented as grey literature.
In addition, geographical and cultural elements may influence the weight of a single determinant in different continents and in different periods. Finally, although the various drivers will be presented separately, we recognize that they are part of a unique complex scenario in which they strongly interact.

--- Definition of migrants

According to the International Organization for Migration (IOM), a migrant is 'any person who is moving or has moved across an international border or within a State away from his/her habitual place of residence, regardless of (1) the person's legal status; (2) whether the movement is voluntary or involuntary; (3) what the causes for the movement are; or (4) what the length of the stay is', a broad definition indeed. Under such a definition, and strictly limiting our analysis to south-to-north migrants, two major broad categories may be identified: (a) labour (or economic) migrants (and family reunification) and (b) forced migrants (asylum seekers and refugees). Their reasons to migrate may differ, even if the differences between the two categories are probably smaller than once estimated, and the same migrating individual may fall into both categories at the same time. 5 In this respect, it is useful to report below the concise definitions of asylum seekers and refugees from the IOM. 7

--- Asylum seeker

A person who seeks safety from persecution or serious harm in a country other than his or her own and awaits a decision on the application for refugee status under relevant international and national instruments. In case of a negative decision, the person must leave the country and may be expelled, unless permission to stay is provided on humanitarian grounds.
--- Refugee

A person who, 'owing to a well-founded fear of persecution for reasons of race, religion, nationality, membership of a particular social group or political opinions, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country' (Geneva Convention, 1951, Art. 1A).

--- Drivers of migration

The factors acting together and determining the final decision of an individual to migrate may be subdivided into macro-elements (largely independent of the individual), meso-elements (more closely related to the individual but not completely under the individual's control) and micro-elements (personal characteristics and attitudes). Those that have been more extensively studied will be discussed in this article.

--- Inadequate human and economic development

Human development is enormously unbalanced across the various regions of the planet and the gap is increasingly wide. The economic and political reasons underlying this sad situation are beyond the scope of this article and will not be addressed here. The Human Development Index (HDI), proposed by the United Nations Development Programme (UNDP), is a composite index combining the performances of different countries on health (life expectancy), education (years of schooling) and economics (per capita income). The 2016 HDI top ranking includes 15 western countries (11 European, 2 North American, 2 in Oceania) and 5 Asian countries among the first 20 ranked nations. 8 At the opposite extremity of the list, 19 out of the last 20 nations with the lowest HDI indexes are from Africa, a striking difference. However, during the first decade of the new millennium, many African countries experienced remarkable economic growth, with gross domestic product (GDP) increases exceeding 5% on average according to the International Monetary Fund.
Unfortunately, the consequent relative wealth has not been equitably distributed in the population, and the subsequent world economic crisis since 2011 has slowed the economic performance of most African countries to a bare 2% yearly GDP increase. As a consequence, most jobs in developing countries are still in the informal sector, with little salary and social protection, thus nurturing the will to find better job conditions elsewhere. Low performance in the health, education and economic sectors is a reflection of the vulnerability of the health, education and productive systems, which is caused by the lack of economic and human resources. With particular regard to the health sector, such situations, which provide little professional and economic motivation, pave the way for qualified health professionals to leave their origin countries, a phenomenon known as 'brain drain' that creates a vicious circle. Poor health services, a little-educated and little-qualified work force and poverty are a fertile background promoting the migration of individuals in search of a better life. New communication technologies, largely available in urban settings even in developing countries, allow people to compare the western lifestyle with local situations, where the luxurious houses and cars of expatriates (and local authorities...) often contrast with the poor living conditions of the local populations. The gradient of prosperity: migration and development are strictly linked and influence each other. Paradoxically enough, in fact, migration may be driven both by a lack of development and by increasing socioeconomic development in a specific country, at least in the initial phase.
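To make the construction of the HDI discussed above concrete, the following is a minimal sketch of the UNDP post-2010 methodology (geometric mean of three normalised dimension indices). It is not taken from this article: the goalpost values are those published by the UNDP in recent Human Development Reports, and the education index, itself a combination of mean and expected years of schooling, is passed in pre-computed for brevity.

```python
import math

def dimension_index(actual, lo, hi):
    """Normalise an indicator onto [0, 1] between the UNDP 'goalposts' lo and hi."""
    return (actual - lo) / (hi - lo)

def hdi(life_expectancy, education_index, gni_per_capita):
    """Sketch of the post-2010 HDI: geometric mean of three dimension indices."""
    # Health dimension: life expectancy at birth, goalposts 20 and 85 years.
    health = dimension_index(life_expectancy, 20, 85)
    # Income dimension: log of GNI per capita, goalposts $100 and $75,000 (PPP).
    income = dimension_index(math.log(gni_per_capita),
                             math.log(100), math.log(75_000))
    # Geometric mean penalises uneven development across the three dimensions.
    return (health * education_index * income) ** (1 / 3)
```

The geometric mean (rather than the arithmetic mean used before 2010) means a very low score in one dimension cannot be fully offset by high scores in the others, which is why the lowest-ranked countries in the text score poorly across health, education and income together.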
9

--- Demographic increase, urbanization

The world's living population has increased in an unprecedented way during the last two centuries, from an estimated 1 billion in the year 1800, to more than 6 billion at the beginning of the third millennium, to the roughly 11 billion that will probably inhabit the earth in 2100. 10 The bulk of this massive increase is taking place in Asia and Africa, where high fertility rates, driven by infant mortality, and poor birth control programmes result in high annual population growth rates. On the contrary, fertility rates in western industrialized countries are shrinking. According to the World Bank, the average fertility rate in high-income countries was 1.7 children per woman in 2015, while it was 4.8 per woman in low-income countries. 11 As a global result, the population of western industrialized countries is reducing in size and getting progressively older (an aging population), while the young working-age population of the developing countries is rapidly increasing. The African continent offers a striking example: from 493 million in 1990, the African population grew to 1 billion in 2015, and it is expected to rise to 2.2 billion in 2050 and to 4 billion in 2100. 12 With particular regard to the African continent, the increasingly young population will probably exceed by far the otherwise improving, but not equitably distributed, economy, giving origin to the so-called 'jobless generation' phenomenon. This means that the increasing global wealth is not mirrored by a proportional number of jobs to satisfy the increasing expectations of the growing skilled young generation, at least in the short-medium term. 13 As a matter of fact, the flow of migration in relation to demographic increase could also be regarded in the opposite way, raising the question 'why do so few people migrate?'
14 In fact, even if the stereotype of migration proposes a model of 'mass' invasion of rich countries by migrants from low-income countries in terms of absolute numbers, the proportion of people migrating is quite stable (3.3% of the world population in 2015, against 2.4% in 1960).

--- Climate changes

It is now almost universally accepted that the climate is becoming warmer at an increasing speed, causing health inequalities across the world 15 apart from other unwanted effects. It is also accepted that the driving causes of such climate change started with the industrial revolution, are mainly anthropogenic in nature and are largely due to the emission of greenhouse gases (in particular CO2, methane and nitrous oxide) by industrial activities relying on carbon-based energy. It has been estimated that 97% of such emissions occur in industrialized rich countries, leaving a mere 3% coming from low-income countries. 16 The impact of climate change is astonishingly severe in the south of the world, where 150,000 people are estimated to have died in 2000 from the consequences of planetary warming. 17 Drought, flooding and increases in arthropod-borne infections due to vector spread, in regions where control measures are difficult to implement because of scarcity of means, also indirectly impact morbidity and agricultural revenues. The case of Lake Chad is extreme but enlightening: of the nearly 25,000 square kilometres Lake Chad covered in 1963, its waters now span barely one-twentieth of the original extent, with a severe impact on the fertility of the surrounding land. This shortage of water, food and agricultural resources forces people and livestock to move in search of a less hostile environment. 1 Examples of land degradation induced by climate change are multiple and represent a driving force for people to migrate by producing food insecurity and the risk of health-related crises.
18 According to the IOM, environmental migrants are those 'persons or groups of persons who, for reason of sudden or progressive changes in the environment that adversely affect their lives or living conditions, are obliged to leave their habitual homes, or choose to do so, either temporarily or permanently, and who move either within their country or abroad'. 19 It has been suggested that the environment may impact on migration flows by directly affecting the hazardousness of place, but also by indirectly changing the economic, political, social and demographic context, with very complex interrelationships. 20 'Climatic migrants', as they are sometimes called, might reach the astonishing figure of 200 million by the year 2050, according to the IOM. 21 However, forecasts are difficult to make because sound scientific data on this topic are extremely scarce and do not permit reliable estimates. 22 The assessment of the real impact of worsening environmental conditions, albeit logical, would greatly benefit from sound research studies. --- Wars and dictatorship Even now, at the beginning of the third millennium, many areas of the world, in virtually all continents, host bloody conflicts and social instability, where armed parties fight or where brutal dictatorships rule and deny social rights. Some are well known to the public (e.g. Syria and Afghanistan), while others are not, as is the case of the Horn of Africa (Eritrea, Somalia), some areas of West Africa (Mali, Gambia) and the Sahelian region, or parts of Central and South America. 1 People may be denied basic human rights, and access to education and to a dignified life may be prevented, especially for females. Fundamentalism in such countries may easily grow, as is the case with the deadly activities of Boko Haram in Northern Nigeria, which are estimated to have caused the internal displacement of nearly 2 million people.
23 It is to be noted that the majority of displaced people in warring nations are relocated within national borders; officially, they are thus not considered international migrants, but rather internal refugees. --- Land grabbing Land grabbing is a phenomenon that has become increasingly important since the beginning of the new millennium. The term refers to the intensive exploitation of vast rural areas of low-income countries by private international enterprises, or even by foreign governments, in order to implement large-scale intensive cultivation (mainly biofuels and food crops) or to exploit minerals, forestry or the tourist industry. This happens to the detriment of the poor local population, which is poorly (and often forcibly) compensated and virtually obliged to leave the rural areas for the degraded urban peripheries of their own countries, where they often live a difficult life in a setting different from the one they and their families have experienced for centuries. Psychological and physical impairment is frequent in such communities, and international migration may then follow. Apart from this direct impact, whereas the economic benefit of small-scale agriculture accrues to the local communities, the intensive exploitation of land that follows land grabbing mainly benefits private-enterprise stockholders and the international market, 24 leading to the progressive impoverishment of an increasingly resource-poor country. Together with the environmental damage due to climate change, the loss of small-scale land ownership and its conversion to intensive exploitation causes progressive land degradation, which leads to the progressive abandonment of native lands by masses of people. 25 --- Religion This issue will only be briefly alluded to, as it is too wide and complex to be adequately addressed in this context.
The history of humankind offers many examples of mass population movements caused by religious persecution, or following the dream of a land where individual faith could be freely practised. However, these movements have often been the consequence of a political will, as was the case with the conflict-ridden movements of Muslims, Hindus and Sikhs across the newly created border between India and East Pakistan (now Bangladesh) in 1947. Similarly, Jews flowed to Palestine after the Second World War, also attracted by the Law of Return, which favoured the migration of Jewish people to the new state of Israel. In many other instances, religion has been the pretext for ethnic persecution and expulsion, as is possibly the case for the Rohingya Muslim population of Myanmar, or for the mass movements caused by armed fundamentalist groups such as Daesh and Boko Haram in the Middle East and sub-Saharan West Africa, respectively. --- Sexual identity A number of countries have quite restrictive policies on sexual identity, and LGBT people (lesbian, gay, bisexual and transgender people) face psychological and even physical violence, forcing them to hide their sexual identity. The impact of such policies on international migration has recently been the subject of investigation that is still in its infancy. There is no doubt, however, that an impact exists, especially from countries where 'machismo' is considered a value. 26,27 A comprehensive overview of the issues related to the protection of the social rights of people forced to migrate because of their sexual orientation may be found in the 2013 thematic issue of Forced Migration Review. 28 --- Education A final note has to be dedicated to the education level of migrants. International migrants are often regarded as illiterate, poor people escaping poverty from remote rural areas. In most instances, this stereotype is far from true for both economic and forced migrants.
Migrants in search of a better future usually show more initiative and boldness than the average person, with some of the skills and financial resources needed to plan and fund a long-distance journey, as is the case for international migration. 29 In most instances, they are more educated than the peers they leave behind in their origin country. 30 Sometimes they are even more educated than their peers in the destination country. 31 In addition, individuals from families or communities that have already had positive experiences of migration in previous years are more inclined to migrate, as travel abroad is regarded as potentially beneficial to the origin society. 5 For such individuals, the existence of ethnic or family links in the destination country is a further driver of migration. The relationship between education and migration is twofold. On one side, the migration of educated people from low- and middle-income countries to OECD countries constitutes a net loss of qualified human resources for the origin countries and a gain for the host country, a phenomenon known as 'brain drain'. On the other side, the financial and ideational remittances from destination countries may also have an impact on the education of non-migrant children and adolescents in the origin countries. 30 --- Personal willingness to migrate All the above drivers of migration act, with different strength in different places, to build the general frame at the macro-level of each specific geographical, economic and political situation. However, the meso- or even micro-levels are also important in driving the final choice of the individual to migrate. The influence of the ethnic group and of family support, both economic and societal, is of the utmost importance for a specific individual making the final choice to migrate or to stay.
Educational level and access to the financial means needed to afford the journey have already been discussed above, but other factors such as ethnic and social customs are also important. The aspiration and desire to migrate is a key factor that interacts with the other, external drivers of migration to shape the final decision to actually migrate. 32 --- Health challenges in the destination country Regardless of the mix of drivers leading any individual person to migrate, migrants usually undergo a difficult integration process in the hosting community. Conversely, the receiving country may be obliged to adapt its social and health systems to meet the needs of the hosted population. In many instances, this process is not without conflict, given the cultural and economic adaptations it implies. From the health point of view, although generalization is inappropriate given the heterogeneity of provenance and of disease epidemiology in the origin countries, newly arrived migrants are usually healthy (the 'healthy migrant' effect) but more affected by latent infections than the host populations, 33 requiring screening policies and linkage to care. Crowded and inadequate living conditions in hosting camps may also lead to infectious disease outbreaks, as recently reported in France. 34 However, despite the reported higher prevalence of selected infections in migrants, including potentially transmissible respiratory tract infections, the risk of significant spread to the receiving populations has been reported to be negligible, if any. 35 Once resettled in the host country, foreign-born individuals may face infectious exposure when travelling back to their countries of origin, often accompanied by children born in the host country. They are then referred to as VFRs (visiting friends and relatives) and represent a significant proportion of imported diseases in western countries, as is the case for imported malaria.
36 Pre-travel advice for such VFR populations poses significant challenges in delivering adequate preventive measures. 37 The burden of non-communicable diseases (diabetes, hypertension, metabolic disorders, cardiovascular diseases, etc.) is also increasing among migrants, as a result of changing dietary habits in developing countries and of the progressive acquisition of western lifestyles after a few years in the receiving country. 38 Finally, the cultural interaction between the migrant patient and the care provider is often not without conflict. The emphasis on the possibly exotic nature of otherwise ubiquitous illnesses or, on the contrary, the underestimation of culturally bound complaints (cultural barriers) are often aggravated by linguistic barriers, leading to potential medical errors. Knowledge of culturally sensitive medical issues, such as genital mutilation, is generally poor among western physicians, requiring specific training and research. 39 --- Conclusions In conclusion, the migration flow is now a structural phenomenon that is likely to continue over the next decades. While many migrants from low-income countries aim to reach the more affluent areas of the world, it should be appreciated that a similar, or even bigger, mass of people migrates to neighbouring low-income countries in the same geographical area. Migration is always the result of a complex combination of macro-, meso- and micro-factors, the former acting at the society level and the latter at the family or even individual level. The prevalence of one factor over the others is unpredictable. Among the 'macro-factors', the inadequate human and economic development of the origin country, demographic increase and urbanization, wars and dictatorships, social factors and environmental changes are the major contributors to migration. These are the main drivers of forced migration, whether international or internal.
Among the 'meso-factors', which link the individual to his/her ethnic group or religious community, land grabbing, communication technology and diasporic links play an important role. The role of communication technologies and social media in attracting people out of their origin countries is indisputable today. Awareness of living conditions in the affluent world, albeit often grossly exaggerated, contributes to nurturing the myth of western countries as an Eldorado. The ease of communication with the diaspora and with family members who migrated previously reinforces the desire to escape poverty for a challenging new life abroad. However, 'micro-factors' such as education, religion, marital status and personal attitude to migration also play a key role in the final decision to migrate, which is ultimately an individual choice. In any case, the stereotype of the illiterate poor migrant coming from the most remote rural areas and reaching the borders of affluent countries does not hold. The poorest people simply do not have the means to escape war and poverty and remain trapped in their own country or in a neighbouring one. Some degree of entrepreneurship, education, and social and financial support is usually required for international south-north economic migration, and personal characteristics and choices also play a role. This phenomenon has a positive aspect, as the chances of migrants' success increase, as do remittances, but also a negative one, as the most active part of the origin country's population may be drained away, hindering local development. Usually, even if generalization is inappropriate, newly arrived migrants are in good health, despite a higher prevalence of latent chronic infections (the 'healthy migrant' effect). However, marginalization in the host country may lead to a deterioration of this health status, a phenomenon known as the 'exhausted migrant' effect.
Host countries, which may also derive an economic benefit from migration in the medium to long term, have to be prepared to receive migrants, for the benefit of the migrants themselves and of the native population. --- Conflict of Interest None declared. | More than 244 million international migrants were estimated to live in a foreign country in 2015, leaving aside the massive number of people relocated within their own country. Furthermore, a substantial proportion of international migrants from southern countries do not reach western nations but resettle in neighbouring low-income countries in the same geographical area. Migration is a complex phenomenon in which 'macro'-, 'meso'- and 'micro'-factors act together to inform the final individual decision to migrate, superseding the simpler earlier push-pull theory. Among the 'macro-factors', the political, demographic, socio-economic and environmental situations are major contributors to migration. These are the main drivers of forced migration, either international or internal, and are largely outside individuals' control. Among the 'meso-factors', communication technology, land grabbing and diasporic links play an important role. In particular, social media attract people out of their origin countries by raising awareness of living conditions in the affluent world, albeit often grossly exaggerated, with the diaspora link also acting as an attractor. However, 'micro-factors' such as education, religion, marital status and personal attitude to migration also have a key role in making the final decision to migrate an individual choice. The stereotype of the illiterate, poor and rural migrant reaching the borders of affluent countries has to be abandoned. The poorest people simply do not have the means to escape war and poverty and remain trapped in their country or in a neighbouring one.
Once in the destination country, migrants have to undergo a difficult and often conflict-ridden integration process in the hosting community. From the health standpoint, newly arrived migrants are mostly healthy (the healthy migrant effect), but they may harbour latent infections that call for appropriate screening policies. Cultural barriers may sometimes hamper the relationship between the migrant patient and the health care provider. The acquisition of western lifestyles is leading to an increase in non-communicable chronic diseases that require attention. Destination countries have to reconsider the positive medium/long-term potential of migration and need to be prepared to receive migrants for the benefit of the migrants themselves and of their native population. |
INTRODUCTION The health of the oral cavity is part of the overall health of the body, because the mouth is the initial entrance of food into the body. If the oral cavity is not healthy, it can contribute to various diseases, such as cardiovascular and respiratory diseases, as well as dental abnormalities. The proportion of dental and oral health problems in Indonesia is still high at 57.6%, and around 68.9% in South Sulawesi, which in turn contributes to the prevalence of tooth loss in Indonesia, itself still quite large at 24.52%, and of cavities at 45.3%2. This shows that the prevalence of tooth loss still requires serious attention, corroborated by tooth loss data reaching 23% at ages >60 years and 7% at ages >20 years 3. The complexity of dental problems and their treatment has prompted various responses from the community in the search for treatment, both modern and traditional. Although the era is sophisticated, dental care is still traditionally preserved for generations by local communities and has become a characteristic of an area. The inheritance of local culture, which has become a value of trust in the community and is preserved by each generation, is a way to honour its predecessors, and this tradition is held as true (Hindaryatiningsih, 2016) (Ushuluddin, n.d.) (Asrina, 2018). Traditions related to dental care are maintained and not easily eroded because of support from the family or social environment in which the individual grows and develops (Syahrani, 2020). Attention to dental health is very important, but it is necessary to note the impact that can be caused if care is performed without expertise, merely to preserve tradition. Data released by Basic Health Research in 2018 state that there is a high proportion of dental and oral health problems in Indonesia at 57.6%, and 68.9% in South Sulawesi, which has an impact on the prevalence of tooth loss and cavities.
One of the reasons is that people usually only have their teeth checked when they experience complaints that they cannot treat themselves. A traditional dental-care habit still practised in the Bugis community of South Sulawesi is mappanetta' isi (repairing/strengthening the position of the teeth). The tradition is carried out not only in childhood but into adulthood, because it is believed to strengthen and tidy the arrangement of the teeth. It is introduced and practised by parents with their children from an early age, especially once permanent teeth have grown: the child bites firmly into a rolled cloth so that the teeth of the upper and lower jaws meet, while rubbing left and right and back and forth, each morning, usually for a week, and it can be repeated at any time. The data obtained indicate that pain and soreness are felt after doing mappanetta' isi, possibly due to the strong pressure applied when biting the cloth; however, this does not last long, and the next morning mappanetta' isi can be resumed. This study is important in order to analyze family support for the preservation of the mappanetta' isi tradition in relation to dental health. In modern medicine, if pressure is applied continuously without knowing how much force is exerted, it can result in Trauma From Occlusion (TFO), i.e. occlusal trauma to the patient's teeth, disorders of the muscles and jawbone, or injury to the periodontal tissue. Based on the background described above, the purpose of this study is to analyze family support for the mappanetta' isi tradition as an effort to maintain dental health in the Bugis community in South Sulawesi.
--- METHODS This study used qualitative methods (Bungin, 2020) to interpret and analyze in depth family support for mappanetta' isi as a cultural tradition of local wisdom of the Bugis in Wajo Regency in maintaining dental health. Data were obtained through observation, in-depth interviews and documentation related to mappanetta' isi activities during the study. The research was located in Wajo Regency, South Sulawesi. --- Data types and sources Primary data were obtained directly at the research site using interview guidelines and observation sheets, as well as measurement of the degree of tooth mobility (measurement data from the study). Informants were selected purposively: 1 key informant (a religious and community leader), 4 main informants (people who carry out mappanetta' isi activities), and 1 supporting informant (a dentist). Secondary data took the form of dental patient visits and dental and oral disease records obtained from the Health Office, books and related journals. Data collection consisted of observations of the direct implementation of mappanetta' isi and the interaction of each party involved; in-depth interviews with the three categories of informants regarding the tradition, beliefs, family support and impacts or complaints related to mappanetta' isi activities; and documentation of the implementation of the mappanetta' isi tradition. Data analysis used thematic analysis to find patterns of meaning in the collected data; data validity relied on source triangulation, technique triangulation and time triangulation. Observations were to be extended if additional data were required.
--- RESULTS AND DISCUSSION The research was conducted in March-May 2023 in Wajo Regency, South Sulawesi, with 4 main informants, 1 supporting informant (the local dental health officer) and 1 key informant. Based on table 1, the ages of the informants carrying out the mappanetta' isi tradition during the study show that not only children but also adolescents and parents still practise it; this confirms that if individuals consider a habit positive for themselves, they will preserve it. In terms of occupation and education, the mappanetta' isi tradition developed in the Bugis community in this study and formed habits and values for its practitioners regardless of their level of education or occupation. The results on family support for the preservation of the mappanetta' isi tradition show that it was generally introduced by parents who themselves still maintain the custom, as revealed by an informant: "I do mappanetta' isi because I believe it keeps my teeth from quickly becoming sippo (toothless); my parents always used to remind me, and now I always remind my children and grandchildren" (D, 51 years). The same was expressed by another informant: "I have done mappanetta' isi every morning upon waking since grade 6 so that my teeth stay strong and do not fall out quickly, as my parents told me and did themselves" (NH, 20 years). The system of inheritance of traditions carried out by parents will be carried on by their children, especially if it is easy to practise, as revealed by informants. Based on table 2, regarding the frequency of informants by degree of tooth mobility according to Miller, it was found that in general all 5 informants had degree 1 tooth mobility, i.e. mild mobility.
This happens because the informants generally reported doing mappanetta' isi and routinely visiting the dentist twice a year, so that the process of tooth growth and development is still well maintained. Based on the information described, the themes and meanings obtained from family support for the mappanetta' isi tradition can be summarized as follows. Many habits related to health grow and develop in society, including in South Sulawesi, which consists of several tribes and customs that are still preserved today. One of the customs still carried out in the Bugis community is mappanetta' isi, which begins in childhood. Mappanetta' isi is a traditional dental treatment intended to keep the teeth strong and not easily lost, done by biting the upper and lower teeth together on a cloth, usually in the morning. The tip of the cloth is twisted and then bitten while the upper and lower teeth move back and forth; the same is done to the left and right, the goal being to make the jawbone more solid so that the teeth do not shake easily. Another benefit reported by people who practise mappanetta' isi is that they do not easily experience dental disorders in old age. Dental problems, especially periodontal problems including tooth loss in old age, are indeed the most common complaints and are usually caused by a lack of attention to dental and oral hygiene and care (Mappanetta, 2023) (Sari, 2015). --- Vol. 11 Issue 1SP, August 2023, 40-45 doi: 10.20473/jpk.V11.I1SP.2023.40-45 ©2023. Jurnal Promkes: The Indonesian Journal of Health Promotion and Health Education. Open Access under CC BY-NC-SA License. Received: 09-06-2023, Accepted: 02-07-2023, Published Online: 02-08-2023
Based on the results of the study, it was found that the habit of mappanetta' isi persists because its benefits have been felt by earlier generations in the Bugis community; there is a sense of satisfaction and trust in traditional dental care as a means of keeping the tooth structure strong, so the practice was continued by subsequent generations. According to Sutana (Wilson, 2019) (WHO, 2022), research on the Nginang tradition reveals that people's practices determine their usefulness: habits within the activities of a tradition show that society manifests itself in the context of time and space. In relation to this study, activities carried out for generations are efforts to preserve culture, expressed orally and in behavior, aimed at maintaining healthy teeth and gums. In terms of the theory of behavior formation by L. Green 14, the family is a reinforcing factor in behavior, especially if there are facilities that support the formation of that behavior, as with the tradition of mappanetta' isi. The role of the family in the formation of behavior strongly influences the individuals in the family; the values taught will be internalized and become habitual patterns that are passed on to subsequent generations. The family is the most important part of social life and the first source of support a person receives (Yusselda, 2016). Likewise, among the family functions described by Friedman, families provide informational, instrumental, appraisal and emotional support, so that modeling or inheritance of behavior occurs in the home environment (Friedman, 2010).
In the mappanetta' isi tradition, the family performs its function by providing emotional support in the form of attention and concern for the children; informational support by introducing and providing information about and understanding of mappanetta' isi; instrumental support by facilitating the practice with materials found in the house, such as a sarong cloth as the base for biting; and appraisal support in the form of motivation and direction so that children take care of their teeth, keeping them neatly arranged and not easily loosened, through mappanetta' isi. In the process of inheriting cultural values and traditions in the family environment, parents play their part by carrying out habits that are seen, noticed and internalized by the children, so that these cultural values are felt and carried on from generation to generation (Hindaryatiningsih, 2016). --- CONCLUSION Based on the results of the study, it can be concluded that the role of family support in the preservation of mappanetta' isi takes the form of a process of socializing values: introducing and practising the tradition, sharing experiences and paying attention to the aesthetics of the children's teeth, so that it is internalized and functions as a habit from generation to generation. | Background: Dental and oral health is very important because it is one of the most frequently reported indicators of individual health and disease, and various means are used by individuals to maintain dental health, both medical and traditional. Various traditions of caring for teeth have been carried out for generations by the community, one of which is mappanetta' isi, a traditional way for the Bugis community in South Sulawesi to maintain dental health. Objective: To analyze family support in the mappanetta' isi tradition as an effort to maintain dental health in the Bugis community in South Sulawesi.
Method: The research is descriptive qualitative with an ethnographic approach, using observation, in-depth interviews and documentation during the research. Informants consisted of 1 community leader as key informant, 1 health officer and family members as supporting informants, and 4 main informants, purposively selected using the criterion of Bugis community members who preserve the tradition of mappanetta' isi. The collected data were reduced, categorized and presented in narrative form. Data analysis used taxonomy; data validity relied on triangulation. Results: It was found that the preservation of the mappanetta' isi tradition cannot be separated from family and other social support. The role of the family is to introduce and carry out the tradition across generations using simple tools; this strengthens the instillation of the values and beliefs of the Bugis community in Wajo Regency concerning mappanetta' isi dental care. Conclusion: Family support is a factor in the preservation of the mappanetta' isi tradition, so that the custom remains accepted and is still practised today. |
Epub ahead of print --- Introduction Retirement is a multifaceted decision with broad economic and social implications for modern society. Leaving the workforce has an inextricable relationship with health outcomes and subsequent public policies (1)(2)(3). However, studies of the costs and benefits of retirement for health and wellbeing have produced conflicting results (1,2,(4)(5)(6). Some reviews find that retirement decreases stress, improves health perceptions, and lowers the severity of medical diseases (1,2). Other studies, however, found that retirement increases social isolation (7), and some even advocate the benefits of working beyond retirement (8). Evidently, different phases of retirement (i.e., early vs. late retirement, etc.) have unique effects on different dimensions of health (i.e., mental health, mortality, frailty, etc.) (2). Demographic factors like age, gender, education, marital status and socio-economic status moderate the relationship between retirement and health (1,2). There is also little consensus regarding the costs and benefits of retirement for health across different countries (9). This suggests that there is substantial heterogeneity of health experiences among retirees, making the relationship unique to each society. Several Asian countries have conducted longitudinal aging and retirement studies (10)(11)(12)(13), but none exists in Singapore (14,15). Singapore has experienced a significant decline in birth rate since the early 1970s, with historic lows averaging 1.19 over the past ten years (16,17). In 2018, the number of older adults aged 65 years and above equaled that of youths 15 years and below, described as a "demographic time bomb" (16). By 2030, youths 15 years and below will fall to 11% of the population, while older adults 65 years and older will reach 27% (16,18). Simultaneously, disability prevalence was projected to grow five-fold in 40 years (19).
Medical ailments like cardiovascular diseases (14.2 percent of total disability-adjusted life years), cancers (13.3 percent), musculoskeletal disorders (12.6 percent) and mental disorders (10.2 percent) will be the top leading causes of disability-adjusted life years from 2017 in Singapore (16). As a result, lifetime hospital expenditure is projected to increase by 30%, posing economic and policy difficulties for long-term care (20,21). Based on these predictions, trajectories and profiles of retirement and health are likely to diversify (22), presenting a challenge for policy makers on issues such as retirement adequacy, healthcare spending, long-term care, and psychosocial issues such as ageism (23)(24)(25)(26)(27)(28). Consequently, local policies need accurate data to formulate appropriate long-term health plans for the population. To circumvent challenges related to the rise of aging in Singapore, the government convened an 'Inter-ministerial Committee on the Ageing Population' in 1999 to produce a report on the challenges, opportunities, and a policy roadmap to prepare for a rapidly aging population (29). The report underscored the need for nationally representative cohort studies to inform policy (29). There are several purposes of the survey, one of which addresses the paucity of longitudinal studies with nationally representative cohorts in Singapore (30,31). The RHS was designed to obtain information that offers a clear understanding of aging in Singapore, especially to map longitudinal trajectories of the health, social, and economic development of the aged population. This contribution provides micro-data for policy making and long-range planning for aging, and simultaneously informs research practices about the current state of health adequacy among older Singaporeans.
Its aims are not restricted to local use and are aligned with international efforts to reframe aging (32-35) and with preventive health policy developments, especially during COVID-19 (36-38). Recent efforts in Southeast Asia have amassed several longitudinal surveys of the aged population across the region (39,40). The RHS complements these international efforts by enabling comparative studies of aging across datasets from other countries. The RHS also contributes to the extant literature on the relationship between retirement and wellbeing, detailing trajectories of retirement while accounting for a broad range of pertinent demographic factors and their impact on several health outcomes. --- Study Participants Participants who were unable to provide consent due to physical or mental disabilities required a legally acceptable representative (LAR): approximately 0.79% (n=120) of wave one and 0.57% (n=73) of wave two (Table 1). In particular, the educational qualifications are representative of the baby boomer generation and of Singapore's early beginnings as a third world country in the 1950s (i.e., approximately 20% did not complete elementary education) (23). --- Attrition Attrition is a concern for longitudinal studies, which the RHS circumvents in two ways. First, the RHS oversampled during the initial phase to maximize sample size and ensure a reasonable number of follow-ups. By the second wave, 1.8% (n=270) of participants were deceased and an additional 20.69% (n=3,106) declined follow-up, leaving 77.65% (n=11,727) eligible for the second wave (Table 1). Second, to ensure that the sample remains representative of the wider population, it was refreshed with an additional 1,142 new age-eligible participants (Table 1). Among these new participants, 2.28% (n=26) were proxy respondents and 0.79% (n=9) required LARs.
--- Measurements The survey was catalogued into 10 sections covering a broad range of topics related to physical and mental health, employment and retirement characteristics, financial status, utilization of healthcare and insurance, lifestyles and recreation, and cognitive function. These reflect the psychosocial, socio-economic, and health characteristics of the aging population in Singapore. Health-related items in the RHS assessed for cancer, high blood pressure (hypertension), cholesterol levels, diabetes, arthritis, depression, cognitive function, and dementia. Among these, diabetes, hypercholesterolemia, hypertension, dementia, and depression were identified as the top five chronic non-communicable diseases in Singapore (41). Table 3 provides a brief overview and describes the items measured in each section. To maintain consistency and comparability of data, efforts were made to keep differences between the survey items for the first and second wave to a minimum. However, some changes were required to enhance clarity and relevance for survey respondents, primarily to accommodate new government priorities in policy development. Data collection was outsourced to a global survey company through a public tender process and was conducted via face-to-face interviews lasting between 1.5 and 2 hours. Participants were briefed about the scope and aims of the study and informed that all responses were strictly confidential to maintain anonymity. Because the survey included questions about the respondents' spouse or partner, participants were allowed to disclose information on their behalf or have their partners answer those portions of the survey directly. Participants were contacted within two months of the interview if necessary to ensure the accuracy of the information.
Respondents were compensated for taking part in the survey with cash vouchers worth S$50 (circa US$37) upon completing the first wave of interviews, and a cash voucher of S$10 (circa US$7.40) upon completing the second wave. To reinforce data representativeness, weights were assigned to all sample units indicating how the respective population sizes are represented by each unit, with adjustments for non-response. Both longitudinal and cross-sectional weights can be applied to account for within-sample and time-variant characteristics. This ensured that samples were nationally representative of the census for gender, ethnicity, marital status, education, and socio-economic status (SES) in Singapore. Although no physical examination was conducted during the interview process, the study was validated by forming limited linkages to relevant administrative data of the respondents, which also circumvented missing data. The RHS sought approval from respondents to access administrative data from the Ministry of Health. This enabled cross-referencing of health data such as mortality, morbidity, comorbidities, and disability. Additionally, this linkage gave information about public healthcare utilization such as usage of community clinics, outpatient visits, and medical expenditure. The linkage consent rate was 94% for wave one and 96% for wave two. --- Key Findings and Publications The RHS was primarily set up to track retirement and health trends, but this extends into evidence-informed policy agenda setting, formulation, scenario planning, communication, and evaluation across domains of social and family development. Table 3 presents a preliminary snapshot of demographics known to be important to the relationship between retirement and health (age, race, citizenship, marital status, education, housing, and income).
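The census-alignment idea behind the weighting can be illustrated with a minimal post-stratification sketch. This is a simplification, not the actual RHS procedure (which combines design weights with non-response adjustment); the cell labels and function names below are hypothetical.

```python
from collections import Counter

def poststratification_weights(sample_cells, population_share):
    """Weight each respondent so the weighted sample matches known census
    shares for a stratification cell (e.g., a gender x ethnicity cell).
    weight = population share of the cell / sample share of the cell."""
    n = len(sample_cells)
    sample_share = {cell: cnt / n for cell, cnt in Counter(sample_cells).items()}
    return [population_share[cell] / sample_share[cell] for cell in sample_cells]

# Toy example: women are 50% of the population but only 40% of the sample,
# so each woman receives a weight above 1 and each man a weight below 1.
cells = ["F"] * 4 + ["M"] * 6
weights = poststratification_weights(cells, {"F": 0.5, "M": 0.5})
```

After weighting, the weighted share of each cell equals its census share, which is the property the cross-sectional RHS weights are described as guaranteeing.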
The data included adults from 45 years of age to enable researchers and policy makers to track retirement trajectories and transitions. The proportions of ethnicities were also representative of the overall ethnic composition in Singapore, with a Chinese majority (42,43). Broadly, the RHS includes participants from low and middle socio-economic status, which is generally representative of the wider society. A large proportion of the population resides in mid-range priced public housing subsidized by Singapore's Housing and Development Board (HDB) rather than more expensive private housing (42). Also, a smaller number attained higher education (i.e., post-secondary, degree, etc.), which is reflective of the characteristics of local baby boomers born in the early 1950s (44). We present a preview of health and retirement descriptors over two waves (Table 4). For comorbidities, high cholesterol and high blood pressure are the most prominent clinical conditions, followed by diabetes. Impairment in activities of daily living (ADL) was relatively consistent across most activities, but difficulties were relatively higher for managing finances and indoor mobility. A unique feature of the RHS is that it accounted for the intention to seek work as well as employment status. While there are studies that examine the intersection between work and wellbeing, few articulate how underlying motivations to engage in employment impact health outcomes among retirees (45). In this data, most participants were either still employed, or unemployed with no desire to seek employment (Table 4). A report from the Central Provident Fund (CPF) in 2019 based on RHS data found that 42% of participants did not withdraw their pension funds, taking advantage of the interest rates of their pension scheme.
Up to 51% of those who withdrew from their pension deposited the monies into private savings accounts and finance companies, reflecting a desire for liquidity among older cohorts. Decisions to withdraw or retain finances in the pension depend on the trade-off between meeting current financial needs and investing pension funds for retirement. In the same report, 86% of working participants opted to transition into partial retirement before full retirement. Amongst them, 58% preferred to enter partial retirement by gradually reducing their work hours, and 33% preferred to reduce their workload. Beyond this, other disciplines such as health and epidemiological research have used data from the RHS. Another study examined the association between retirement intentions and employment after the statutory age of 67 in Singapore, finding that older adults with higher SES had a decreased risk of unemployment, while those from the manufacturing sector faced an increased risk. This suggests that health care services need to help older adults clarify their intentions and attitudes toward retirement to prepare for productive aging. Additionally, two studies demonstrated that the lack of intention to seek employment increased the risk of developing disability over a two-year period, and conversely, those actively looking for work demonstrated a higher probability of recovering from a disability. Engaging in meaningful work activities may help adults with disabilities recover toward independence, which suggests that therapeutic efforts for disabilities should consider the benefits of purposeful engagement toward employment. Beyond retirement and health, the RHS has also fostered collaborations with other fields of study, exemplifying its contribution to the broader literature. Recently, a published study utilized the RHS to project long-term care needs.
The researchers modelled not only functional disability but also social factors such as isolation and living arrangements (19). Including such social factors in projections of long-term care has received little attention, especially for ethnic minorities in Southeast Asian societies (46-48). They found that physical disability was projected to increase five-fold, and social isolation to escalate four-fold, over the next 40 years (19). The study also found ethnic disparities in social functioning: Malays were more likely to be socially isolated than Chinese after adjusting for demographic variables (19). Therefore, social functioning and ethnicity are potential factors to consider in local long-term care policies. --- Strengths and Weaknesses The RHS has several strengths and weaknesses. It is the first and largest nationally representative longitudinal study of aging in Singapore, providing significant detail on the health, psychosocial, and socio-economic processes of aging. The sample is nationally representative, with weights that closely approximate the socio-economic and ethnic composition of Singapore's population, and therefore provides valuable insights for evidence-based policy interventions at a national level. The study's biennial design ensures that longer-term within-subject effects can be accounted for during modelling, with unbiased estimates of factors affecting health outcomes across the study population. Furthermore, the interview questions were carefully designed to provide a rich resource for researchers undertaking research into the social wellbeing of seniors in Singapore. For instance, the survey design introduced an innovative focus on family structure. Finally, efforts were made to validate the survey by linking respondents' administrative data, which cross-references responses to ensure high reliability and representativeness of the data.
However, the RHS is not without limitations. Administering a questionnaire with such a plethora of topics proved challenging, and it is difficult to strike a balance between the breadth of the survey and its ability to track aging factors in greater detail. The experience of aging can vary over time, and thus the importance of different aspects of life can change as a function of social and environmental pressures (49-51). Furthermore, the survey may need to consider that cohorts are not always homogeneous, and aging can be experienced differently according to an individual's lifetime experiences (49,50). This means that standardized procedures for altering survey items and protocols to fit the changing socio-economic landscape of aging need to be articulated. Table 1 notes: b Indicates the number of respondents who declined, after completion of Wave 1, to be followed up in Wave 2. c Legally acceptable representatives. This project laid the groundwork to design resilience programs for older adults to promote recovery from disabilities. --- Frequent admission to public hospitals This project sought to distil the psychosocial factors associated with frequent admission to public hospitals in Singapore. Insights from this study will lay the groundwork to design social service and community programs to decrease the risk of frequent readmissions. --- Ethics approval for the RHS was granted by the Health Promotion Board Medical and Dental Board (HP24:03/31-2). --- Section C: Employment Status, History and Retirement Respondents' employment status and characteristics, job history, workplace features, and employment characteristics of spouse/partner.
--- Section D: Financial Background and Status Comprehensive assessment of assets, primary sources of income, residential property characteristics, attitudes toward housing, debt, loans, liabilities, bequests, investments, and life insurance, including the financial information of the spouse or partner. --- Section E: Sources of Financial Support and Subsidies Other sources of support, including financial assistance from spouse, children, agency subsidies, family, and ad-hoc and regular transfers of funds. --- Section F: Household Expenditure Amount spent on food, transport, recreation, utility bills, and rent. --- Section G: Health Insurance Plans Covers government schemes (e.g., MediShield, ElderShield), company healthcare benefit plans, family insurance, and any other insurance plans. --- Section H: Healthcare Utilization Consists of dental care, out/in-patient care, nursing costs, local/overseas surgery, home care services, day care usage, family healthcare expenses, usage of health aids, health supplements, and alternative treatments. --- Section I: Lifestyle Factors Linked to Health Assesses physical activity, smoking, drinking, social connectedness, and recreational lifestyles. Restrictions apply to the availability of the study's data. Data were analyzed via limited secure access at the Ministry of Health. Collaborations are encouraged, and interested parties may contact the corresponding author. Author contributions: RN conceptualized the cohort profile, outlined the methodology, analyzed the data, wrote the manuscript, and acquired the funding. YWT wrote the manuscript. KBT provided valuable input into the conceptualization of the cohort profile.
--- Background A population is "hidden" when no sampling frame exists and public acknowledgment of membership in the population is potentially threatening [1-3]. As representatives of hidden populations, people who are infected with HIV/AIDS tend to suffer pressure and discrimination. Due to social environmental pressure and other factors, there are many difficulties in conducting comprehensive and representative studies of the HIV population. To date, the study of this population has mainly relied on interviews and questionnaire surveys based on offline or online population sampling. In most cases, these traditional methods are inefficient, limited in sample size and representativeness, and challenged by privacy concerns and reporting error [4-8]. As a result of the development of Internet technology, people's social lives have undergone a tremendous shift from offline to online. People frequently publish, send, and share information in various virtual communities [9], thereby generating large amounts of data concerning online activity, which can be useful for the study of hidden populations. Through analyzing such data, it is possible to uncover the behavior patterns of hidden groups effectively, particularly as the number of online community users is unprecedentedly large, and it has been found that people are usually more honest and trusting when talking online [10-12]. It is expected that characteristics extracted from large-scale online community data may be more reliable, representative, and broad than those derived from offline data. Hidden populations gather in all kinds of virtual communities, and many scholars recruit or investigate hidden populations online [13,14], especially by recruiting respondents through links on online social networks [15].
In recent years, there have been many studies of hidden populations that have used snowball sampling and respondent-driven sampling (RDS) in online communities [16-18]. However, studies that directly analyze hidden populations' online data in virtual communities are rare, and existing studies have mostly investigated social support for the targeted population. For example, Winefield examined the content and frequency of messages in an Internet support group to analyze the emotional support of women with breast cancer [19]. Im et al. used thematic analysis to explore the social support of patients with cancer in Internet cancer support groups (ICSGs) through an online forum [20]. Coursaris conducted content analysis of postings from a selected online HIV/AIDS forum to assess the types and proportions of social support exchanged among the HIV population [21]. Instead of discussing social support for hidden populations, in this study we try to understand the multidimensional characteristics of a hidden population by analyzing the massive amount of data generated in the largest Chinese online community, Baidu Tieba. Specifically, we aim to extract features of the online users in the HIV group with regard to various aspects, including temporal patterns of online activity, social network structure, community structure and its connection to social distance and similarity of content, emotional tendency, etc. Most of these characteristics are typically difficult to study with traditional survey methods. Therefore, online data mining serves as an important supplement for the study of hidden populations, allowing researchers to investigate multidimensional characteristics of hard-to-access groups with unprecedented richness of information. --- Methods --- Data sources As the world's largest Chinese online community, Baidu Tieba has attracted a large number of social groups based on common interests [22].
Baidu Tieba is provided by Baidu, the dominant Chinese search engine company, established on December 3, 2003. It functions by having users search for or create a bar (forum) by typing a keyword; if the bar has not yet been created, it is created upon the search. A "bar" is a forum providing a place online where users can interact, covering topics related to games, films, popular stars, books, news, diseases, etc. Currently, Baidu Tieba has more than 20 million bars, and the number of active users has reached 300 million [23]. To collect activity data on the HIV population in the online community, we chose the largest bar related to HIV on Baidu Tieba, "HIV bar" (http://tieba.baidu.com/f?kw=hiv), and used Scrapy, a fast web-crawling framework, to extract the data we needed from the webpages. By carefully designing the crawler, we were able to retrieve a complete dataset from the HIV bar covering all records from January 2005 (when it was created) to August 2016. The dataset contains user information, content of posts, and the complete text of comments and replies. The collected data are saved in a local PostgreSQL database as three tables (Table 1), comprising a total of 72,328 user records, 76,865 posts, and 1,726,227 comments. There is considerable heterogeneity in the number of posts and comments generated by each user: while the majority (80%) of users wrote fewer than 4 posts and 15 comments, a small proportion of users actively generated a large number of posts and comments. The distribution of users' comments and replies is shown in Fig. 1a. Based on the times of users' posting in the HIV bar, we can analyze the temporal characteristics of the online HIV-related group and its differences from other groups. As shown in Fig.
1b, compared with news-related users and men who have sex with men (MSM)-related users (based on a large archive of data retrieved from MSM-related bars and news-related bars, including 270,229 users and 6,316,158 posts, respectively), the peak posting time for HIV-related users is 22:00-23:00, and the lowest period is 3:00-5:00. It is worth mentioning that while the posts of ordinary users (news-related) decline from 20:00, those for the two representative hidden populations, MSM-related and HIV-related users, are on the rise. Initial inspection of the posts reveals that ordinary users' online topics center on news concerning politics, the economy, and social issues. For MSM-related users, the motivation for online posting is mostly entertainment and meeting partners, and they are more active around midnight. For HIV-related users, online topics are mostly related to consultation about HIV/AIDS, and as they are more concerned about their health status, they tend to go to sleep earlier than MSM-related people. More sophisticated analysis of the online content can be found in the Results section. --- Community mining For many online activities, it has been shown that users tend to interact with others who are similar to themselves, forming distinct network communities and stimulating studies on influence-based contagion or homophily-driven diffusion [24-27]. However, in-depth examination of the characteristics and dynamics of online community structure for hidden populations has rarely been found in the literature [28,29]. To fill this knowledge gap, in this study we explore possible communities among the HIV population who are active in the HIV bar, and analyze the characteristics and links of different communities to study the organizational model and behavioral characteristics of the HIV population.
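The hourly activity profiles behind curves like Fig. 1b reduce to counting posts per hour of day. A minimal sketch follows; the `time` field name and timestamp format are assumptions about how the crawled posts are stored, not the paper's actual schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical crawled posts; in the study these would come from the
# PostgreSQL posts table.
posts = [
    {"time": "2016-03-01 22:15:00"},
    {"time": "2016-03-01 22:40:00"},
    {"time": "2016-03-02 04:10:00"},
]

def hourly_activity(posts):
    """Count posts per hour of day (0-23) to build a temporal profile."""
    hours = (datetime.strptime(p["time"], "%Y-%m-%d %H:%M:%S").hour
             for p in posts)
    return Counter(hours)

profile = hourly_activity(posts)
```

Comparing such profiles across the HIV-related, MSM-related, and news-related user archives yields the peak (22:00-23:00) and trough (3:00-5:00) periods reported above.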
Using the response and comment relationships between users in the HIV bar as links and users as nodes, we construct an interaction network of the HIV population (hereinafter interaction network, see Fig. 2). As can be seen from Fig. 3a, the degree distribution of this network follows a power law, indicating large heterogeneity in the number of users with which each node interacts. Using community detection in concert with topic modeling is a useful way to characterize communities in an online population [30]. In this study, we implement community mining from two perspectives. First, we classify the users according to the content of their posts, and then we discover user communities according to the topological structure of the interaction network. It is worth noting that the clustering based on text similarity focuses only on the content of users' posts, regardless of links in the interaction network. After extracting all the content in the posts of active users (those who have written more than three posts) in the HIV bar, we preprocess the text (text cleaning and word segmentation), then use the Doc2Vec algorithm to construct the feature vectors of documents [31]. Finally, we implement text clustering with the unsupervised K-means algorithm to divide the users into groups with similar features. Since K-means requires the number of clusters to be set manually, we use the sum of squared distances from all nodes to their cluster centers as a criterion to select the best number of clusters:

$$\mathrm{SSE} = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - u_i \rVert^2$$

where $k$ denotes the number of clusters, $u_i$ denotes the cluster center of cluster $C_i$, and $\lVert x - u_i \rVert$ denotes the distance between node $x$ and the corresponding cluster center $u_i$. $k$ is then determined by minimizing the SSE. We also carry out community mining from the perspective of the network topology.
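The SSE criterion above can be sketched with a plain Lloyd's-algorithm K-means on toy 2-D points (standing in for the Doc2Vec document vectors); this is an illustrative simplification, not the study's pipeline.

```python
import numpy as np

def kmeans_sse(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means; returns SSE = sum_i sum_{x in C_i} ||x - u_i||^2,
    the criterion used to compare candidate values of k."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute each center as the mean of its cluster (keep old center
        # if a cluster is empty).
        centers = np.array([X[labels == i].mean(0) if (labels == i).any()
                            else centers[i] for i in range(k)])
    return float(((X - centers[labels]) ** 2).sum())

# Two well-separated blobs: SSE drops sharply from k=1 to k=2, then flattens.
X = np.vstack([np.zeros((20, 2)), np.ones((20, 2)) * 10])
sse = {k: kmeans_sse(X, k) for k in (1, 2, 3)}
```

On data like this, the SSE curve over k makes the separation between cluster counts visible, which is what the criterion is used for when choosing the number of text clusters.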
Based on the structure of the users' interaction network, we choose Infomap [32,33], a highly efficient algorithm for detecting non-overlapping communities in directed weighted networks [34,35], to detect communities in the interaction network. The sizes of the two groups of communities we found are shown in Fig. 3b. We can see that while the sizes of the text similarity-based communities are all quite similar, the interaction network-based communities exhibit a wide range of sizes. To explore the relationship between the text clusters and topological communities, we first use a topic modeling algorithm to extract the topics of documents and measure the similarities of topics among users in the same cluster. Since the Latent Dirichlet Allocation (LDA) model [36,37] requires the specification of the number of topics, the Hierarchical Dirichlet Process (HDP) model [38,39], which is derived from LDA and can automatically determine the optimal number of topics, is used for topic extraction in this study. Then, for each cluster, we calculate the average topic similarity, which is the average of the similarities between all pairs of users in a cluster:

$$S(C) = \frac{2}{n(n-1)} \sum_{i,j \in C} s(i,j)$$

where $n$ denotes the number of users in a cluster $C$, and $s(i,j)$ denotes the topic similarity between users $i$ and $j$. To measure the social distance of users in each community, we calculate the network efficiency [40], defined as:

$$E(G) = \frac{2}{n(n-1)} \sum_{i \neq j \in G} \frac{1}{d(i,j)}$$

where $d(i,j)$ denotes the length of the shortest path between nodes $i$ and $j$. In the figures of this paper, the average topic similarity is denoted by S and the network efficiency by E. --- Text mining To analyze popular terms (vocabulary) in HIV communities and the context in which these words are used, we use keyword discovery to extract the words the HIV population frequently posts online.
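The network efficiency E(G) above can be computed with breadth-first search over an unweighted, undirected graph; a minimal sketch (the study's network is directed and weighted, so this is a simplification, with unreachable pairs contributing zero):

```python
from collections import deque

def network_efficiency(adj):
    """E(G) = 2/(n(n-1)) * sum over unordered pairs of 1/d(i,j).
    adj maps each node to a set of neighbors; d comes from BFS.
    The loop sums over ordered pairs, so we divide by n(n-1)."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))

# A triangle is maximally efficient: every pair is at distance 1, so E = 1.
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
path = {1: {2}, 2: {1, 3}, 3: {2}}  # E = (1 + 1 + 1/2) / 3 = 5/6
```

Higher E means shorter average social distance within a community, which is how the measure is read in Figs. 4a-4c.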
It is worth mentioning that these HIV communities are those found by community mining based on the content of posts. In this study, we discover popular keywords according to their TF-IDF values. Based on the word segmentation results, we calculate the TF-IDF value of each word, so that popular keywords can be selected after removing stop words. For the purpose of this study, we define the popular keywords as the top 100 meaningful words with the largest TF-IDF values in a community. In addition, to analyze the topics the HIV population tends to discuss, and the purpose of the members' online activity, topic detection is then carried out. Topics are discovered using the HDP model, and we develop document clusters based on topic similarity. Thereby, we can conveniently identify the themes of clustered documents, i.e., the topics addressed by different users. --- Sentiment analysis Sentiment analysis concerns the analysis, processing, induction, and reasoning of emotionally subjective text, aimed at discovering the attitude of the speakers on certain topics or their emotional state. By mining the text content of posts of users in the HIV bar, we can analyze the emotional state of this group. While both supervised and unsupervised learning can be used in this case [41-43], in this study we mainly adopt a rule-based method to analyze the emotions of each user in the HIV group and the sentiment tendencies of different communities, to uncover the emotional characteristics of the HIV population [44]. Sentiment word extraction is mainly based on two popular Chinese sentiment dictionaries, the HowNet lexicon and the National Taiwan University Sentiment Dictionary (NTUSD), both of which have a proven ability to achieve high precision in Chinese sentiment analysis [45,46]. All posts by a user constitute a single document.
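The TF-IDF keyword-discovery step can be sketched as follows. This is a minimal illustration over pre-segmented token lists (the study operates on Chinese word-segmentation output and keeps the top 100 terms per community; here `top_n` is shrunk and the documents are hypothetical English stand-ins).

```python
import math
from collections import Counter

def top_keywords(docs, stop_words=frozenset(), top_n=3):
    """Rank terms by summed TF-IDF across a community's documents and
    return the top_n terms after dropping stop words."""
    n_docs = len(docs)
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    scores = Counter()
    for doc in docs:
        tf = Counter(doc)
        for term, f in tf.items():
            if term in stop_words:
                continue
            idf = math.log(n_docs / df[term])
            scores[term] += (f / len(doc)) * idf
    return [t for t, _ in scores.most_common(top_n)]

docs = [["test", "hiv", "worry"], ["test", "hiv", "hope"], ["the", "weather"]]
keywords = top_keywords(docs, stop_words={"the"})
```

Terms concentrated in few documents (high IDF) outrank terms spread across many, which is why counseling-specific vocabulary surfaces as "popular keywords" while ubiquitous words do not.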
According to the text of each document, we extract the sentiment words, then calculate the sentiment score based on the frequency and intensity of the sentiment words it contains. Positive words score from 1 to 5, negative words score from -1 to -5, and the absolute values represent sentiment intensities. If the sum of positive scores is greater than the sum of negative scores, the document is considered positive. Finally, each community is assigned a positive and a negative score, representing the percentages of positive and negative users, respectively. The precision and correctness of the dictionary-based sentiment analysis are further validated by comparison with human judgments on a sample of 100 posts randomly selected from the data. As one can see from Table 1, Additional file 1: Table S2 and Table S3, the precision and recall rates are above 85% and 89%, respectively. --- Results --- Community mining based on text similarity Using text clustering, we find 150 clusters, each of which corresponds to a network (community) formed by interaction between users. We find a positive correlation between the average topic similarity of each cluster and the network efficiency of the cluster's corresponding community, as shown in Fig. 4a. The correlation coefficient is r = 0.70 (p < 0.001, non-logarithmic, the same below), indicating that the higher the network efficiency of the text-based community (cluster), the greater the average topic similarity. That is, the closer the association within the community, the more similar the topics the community members discuss. Moreover, the average topic similarity of each text-based community also shows a positive correlation with the size of the largest weakly connected component [47], the maximal sub-graph in which there is an undirected path between every pair of vertices; the correlation coefficient is 0.74 (p < 0.001).
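The scoring rule just described can be sketched directly. The miniature lexicon below is a hypothetical stand-in for HowNet/NTUSD (positive words score +1 to +5, negative words -1 to -5):

```python
# Hypothetical miniature lexicon; the study uses HowNet and NTUSD.
LEXICON = {"hope": 3, "grateful": 4, "worry": -3, "fear": -4, "cannot": -1}

def document_polarity(tokens):
    """A user's posts form one document; it is positive when the sum of
    positive scores exceeds the sum of negative magnitudes, else negative."""
    pos = sum(LEXICON[t] for t in tokens if LEXICON.get(t, 0) > 0)
    neg = sum(-LEXICON[t] for t in tokens if LEXICON.get(t, 0) < 0)
    return "positive" if pos > neg else "negative"

def community_scores(user_docs):
    """Fractions of positive and negative users in a community."""
    labels = [document_polarity(d) for d in user_docs]
    p = labels.count("positive") / len(labels)
    return p, 1 - p

pos_share, neg_share = community_scores(
    [["hope", "grateful", "worry"], ["fear"], ["worry", "cannot"]])
```

Each community's positive/negative shares computed this way are the quantities plotted per community in the sentiment results.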
To explore this finding further, we also analyze the correlation between the topic similarity and the network density (the number of connections divided by the number of possible connections) [48]. The results reveal a significant positive correlation (r = 0.79, p < 0.001) between the average topic similarity and the network density, and a weak negative correlation (r = -0.36, p < 0.001) between the topic similarity and the community size. Therefore, the more frequent the interaction between users, the greater the density and efficiency of the users' community, and the greater the similarity among the topics discussed. Comparing the average topic similarity of the largest connected component in a community to the average topic similarity of all users in that community, we find that the topic similarity of the connected component is much greater than that of the community as a whole (Fig. 4b). That is, after excluding the non-connected nodes, the topic similarity within a community increases, because there is a greater difference between the topics discussed by users who do not interact with each other. --- Community mining based on network topology Based on network topology, we find a total of 1,948 communities, of which 1,605 are meaningful (excluding communities with only one node or without links). It can be observed that the degrees of connectivity of these communities are very high: the proportion of fully (weakly) connected communities is 99.88%, i.e., 1,603 of the 1,605 communities are themselves formed by nodes that are all weakly connected. Calculating the network efficiency and the average topic similarity of each topology-based community, we find a significant positive correlation (r = 0.73, p < 0.001, Fig. 4c), indicating that the greater the network efficiency, the higher the topic similarity within the community.
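The r values reported throughout these results are plain Pearson correlations over per-community measurements; a self-contained sketch (toy inputs, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. per-community topic similarity vs. network density."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Applied to perfectly aligned or perfectly opposed sequences it returns +1 or -1; the intermediate values in the text (0.70, 0.79, -0.36, 0.73) indicate the strength and direction of the corresponding relationships.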
This is in line with the finding concerning text-based communities above, validating the positive correlation between network efficiency and topic similarity from a network perspective. --- Community-based text mining After extracting keywords for each community, we find significant overlap of popular keywords between different communities. The keywords appearing in most communities are presented in Fig. 5. We can see that words related to HIV/AIDS counseling and diagnosis, e.g., hope, infection, feel, may, know, appear very frequently. Most of the self-tagged HIV/AIDS patients are willing to share their physical states, as well as their own diagnoses or counseling, on the online social network. In addition, among these popular keywords, negative words, e.g., do not, not, cannot, is not, appear frequently, indicating a negative emotional tendency among the users in online HIV/AIDS communities. Moreover, most negative words are related to anxiety and fear of AIDS, e.g., "These symptoms make me worry, but I do not dare take a test." It is worth noting that the phrase the first time occurs with high frequency, as in "The first time I checked HIV was in Shanghai Xinhua Hospital," "For the first time I kissed a man, and then we got a room in the bathhouse [where people can sometimes call for sexual services in China]," and "I drank last night and had WTGJ [the short form of the Chinese spelling for "anal sex without a condom," i.e., high-risk behavior] for the first time." This indicates that many people who suspect initial infection with HIV or have had first contact with high-risk behavior tend to seek help and advice on a social networking platform first, rather than immediately going to a hospital for blood tests. We find a positive correlation between the topic similarity and the degree of interaction among community members: the closer the community members, the more similar the topics discussed in the community.
Analysis of the topics discussed in these communities can reveal the needs and interests of the HIV population. We analyze the top ten communities with the maximum network efficiency and topic similarity values, and find that topics concerning HIV/AIDS diagnosis and treatment comprise a high proportion of the main topics of the HIV population, as shown in Fig. 6. In addition, it is worth noting that people tend to relieve their emotions in online communities, expressing their anxiety, horror, compunction, gratitude, or other feelings. --- Community-based sentiment analysis Because sentiment analysis of communities based on network topology is very sensitive to the size of the community, we implement the analysis for each user community based on the results of text clustering. In Fig. 7a (where the size of each scatter point represents the size of the community and the color corresponds to the level of topic similarity: green, low; yellow, high), we can see that in most communities the proportion of users with negative emotions is greater than 50%, indicating that most members' emotions in these communities are negative. Moreover, the proportion of negative users in each community is around 60% and has a weak positive correlation with community size (r = 0.25, p = 0.002). We select communities in which emotional tendencies are extreme, i.e., where there are many more positive users than negative users (hereinafter, extreme positive communities), or vice versa (hereinafter, extreme negative communities), to provide a comprehensive analysis of emotions. Specifically, we choose the top five communities in which positive users or negative users, respectively, account for the largest proportion, and extract the popular keywords posted in the different communities according to their TF-IDF values. The results are shown in Fig.
7b, which shows that posts in extreme negative communities exhibit greater similarity, with a percentage of different popular keywords of only 35.75%; that is, 64.25% of the keywords discussed in all these communities overlap. In addition, we find that most of these keywords are about HIV/AIDS testing and treatment, physical condition, and family. In contrast, the percentage of different popular keywords is as high as 56% in extreme positive communities, and most keywords are about HIV/AIDS symptoms, counseling, testing, treatment, sentiment, and family. Comparing popular keywords between the extreme positive and extreme negative communities, we can see that in the extreme negative communities more words are related to horror, anxiety, repentance, and other negative emotions, e.g., acute, high risk, side effects. However, in the extreme positive communities, users tend to express confidence, inspiration, gratitude, hope, and other positive emotions, and most popular keywords are about HIV/AIDS diagnosis and active treatment. --- Discussion --- Summary of findings In this paper, we analyze the mentality, behavior, and needs of the HIV population based on online communities formed by similar text content or by social interactions, to understand the current living conditions and emotional status of the HIV/AIDS-related population online. Based on community data mining, we have found that there is a positive correlation between the average topic similarity of an HIV community and the degree of internal interaction; that is, users discussing similar topics are more likely to interact, and vice versa. In HIV communities, the topics of the online HIV groups are primarily related to HIV/AIDS diagnosis and treatment, and negative emotions dominate in these communities. (Fig. 5: common popular keywords appearing in more than 80% (left) and 50% (right) of HIV communities; see Additional file 1: Table S4 for details. Fig. 6: distribution of topics in the HIV communities; the inner pie represents topics, and the outer ring represents popular keywords in each topic.) --- Discussion of the main results While it is a longstanding hypothesis that there is a correlation between similarity and friendship in human social activities [49][50][51], we demonstrate with real data that this is the case for online hidden populations: the degree of interaction and the topic similarity among users are positively correlated in HIV-related online communities. Moreover, this finding may provide insights for general social network studies, for which there may also be a relationship between interaction content and network topological structure. This study reveals that most topics of concern to the online HIV community are related to HIV/AIDS testing, treatment, and HIV-related consultation, consistent with existing studies in which social support for the HIV population has been studied through text-only analysis [52]. We also find that many users who suspect initial infection with HIV or have first contact with high-risk behavior tend to seek help and advice on social networking platforms as their first choice. Because of the traditional conservative culture in China, people who are infected with HIV bear considerable social pressure and discrimination. In China, it is difficult to investigate the HIV population and to understand their needs accurately through traditional survey methods. However, we have found that the main topics of the online HIV group are related to HIV/AIDS diagnosis and treatment, indicating that the HIV population tends to acquire HIV knowledge and seek help through online channels.
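The keyword-overlap percentages reported in the results (e.g., 64.25% of popular keywords shared across the extreme negative communities) can be computed in more than one way; a minimal sketch, assuming overlap is measured as the keywords common to all selected communities relative to their union (toy keyword sets, not the paper's data):

```python
# Sketch: share of popular keywords common to all selected communities,
# relative to the union of their keyword sets (an assumed formula).
def overlap_percentage(keyword_sets):
    common = set.intersection(*keyword_sets)
    union = set.union(*keyword_sets)
    return 100 * len(common) / len(union)

# invented keyword sets for three toy "extreme negative" communities
negative_communities = [
    {"test", "treatment", "family", "acute", "high risk"},
    {"test", "treatment", "family", "side effects"},
    {"test", "treatment", "family", "symptom"},
]
print(f"overlap: {overlap_percentage(negative_communities):.2f}%")
```

Under this definition, three shared keywords out of seven distinct ones gives roughly 42.86% overlap; the complement corresponds to the "percentage of different popular keywords" quoted in the text.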
All of this supports the notion that we can provide more effective and timely help for the HIV population through text mining of the data they post online, and it is important to improve support from online services for HIV/AIDS consultation and diagnosis to alleviate privacy concerns and social discrimination. Through sentiment analysis, we can see that negative emotions dominate in HIV communities, and these emotions are mostly related to the anxiety of initially infected patients, who tend to seek help and advice on social networking platforms as their first choice. To foster better social management, relevant agencies should pay more attention to the extreme negative communities. It is important to put these potentially HIV-infected groups under constant surveillance and to analyze their emotions continuously, so that we can understand their needs and provide relevant guidance and interventions promptly. With the rapid expansion of Internet use in China, a large number of people who are interested in HIV-related topics are now actively engaged online. We have shown that there is great potential in extracting behavioral characteristics of such populations by analyzing the content and interaction networks generated online. --- Strengths and limitations By analyzing the text content and social network of the HIV group from the largest Chinese online community, we have demonstrated the usefulness of online data mining for the systematic investigation of the characteristics of hidden populations. There are several advantages to this methodology. First, the number of users in online communities is fairly large in comparison to the sample sizes achieved through traditional survey methods for hidden populations, such as people infected with HIV.
Second, the richness of the data provided by online communities enables researchers to extract multidimensional characteristics of the target population, including features that are traditionally very hard to infer, e.g., social networks, emotional needs, etc. Third, the anonymity of online communities mitigates privacy concerns, and users can express their views freely, which improves the accuracy of studies on hidden populations. However, it is worth noting that it remains to be validated whether the findings concerning users in online communities can be extrapolated to the target population in real life. The representativeness of the online populations in topic-specific communities, differences in population characteristics across social networking platforms, and the design and implementation of public health intervention strategies are yet to be studied in the future. --- Conclusion By analyzing the text content and social network of the HIV group from the largest Chinese online community, we have demonstrated the usefulness of online data mining for the systematic investigation of the characteristics of hidden populations, including temporal patterns of online activity, social networks, community structure and its connection to social distance and similarity of content, emotional tendency, etc. The methodology is of particular importance to China, which is experiencing a heavy burden of HIV infection, with a surprisingly high number of new infections among certain populations such as MSM [53]. The rapid expansion of Internet use and increasing online engagement thereby offer new opportunities for the study of hidden populations with unprecedented sample sizes and richness of information. Our study also suggests that public health agencies should promote online education to reduce high-risk behaviors and expand channels for HIV/AIDS counseling and testing, so that those who suspect initial infection can seek advice.
In addition, psychological counseling and guidance for HIV/AIDS patients are also needed, as newly infected patients are greatly worried about their condition and are psychologically fragile [54]. --- Availability of data and materials The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. --- Additional file Additional file 1: Supplementary Information. (DOCX 55 kb) --- Abbreviations AIDS: Acquired immunodeficiency syndrome; HDP: Hierarchical Dirichlet Process; HIV: Human immunodeficiency virus; LDA: Latent Dirichlet Allocation; MSM: Men who have sex with men --- Authors' contributions XL conceived and designed the research. CCL and XL performed the research and analyzed the data. XL and CCL wrote the paper. Both authors read and approved the final manuscript. --- Ethics approval and consent to participate The study was approved by the Medical Ethical Committee of the Institutional Review Board (IRB) at Peking University (IRB00001052-16016). The study itself does not involve any physical, social, or legal risks to the participants; the data are anonymous, and the confidentiality of participants' information has been strictly protected. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. --- Abstract Background: Traditional survey methods are limited in the study of hidden populations due to their hard-to-access properties, including the lack of a sampling frame, sensitivity issues, reporting error, small sample sizes, etc. The rapid growth of online communities, whose members interact with others via the Internet, has generated large amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information.
In this study, we seek to understand the multidimensional characteristics of a hidden population by analyzing the massive data generated in an online community. Methods: By carefully designing crawlers, we retrieved a complete dataset from the "HIV bar," the largest bar related to HIV on the Baidu Tieba platform, covering all records from January 2005 to August 2016. Through natural language processing and social network analysis, we explored the psychology, behavior, and demands of the online HIV population and examined the network community structure. Results: In HIV communities, the average topic similarity among members is positively correlated with network efficiency (r = 0.70, p < 0.001), indicating that the closer the social distance between members of a community, the more similar their topics. The proportion of negative users in each community is around 60%, weakly correlated with community size (r = 0.25, p = 0.002). It is found that users suspecting initial HIV infection or first coming into contact with high-risk behaviors tend to seek help and advice on the social networking platform, rather than immediately going to a hospital for blood tests. Conclusions: Online communities have generated copious amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. It is recommended that support through online services for HIV/AIDS consultation and diagnosis be improved to avoid privacy concerns and social discrimination in China.
--- Introduction As in other healthcare systems across the world, the Canadian healthcare system felt the effects of the COVID-19 pandemic from its earliest months in 2020. The impact was widespread and included not only an increase in hospitalizations of patients with the virus but also other ripple effects, including scarcities of supplies such as Personal Protective Equipment (PPE) and staffing shortages. Additionally, throughout the pandemic, healthcare providers in British Columbia (BC) were faced with balancing professional duties and responsibilities with personal considerations, including public backlash [1] and their own health issues, which could include the effects of long COVID. In many cases, healthcare workers were redeployed because staff became sick or needed to isolate. Numerous surgeries and healthcare visits were postponed, and there was a shift toward virtual visits and/or decreased visits overall. At this time, many morally complex issues were encountered throughout the healthcare system, including ethically fraught decisions related to resource allocation, duty to care, vaccine prioritization, exacerbations of social and systemic inequities, and being required to make choices when data were suboptimal [2][3][4]. Evidence from previous pandemics, such as the 2003 severe acute respiratory syndrome (SARS) outbreak, the 2009 H1N1 influenza outbreak, and the 2014 Ebola outbreak, indicates that having to face morally complex situations has a strong impact on healthcare workers' emotional well-being and can lead to increased moral distress and injury [5][6][7][8][9][10]. Moral distress occurs when an individual identifies the ethically appropriate action, but that action conflicts with personal values, perceived obligations, or institutional constraints [11,12]. When moral distress is severe and left unresolved, it may lead to moral injury [13].
Recent studies have shown that, as with previous public health emergencies, the COVID-19 pandemic has also led to moral distress experiences among nurses [14][15][16][17][18], physicians [15][16][17][18][19][20], and non-clinical healthcare workers [17,18]. Additionally, it has been suggested that COVID-19-driven moral injury remains stable for three months, even while moral distress declines [19]. However, the cause and nature of the moral distress related to the COVID-19 pandemic require further exploration, to determine how this experience manifests across different geographical regions and stages of the pandemic response. This study is unique in that it sought to gain a better understanding and broader view of the moral distress experiences of BC healthcare workers (HCWs) during the COVID-19 pandemic. To achieve this goal, participants from varied professional backgrounds were invited to complete online surveys over different stages of the pandemic response. The ultimate aim of the project was to identify effective ways to enhance individual and organizational resilience, in order to support the healthcare system in managing pressures related not only to pandemics but also to other known or unknown potential stressors on the healthcare system, such as climate change events and aging populations. --- Methods --- Study Design In designing the study, we relied on interpretive description methodology, an established approach to qualitative-knowledge development within the applied clinical fields, proposed by Thorne and colleagues [21][22][23]. Interpretive description supports the process of describing and interpreting the lived world as experienced in everyday situations to capture themes and patterns. Therefore, its goal is not to study a representative sample to allow for generalizing findings to a wider population but to explore, describe, and explain human experience.
The study design was also informed by two contextual characteristics of the COVID-19 outbreak: (1) the existence of significant differences in terms of impact, infection rates (i.e., 'waves'), and management across geographical regions; and (2) the unprecedented dynamic nature and scale of the impact on society. Thus, survey questions were original and unvalidated, developed based on qualitative feedback from the population under study instead of using established moral-distress surveys, which were developed and validated under different circumstances and with different populations. In the first survey, respondents answered a series of open-ended questions, and the results were analysed for common themes. The most common themes were then used to construct the second and third surveys, which were deployed to validate and assess changes in the expressed themes over time. Ethics approval to conduct this study was obtained from the University of British Columbia's Behavioural Ethics Board (H20-01104). --- Participant Recruitment We used purposive sampling, a strategy commonly employed in qualitative research, to identify information-rich cases [24]. The study was restricted to one Canadian province, BC. Individuals were eligible to participate if they were employed by one of the six provincial healthcare authorities that provided clinical care (including in-patient care, long-term care, pre-hospital care, and out-patient clinics) during the COVID-19 pandemic. There were no specific exclusion criteria other than working in a health authority, as we wished to capture experiences of moral distress at all levels of the healthcare system. To identify participants, study team members disseminated invitation letters through list-serves, posters, and presentations and by snowball sampling. --- Data Collection Three surveys were distributed to BC healthcare employees between May 2020 and July 2021 (see Supplementary S1 and S2). 
All surveys included demographic questions such as age, gender, health authority, religious affiliation, ethnicity, role, number of years in the role, and area of service and questions related to moral distress. The questions related to moral distress varied between surveys, as they were adjusted to align with different stages of the provincial response to the pandemic (Figure 1) and to probe themes arising in the concurrent analysis. It is worth noting that, while the vast majority of healthcare workers were vaccinated for COVID-19 in early 2021, the vaccine order that mandated all BC HCWs to be vaccinated (or be put on unpaid leave) came into effect on 26 October 2021, after these surveys were completed.
Figure 1. Timeline of surveys superimposed on a graph depicting the waves of COVID-19 cases in BC, Canada as published by BC Centre for Disease Control (https://experience.arcgis.com/experience/a6f23959a8b14bfa989e3cda29297ded, accessed on 20 June 2022). Most relevant public-health measures in effect during each survey period are also summarized [25]. Survey 1 was deployed between 8 May and 28 May 2020, at a time when the number of COVID-19 cases in BC had just started to increase, several restrictive measures were in place, and there was heightened social uncertainty that precipitated coping behaviours, such as panic buying (Figure 1).
This survey included mostly open-ended questions that were then coded for common themes. The most common themes were used to develop the closed-ended questions for Surveys 2 and 3 to validate and assess changes in these findings over time. Free-text options were also included in Surveys 2 and 3 to allow participants to describe new experiences. Survey 2 was deployed between 22 October 2020 and 17 March 2021. During this period, there was a significant and stable increase in the number of COVID-19 cases, and gatherings were still restricted, but some facilities were able to operate to some extent, and schools and daycares were open (Figure 1). The third survey was distributed between 18 March 2021 and 31 July 2021, at a time when the number of COVID-19 cases reached the highest peak in BC and started to decrease again, the widespread public vaccination program was launched, and some restrictions, such as on small gatherings, were lifted (Figure 1). --- Data Analysis Sociodemographic data were analysed using descriptive statistics. Qualitative data analysis was conducted simultaneously with data collection, each informing the other in an iterative process. The analysis followed Braun and Clarke's 6-step framework [26] to identify themes and patterns of meanings across the dataset. This method involves the following steps: (1) reading and familiarization, (2) coding, (3) generating themes, (4) reviewing themes, (5) defining and naming themes, and (6) finalizing the analysis [26]. --- Results A total of 135 HCWs completed Survey 1, 320 completed Survey 2, and 145 completed Survey 3. As shown in Table 1, the majority self-reported as White females between the ages of 31 and 60. Participants represented a diverse collection of professional backgrounds including nurses, physicians, paramedics, allied health professionals, researchers, administrative staff, managers, and executives. 
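The coding-and-tallying steps of such a thematic analysis (steps 2 and 3 of Braun and Clarke's framework) reduce, in computational terms, to counting how often each code appears across responses; a minimal sketch, where the codes and coded responses are invented for illustration:

```python
# Toy sketch of tallying coded themes across open-ended survey responses
# (codes and responses are invented, not the study's actual codebook).
from collections import Counter

# each response is the set of codes an analyst assigned to it
coded_responses = [
    {"capacity_to_serve", "risks_self"},
    {"risks_family"},
    {"capacity_to_serve", "protocol_disagreement"},
    {"capacity_to_serve"},
]
theme_counts = Counter(code for codes in coded_responses for code in codes)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}/{len(coded_responses)} responses")
```

Because a single response can carry several codes, the counts are tallied per code rather than per response, which is also why (as the paper notes later) theme frequencies cannot simply be converted into percentages of respondents.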
Most participants across all surveys stated that they were experiencing moral distress in their work (Survey 1 = 60%, Survey 2 = 69%, and Survey 3 = 68%). When asked to describe their experiences, several interrelated themes emerged from the open-ended responses of Survey 1 and continued to be expressed by respondents of Surveys 2 and 3 (Figure 2), as described in more detail in the next sections. (Table 1 notes: numbers may not equal 100% due to rounding and, for race/ethnicity, because participants could select more than one option; Indigenous (non-Canadian) was not offered as an option in Survey 1.) --- Experiences of Moral Distress --- Theme 1: Healthcare Professionals' Capacity to Serve Patients As shown in Figure 2, the main theme emerging from Survey 1 centred on the HCWs' capacity to serve patients.
This theme included three sub-themes: changes introduced compromise the ability to provide patient-centred, compassionate care; pandemic protocols prevent HCWs from carrying out their professional duties; and the effectiveness of telehealth (Figure 2). As explained by a Survey 1 participant: "The very pillars of healthcare and social work practice: patient centered-care, consent to accept risks, right to self-determination/agency, are no longer upheld". These themes were corroborated and further explicated by participants responding to Surveys 2 and 3 (Figure 3). For example, a Survey 2 participant wrote: "I've witnessed a steep decrease in quality of care that can be provided by myself and other colleagues due to restrictive measures during an outbreak (... )". A Survey 3 participant similarly wrote: "We see clients based on a waiting list; however, clients eligible for service were skipped over if they needed interpreters or had more complex needs which were difficult to meet under pandemic management protocols". Some respondents across all three surveys expressed concerns about the effectiveness of telehealth and how a shift towards virtual visits could impact their "capacity to serve patients" in certain populations (Figure 3). For example, a Survey 2 participant stated: "I am very limited in my face-to-face encounters with my clients due to COVID (related) precautions and many of the ways I would have been able to support them are currently on hold. Almost none of my clients have phones/other means to tele-communicate so I really rely on face-to-face encounters". --- Theme 2: Risks A second theme emerging during the initial stage of the pandemic response (Survey 1) centred on the risks that healthcare professionals were facing (Figure 2). One sub-theme focused on concerns over risks to themselves, while a second sub-theme focused on risks to colleagues, family members, and friends (Figure 2).
For example, a Survey 1 participant wrote "I had to express my concerns to senior leadership and refuse to participate in a plan that was putting [my colleagues] at risk". These sub-themes were corroborated by participants of Surveys 2 and 3 (Figure 4). Thus, one Survey 2 participant wrote "I felt very scared about possibly having to provide direct clinical care, which I don't usually do. My family has risks for severe COVID-19 and poor outcomes". While a Survey 3 participant wrote: "I was asked by my manager to swap roles with a colleague in a high-risk area of the hospital early on during the pandemic due to their pre-existing health conditions. I felt uncomfortable doing so because little was known about how COVID-19 was transmitted. It felt unfair that I was asked to put myself at risk in place of another colleague".
One additional sub-theme related to the theme "Risks" also emerged from Surveys 2 and 3, which centred on participants being required to work regardless of challenging personal circumstances (Figures 2 and 4). Thus, one Survey 3 participant wrote: "Personal and family struggles related to COVID-19 stress has been difficult. I have felt forced to put my work over my family because it is so busy, and it will let my team and patients down if I call in sick to take care of my family's mental health". Another participant wrote "I have friends that work at the hospital who have immunocompromised spouses or roommates that were still required to work in high-risk areas (like emergency department) and management was not supportive of them temporarily moving to a lower risk area". --- Disagreements with the New COVID-19 Protocols A third theme was focused on disagreements with the new COVID-19 protocols (Figures 2 and 5). Participants stated that their disagreements with the new protocols were causing moral distress. For example, a Survey 2 participant explained that "It is not clear that there is a balance between the cost/benefit of some significant changes". Similarly, a Survey 3 participant was concerned that "The restrictions were not always supported by logic or the current epidemiology". The reasons for the disagreements were diverse, including disagreements related to scientific understanding, operational concerns, or the provision of care. --- Impact of Moral Distress The impact of moral distress on survey respondents is shown in Figure 6. By far the most common impacts were reports of stress, anxiety, and irritability.
Many also expressed feelings of helplessness, had difficulty sleeping, and reported that their experience of moral distress had either increased or decreased their ability to empathize with others (both increased and decreased empathy were explained in a negative manner). For example, a Survey 2 participant expressed, "I'm now having literal nightmares about lack of vaccine, lack of PPE etc., especially on the night before I come back to work". While a Survey 3 participant said that they were "Suffering from PTSD and will need counseling. Unfortunately, there is no time to find help right now with the workloads, demands, and endless amounts of needed overtime. We are forced to decide whether we leave our colleagues working short or to put our mental health first. Always, mental health is pushed aside". While yet another Survey 3 participant said "I am burnt out. I would like to leave the healthcare profession. At this point I don't feel that the financial compensation is worth the mental and physical distress". --- Disagreements with the New COVID-19 Protocols A third theme was focused on disagreements with the new COVID-19 protocols (Figures 2 and 5). Participants stated that their disagreements with the new protocols were causing moral distress. For example, a Survey 2 participant explained that "It is not clear that there is a balance between the cost/benefit of some significant changes". Similarly, a Survey 3 participant was concerned that "The restrictions were not always supported by logic or the current epidemiology". The reasons for the disagreements were diverse, including disagreements related to scientific understanding, operational concerns, or the provision of care. --- Impact of Moral Distress The impact of moral distress on survey respondents is shown in Figure 6. By far the most common impacts were reports of stress, anxiety, and irritability. 
Many also expressed feelings of helplessness, had difficulty sleeping, and reported that their Since participants could select multiple themes, it is not possible to calculate percentages. --- Current and Anticipated Ethical Challenges Survey respondents were also asked to describe the main ethical challenges they believed HCWs currently faced or would be facing in response to the COVID-19 pandemic. Table 2 shows responses in descending order of popularity with more detailed explanations of each theme provided below. --- Current and Anticipated Ethical Challenges Survey respondents were also asked to describe the main ethical challenges they believed HCWs currently faced or would be facing in response to the COVID-19 pandemic. Table 2 shows responses in descending order of popularity with more detailed explanations of each theme provided below. --- COVID-19 Fatigue A new theme that arose in Surveys 2 and 3 was the most popular overall in response to the question about current or future ethical challenges. This theme referred to being tired of all COVID-19 related matters. As stated by a Survey 2 participant, "I for sure have COVID fatigue. Definitely, I have compassion fatigue. I am snippy with my colleagues. I am exhausted helping family, patients, and my colleagues deal with their lives and issues. This has been a hard time and serious struggle". --- Collateral Impacts of COVID-19 A second new theme that emerged in Surveys 2 and 3 as an ethical challenge that respondents were facing or could face in the future referred to the collateral impacts of COVID-19, including exposing social inequities in healthcare and effects on the overall population's mental health. For example, a Survey 2 participant wrote: "The collateral impacts of the Covid restrictions and policies are a huge problem and it feels as if it's not being talked about or acknowledged enough beyond the front-line. 
Many healthcare workers see it and worry about it every day and it's extremely upsetting". While another participant wrote: "I am concerned about the social inequities and further impact on families that are already challenged-reduced access to services, technology, mental health, safety".
--- Additional Current or Anticipated Sources of Ethical Challenges Other themes that participants considered a current or future source of ethical challenges were similar to previous answers about moral-distress experiences. These themes again centred on the ability to serve patients, disagreements with the implemented pandemic management protocols, the risks faced by the participants, their colleagues, or family members, and, to a lesser extent, the effectiveness of telehealth. Interestingly, not having a safe environment to discuss disagreements with colleagues or leadership regarding the COVID-19 protocols being implemented was also identified as a current or future ethical challenge.
--- Sources of Support Respondents were also asked to identify the main sources of support they had used to cope with the negative psychological impacts of COVID-19. Informal resources, such as self-care and support provided by colleagues, family members, or friends, were identified as the most-popular sources of support, followed by professional or formal sources of support such as discussions with supervisors or the use of counselors (Figure 7). Finally, respondents to Surveys 2 and 3 were asked to identify the top sources of support they would like to see established by their employer. Mental health supports were the most-popular response for both surveys. These supports included improved access to, coverage of, and quality of mental health support and increased resources for staff wellness, including mindfulness sessions, yoga, gym time, a place to relax, and opportunities to socialize. In Survey 2, improving communications and leadership was identified as the second-most-popular recommendation. This included suggestions such as creating a safe place for discussions about pandemic-related challenges; providing consistent, clear, unbiased, transparent, and personalized communications at regular, timely intervals, scheduled to allow time for planning; and leadership that is receptive, open to listening, interested, present, responsive, supportive, interactive, able to engage with staff, and provides acknowledgement and recognition for staff. This theme was followed by recommendations related to improving workload management (e.g., permitting flexible schedules and work locations, improving staffing levels overall, accommodating leaves and sick time) and increasing financial compensation for all employees, including administrative staff and management, as well as introducing paid wellness days. As a Survey 2 participant explained, "It has been difficult to rest/rejuvenate. A few personal/paid days off would be helpful. The full-time grind is more arduous than normal, for a year now. Increased fatigue/stress/mentally & emotionally exhausted". Interestingly, by Survey 3 the relative predominance of these categories had changed. While improved access to, coverage of, and quality of mental health support remained the most-popular suggested support, workload and financial compensation emerged as the second-most popular, and improving communications and leadership dropped to third place.
--- Discussion These findings offer a snapshot into the moral distress experience of BC HCWs at several time points during the COVID-19 pandemic.
The longitudinal and regional aspects of this study improve our understanding of how moral distress experiences during COVID-19 manifest differently in different contexts and how they evolve over time in response to a continued stressor. The majority of participants who self-selected to complete these surveys stated that they experienced moral distress, which is unsurprising given that they may have been attracted by the topic of this study and the title of the surveys and, therefore, decided to participate because of their current situation at work. The themes identified by the first survey offer an overview of participants' common concerns during the initial stage of the public health response to the COVID-19 pandemic in BC. This initial stage was characterized by heightened uncertainty and the introduction of several restrictive social measures and pandemic management protocols across the healthcare system. In this context, BC HCWs participating in this study stated that they experienced moral distress for two main reasons: the impact that the introduced changes were having on their capacity to serve patients and the new risks related to COVID-19 transmission and infection that they had to personally face. More specifically, BC HCWs were concerned about not being able to provide patient-centred, compassionate care, not being able to carry out their professional duties effectively, and about the impact of telehealth. They were also concerned over the risks that they, their colleagues, family members, and friends were facing, including, in some cases, personally challenging circumstances. An additional source of moral distress centred on disagreements with the pandemic management protocols that were being introduced.
These results align with previous studies [14,16,17,20] and highlight how the implementation of pandemic-management protocols contributes to moral distress by challenging standard professional routines and approaches to effective, compassionate, and patient-centred healthcare delivery. They also suggest that the pressure to prioritize the health and safety of patients and communities over their own safety and that of those closest to them leaves HCWs feeling vulnerable and overburdened. These sources of moral distress continued to be present during Surveys 2 and 3, despite the fact that those surveys were deployed at a time when some social restrictions had been lifted and progress was being made in the provincial vaccination program. This finding highlights the constant pressure that BC HCWs experienced during at least the first 15 months of the BC pandemic response and contrasts with that of Song [27]. In their study, which also included several surveys deployed at different times during the COVID-19 pandemic, the authors found that by stage 2 (24 October to 30 November 2020) the participants expressed "resignation around adapting to the new normal" [26] (p. 3). The fact that this theme did not emerge in our study highlights how context-specific the impact of the COVID-19 pandemic can be. Importantly, participants identified two sources of current or anticipated ethical challenges: COVID-19 fatigue and the collateral impacts of the pandemic response. As the pandemic progressed, HCWs had to continue to endure the moral stressor while simultaneously experiencing increasing fatigue. Interestingly, this fatigue was associated with increased concern over the unanticipated consequences that the pandemic was causing for more vulnerable populations, with concerns over the quality of the clinical care provided, and with disagreements with the protocols in place. Study participants stated that they were relying on personal sources of support to cope with moral distress.
This result highlights the importance of individual factors in managing these types of negative experiences, which are deeply personal [28]. However, as previous studies have indicated [29,30], broader institutional strategies are also required. Our study shows that it is important to HCWs that such institutional strategies are individualized; centred on meaningful, effective communications between leaders and staff members; and address operational concerns by, for example, managing workloads effectively and introducing financial compensation. --- Limitations of the Research This research was limited by several factors. First, the research was conducted solely in BC, which, compared to other areas of Canada and the world, had a unique experience of the pandemic in terms of the timing of certain events, including the impact of different waves of the pandemic, vaccine roll-out, vaccine acceptance, etc. However, while the experience was unique, there were also many commonalities with other geographic regions, including significant disruptions to societal functioning due to public health measures such as lockdowns, social distancing, and travel restrictions, as well as significant disruptions to the healthcare system. As such, although our study was geographically circumscribed, its results are likely applicable to other jurisdictions. A second limitation relates to the characteristics of the HCWs who self-selected to complete the survey. The majority of participants worked for two of the five health authorities, and the sample size is insufficient to be scientifically representative. Therefore, the results, discussions, and conclusions described in this paper are strictly related to the sample researched and are not necessarily representative of the experience of all HCWs in BC. However, as aspects of the pandemic response were unified across the province, the results are, nonetheless, informative.
In addition, survey respondents were more likely to self-select if they were interested in the study because of their own experience of moral distress during the pandemic. However, because the aim of the study was to characterize the experience of, impact of, and response to moral distress during the pandemic, this self-selection likely worked in the study's favour by increasing the presence of moral distress experiences in the sample. Finally, the study was predominantly completed by individuals who identified as White females between the ages of 31 and 60. This demographic profile reflects the make-up of the healthcare system yet under-represents the voices of those who likely faced significant and unique impacts of the pandemic, including those who were non-White, newer immigrants, and of lower socio-economic status. --- Future Research Directions Several future research directions are suggested to improve the ability of the healthcare system to respond effectively to moral distress. In particular, our team plans to conduct additional work to test the reliability of the survey tool and to complete follow-up research on the experience of healthcare workers who are racialized and face systemic barriers and inequities, in order to determine whether the coping mechanisms identified in this study are applicable, accessible, and likely to be effective for them and the communities they represent. In addition, further efforts to address psychological health and wellness in effective, low-barrier, and culturally appropriate ways are essential. Finally, further work and consideration should be given to how to prepare healthcare workers during the early stages of their careers for the conflicting values and responsibilities they may face during public-health emergencies.
--- Conclusions This qualitative study showed that many BC HCW survey participants experienced moral distress during the initial stages of the COVID-19 pandemic as they struggled to provide effective, compassionate, and patient-centred care while also facing significant personal risks. Many also disagreed with aspects of the pandemic management protocols. The results demonstrate that COVID-19 fatigue and the collateral impacts of the pandemic introduce additional layers to HCWs' experiences of moral distress. Coping strategies were identified at the individual, team, and organizational levels, including: providing personalized support; increasing the effectiveness of communications between leaders and staff members; addressing operational concerns by managing workloads effectively; and introducing financial compensation. These strategies can be used by organizations as potential starting points to facilitate both individual and system recovery. This study adds to the literature on moral distress by highlighting the scale of the impact that pandemics can have on all aspects of the healthcare system, that is, beyond critical care, which is the main focus of the moral-distress literature. It also highlights how the societal impact of a pandemic can be a source of moral distress for HCWs. Finally, the study results identify specific measures that healthcare organizations can implement to mitigate the experience of moral distress and inform healthcare leaders about the importance of maintaining and retaining a skilled workforce that has been significantly battered by the pandemic. This is particularly important given the impacts of long COVID-19 and of the continuing variants of concern on healthcare workers and the healthcare system, which are leading to staffing shortages as well as supply-chain shortages. These shortages continue to place pressure on the healthcare system in multiple ways, including complex and morally distressing triage decisions and fatigue.
Ongoing monitoring of the impacts of long COVID-19 and the pandemic's successive waves on the moral wellness of staff is essential to ensure adaptive and evolving strategies to aid in healthcare workers' wellbeing and overall system function.
--- Data Availability Statement: The de-identified aggregated study findings are contained within this article. Individual survey data are available on request from the corresponding author and will be de-identified prior to sharing. --- Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph19159701/s1, Supplementary S1: Survey 1; Supplementary S2: Surveys 2 and 3. --- Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and was approved by the University of British Columbia's Behavioural Research Ethics Board (H20-01104). Approval was given on 28 April 2020.
Informed consent was obtained from all participants involved in the study. --- Informed Consent Statement: Written informed consent was obtained from all participants to conduct and publish this paper, and all participants were assured during the informed consent process that their responses would remain confidential. --- Conflicts of Interest: The authors declare they have no conflicts of interest.
Introduction --- Background African Americans continue to experience disproportionately higher rates of cardiovascular disease and metabolic disorders than their White counterparts [1]. Although health disparities have been attributed to multiple factors, African Americans have been more likely than other racial and ethnic groups to report perceived racial discrimination (eg, 71.3% vs 24% in non-Hispanic Whites) [2][3][4]. An extensive body of research shows that exposure to racial discrimination events, or perceived racial discrimination, contributes to poor health, poor health behaviors, and health disparities [4,5]. Social stress derived from systems of inequality, such as racial discrimination, may provoke severe psychological and physiological responses and has been associated with unhealthy behaviors [6,7]. Studies have shown that perceived racial discrimination is linked to the consumption of fatty foods, smoking, and alcohol intake [4,8]. Increased physical activity (PA) may buffer the impact of social stress resulting from racial discrimination [9,10]. To date, studies on the relationship between racial discrimination and PA have shown inconclusive findings. For example, in a multiethnic study of PA, racial discrimination was not associated with pedometer-measured PA, whether examined in the full sample or separately by race and ethnicity [11]. An unexpected finding was reported in the Jackson Heart Study cohort [12], in which higher daily and lifetime racial discrimination was associated with more self-reported PA in women. In addition, although not in the context of racial discrimination, some studies of psychological stress in other populations have linked perceived stress with less PA [13,14]. In recent studies that examined both between- and within-person effects of daily stress on PA, there was significant between-person variability in the relationship between PA and stress [15,16].
For example, the relationship may be bidirectional for some people; for others, it may be unidirectional or absent, suggesting that examining the within-person effect of stress on PA may address the limitations of the between-person analyses that predominate in traditional research [15,16]. To date, data on the relationship between racial discrimination and PA are sparse and inconsistent. Part of the reason is that the literature to date on the effect of perceived racial discrimination on PA comprises mostly cross-sectional studies that capture retrospective measures of lifetime discrimination associated with individuals' current health outcomes. Such data may be subject to recall and rumination biases. Furthermore, racial microaggressions-the brief and commonplace daily verbal or nonverbal denigrating messages directed toward racial and ethnic minorities that carry the offending party's implicit or unconscious bias-have been shown to disempower racial minorities and may negatively impact health outcomes [17,18]. However, this subtle form of racial discrimination is difficult to capture with retrospective measures and has been understudied in research on perceived racial discrimination and health. In this study, we prospectively examined racial microaggression (hereinafter microaggression) as a subtle form of racial discrimination, as well as lifetime racial discrimination. Examining perceived racial discrimination or microaggression at a single point in time, without accounting for the fact that these experiences fluctuate and combine with the cumulative past experience of racial discrimination, limits the ability to examine differences in behavioral responses across settings and time. Ecological momentary assessment (EMA) is a real-time, self-report data-capturing method in which people report behavior at multiple time points in their natural environment.
It may reduce recall biases and enhance ecological validity by collecting self-report data that are more proximal to the time and place (ie, the real world) in which stressful events and behaviors occur [19]. Recently, a growing number of studies exploring discrimination and health outcomes using EMA have been published; for example, the relationships between real-time discriminatory experiences and health behaviors have been examined in various sexual, gender, and ethnic minority groups [20][21][22]. The EMA method provides the opportunity to examine how fluctuations in daily perceived racial discrimination or microaggressions are associated with PA among African Americans at the within-person level. In addition, the use of accelerometers can minimize the weaknesses of self-report measures of PA. --- Objectives Therefore, the purposes of this pilot study are (1) to describe the relationships among demographic (age, sex, income, and education), anthropometric and clinical (BMI, blood pressure, and body composition), and psychological (depression) factors and lifetime racial discrimination and (2) to examine the effects of real-time racial discrimination on total energy expenditure, sedentary time, and moderate-to-vigorous physical activity (MVPA), measured objectively with accelerometers combined with a real-time data-capture strategy (ie, EMA), in healthy African American adults at both the group (ie, between-person) and individual (ie, within-person, or N-of-1) levels. --- Methods --- Study Design This study is a substudy of an intensive, observational, case-crossover study designed to examine the effects of perceived racial discrimination on physiological (ie, stress biomarkers) and behavioral responses in African Americans. Details of the overall study protocols have been published elsewhere [23].
In a case-crossover design, each participant serves as their own control to assess within-person effects on repeatedly measured PA outcomes [24]. Within-person analysis of effects on PA occurred at the 2-hour interval level (using an EMA prompt at the end of each interval querying participants about racial discrimination over the duration of the interval) and at the day level (using average scores of EMA responses across the day). --- Participants and Recruitment Building on a relationship developed over the past 10 years, the research team recruited participants from greater New Haven communities in Connecticut via flyers and word-of-mouth communication within African American communities. Before implementing the study, we held meetings with community stakeholders to discuss an effective recruitment plan and the details of the pilot study protocols. Potential participants who called in were screened by phone and scheduled for a baseline orientation visit. The inclusion criteria were (1) self-reported African American or Black, (2) aged between 30 and 55 years, (3) currently employed, (4) ownership of a smartphone, (5) able to respond to smartphone-based random survey prompts (ie, EMA) at least 3 times per day, and (6) English speaking. We excluded participants who were pregnant or who had serious acute or terminal medical conditions that would preclude PA. The sample size (n=12) was largely based on guidelines for pilot studies that suggest 10 to 40 participants per cell [25]. Even assuming moderate attrition of 20% (2/12), we would have 10 subjects, which is still within the guidelines for pilot studies [26]. We also estimated the minimum detectable effect sizes of other outcomes (ie, stress biomarkers; data not shown [23]).
Our observations would be able to detect medium effect sizes of 0.53-0.60 on primary outcomes (stress biomarkers) repeatedly measured within the individual with 80% power at a 5% significance level, based on a previous study using stress biomarkers [27]. --- Baseline Measures Baseline surveys included sociodemographic characteristics; current smoking status (yes or no); and alcohol consumption measured by the Alcohol Use Disorders Identification Test [28], which includes frequency of drinking and amount of alcohol consumed. We also used the validated self-report measures described below, collected at baseline. Perceived racial discrimination was measured at baseline using 2 scales. The Major Life Discrimination (MLD) scale is a 9-item self-report measure of past exposures to lifetime discrimination in diverse domains. Respondents indicated whether they had ever experienced each listed major discrimination event (eg, denied a bank loan, unfairly fired, unfair treatment in getting a job, at work, or when stopped by police; Cronbach α=.88) [29]. The MLD score represented the sum of each yes or no item (range 0-9). Higher scores indicate more lifetime discriminatory experiences. The Race-Related Events Scale (RES) has 22 items to assess exposure to stressful and potentially traumatizing experiences of race-related stress in adults. Respondents indicated whether they had ever experienced each event (yes or no), and the items were summed for a total RES score ranging from 0 to 22 (Cronbach α=.78-.88) [30]. Higher scores indicate more experiences of race-related stressful events. The Black Racial Identity-Centrality subscale (Cronbach α>.77) is an 8-item, 7-point Likert scale (ranging from strongly disagree=1 to strongly agree=7). The centrality dimension of racial identity refers to the extent to which individuals normally define themselves with regard to race. It is a measure of whether race is a core part of an individual's self-concept [31].
After reverse-scoring 3 items, the overall score was calculated by averaging all items, with higher scores indicating stronger racial identity. For subjective social status, participants were asked to place an "X" on the rung of a 10-rung ladder that best represented where they thought they stood. The ladder was described as follows: at the top are people who are the best off (those who have the most money, the most education, and the best jobs), and at the bottom are people who are the worst off (those who have the least money, the least education, and the worst jobs or no job) (test-retest reliability, ρ=0.62) [32,33]. --- The Center for Epidemiological Studies Depression Scale (CES-D) is a 4-point Likert scale that captures current depressive symptoms with 20 items on how respondents have felt or behaved during the past week by selecting 1 of 4 options (0=rarely, 1=some of the time, 2=occasionally, and 3=most of the time). The items were summed to obtain a total score. Higher scores indicate greater depressive symptoms (Cronbach α>.85) [34]. A recent meta-analysis [35] showed that a cutoff point of 20 yields a more adequate trade-off between sensitivity and specificity, compared with the cutoff point of 16, which has been used to indicate probable clinical depression. --- EMA Measures Perceived racial discrimination was measured using the Experiences of Discrimination (EOD; Cronbach α>.88) [29] and Racial Microaggression Scale (RMAS; Cronbach α>.85) [36,37] adapted for EMA data collection. The EOD has subscales for worry, global, filed complaint, response to unfair treatment, day-to-day discrimination, and skin color [29]. The RMAS has subscales for invisibility, criminality, low-achieving or undesirable culture, sexualization, foreigner or not belonging, and environmental invalidations [36].
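The reverse-score-and-average step for the centrality subscale can be sketched in a few lines. The function below is illustrative only: the reverse-scored item positions shown here are hypothetical placeholders (the actual positions are specified by the published instrument), and this is not the study's scoring code.

```python
# Sketch of scoring an 8-item, 7-point Likert subscale with reverse-scored
# items. The reverse_items indices are hypothetical, for illustration only.

def score_centrality(responses, reverse_items=(1, 4, 6), scale_max=7):
    """Average 1-7 Likert responses after reverse-scoring selected items.

    responses: list of 8 integers in 1..7, in item order.
    reverse_items: 0-based indices of reverse-worded items (assumed here).
    """
    scored = [
        (scale_max + 1 - r) if i in reverse_items else r
        for i, r in enumerate(responses)
    ]
    return sum(scored) / len(scored)

# A respondent endorsing high centrality on direct items and disagreeing
# with the reverse-worded items scores near the top of the 1-7 range.
print(score_centrality([7, 1, 7, 6, 2, 7, 1, 6]))  # 6.625
```

The same pattern (reverse-score, then aggregate) applies to any Likert subscale; only the item count, scale maximum, and reverse-item positions change.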
As the EOD and RMAS measure experiences of unfair treatment over the past month to year, with response options that are not relevant for real-time EMA assessment, response choices were revised for the EMA time frame using yes or no answers or Likert scale options. We also used a random subscale inclusion strategy so that only 60% of the items would be included in each EMA survey to reduce subject burden and survey fatigue [38]. When prompted, participants were asked to report whether they had experienced any unfair treatment from a list of 11 common daily racial discrimination experiences since their last prompt, or within the past 2-3 hours if they missed or did not complete their last prompt (eg, "treated with less courtesy than other people because of your race or ethnicity," yes=1 or no=0), and also from a list of 32 microaggression experiences (eg, "people mistake me for being a service worker simply because of my race or ethnicity," 1=strongly disagree to 7=strongly agree). Possible daily scores of the EOD range from 0 to 10, with higher scores indicating more racial discriminatory experiences. Possible daily RMAS scores range from 15 to 105, with higher scores indicating more microaggression. Each survey (5 times per day) consisted of 8-15 different combinations of questions varying by time of day (sequentially from the first survey to the fifth survey throughout the day). --- PA Measures PA was measured using a triaxial hip accelerometer (ActiGraph GT9X), which samples movement at 30 Hz and aggregates data into 60-second epochs. The intensity cut points for PA were defined using validated thresholds for vertical axis accelerometry (sedentary <100 counts/min, moderate=2020 counts/min, and vigorous=5999 counts/min) [39]. Energy expenditure was calculated using respective validated triaxial vector magnitude (VM) equations for >2453 VM counts per minute [40] and ≤2453 VM counts per minute [41].
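The random subscale inclusion strategy described above (each prompt carrying roughly 60% of the item pool) can be sketched as follows. The item pool contents are placeholders, not the actual EOD/RMAS items, and the seeded generator is only for reproducibility of the example.

```python
import random

# Sketch of random item inclusion to reduce EMA burden: each prompt
# receives a random ~60% subset of the item pool. Items are placeholders.

def select_items(item_pool, fraction=0.6, rng=random):
    """Return a random subset containing `fraction` of the items."""
    k = round(len(item_pool) * fraction)
    return rng.sample(item_pool, k)

pool = [f"item_{i}" for i in range(1, 11)]   # placeholder 10-item pool
rng = random.Random(42)                      # seeded for a reproducible demo
prompt_items = select_items(pool, 0.6, rng)
print(len(prompt_items))                     # 6 of the 10 items per prompt
```

Because each prompt sees a different random subset, daily scale scores must be computed from the items actually administered (eg, by averaging answered items), which is one reason the observed daily score ranges differ from the full instruments' ranges.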
Nonwear periods were defined as ≥60 consecutive minutes of zero activity intensity counts, with allowance for 1-2 minutes of counts between 0 and 100. We considered a day valid if ≥10 hours of activity counts were collected [39] and a 2-hour interval valid if the full time was collected. Accelerometer data were downloaded into ActiLife software (ActiGraph) using the software's normal filters and scored to create the following variables: total wear time (min), daily wear time (hours/day), total daily energy expenditure, MVPA (min/day), and sedentary time (hours/day). For within-person analyses, these were normalized to wear time (eg, percent time in MVPA). --- Procedures Institutional review board approval was obtained from Yale University, and written informed consent was obtained from all participants. At the initial study visit, face-to-face baseline interviews were completed using validated questionnaires. Body weight and height were measured using a portable electronic scale (Omron HBF-514C body composition monitor and scale) and a stadiometer (Seca) following standard procedures. BMI was calculated as weight (kg) divided by height squared (m²). Percent body composition was measured using the same digital scale, which measures foot-to-foot bioelectric impedance. This method has demonstrated significant correlations with the gold standard of body fat calculation (ie, dual energy x-ray absorptiometry scan) [42]. After 5 minutes of rest, blood pressure was measured twice with an automated cuff (Omron HEM 780 IntelliSense automatic blood pressure monitor), with 1 minute between readings, and the average of the 2 readings was recorded. To tailor the EMA survey delivery times, we asked participants for their sleep, wake, and commuting schedules by phone before the baseline study visit. At the baseline visit, we loaded the mEMA app, which is compatible with both iOS and Android operating systems, onto each participant's smartphone.
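The nonwear and valid-day rules above can be sketched as follows. This is a simplified interpretation of the stated rule (≥60 consecutive zero-count minutes, tolerating up to 2 interruption minutes with counts under 100, and ≥10 hours/day of wear), not ActiLife's exact algorithm.

```python
# Simplified sketch of accelerometer wear-time validation, assuming
# per-minute activity counts for one day.

def wear_minutes(counts, window=60, max_interrupt=2, interrupt_cap=100):
    """Return the number of minutes classified as wear time."""
    n = len(counts)
    nonwear = [False] * n
    i = 0
    while i < n:
        if counts[i] == 0:
            # Extend a candidate nonwear run, tolerating brief low-count
            # interruptions (a simplification of the published rule).
            j, interrupts = i, 0
            while j < n:
                if counts[j] == 0:
                    j += 1
                elif counts[j] < interrupt_cap and interrupts < max_interrupt:
                    interrupts += 1
                    j += 1
                else:
                    break
            if j - i >= window:  # runs of >=60 min count as nonwear
                for k in range(i, j):
                    nonwear[k] = True
            i = j
        else:
            i += 1
    return nonwear.count(False)

def is_valid_day(counts, min_wear_hours=10):
    """A day is valid if at least `min_wear_hours` of wear were recorded."""
    return wear_minutes(counts) >= min_wear_hours * 60

# Example: 9 hours of activity followed by a 15-hour zero-count gap
# fails the >=10 hours/day criterion.
day = [500] * (9 * 60) + [0] * (15 * 60)
print(is_valid_day(day))  # False
```

Per-minute wear flags like these are also what make the within-person normalization possible (eg, percent of wear time spent in MVPA per 2-hour interval).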
The EMA survey prompted each participant at a random time within each of the 5 preprogrammed windows daily (ie, signal-contingent sampling) for 7 days (a total of 35 signals) to ensure adequate spacing throughout the day, except for nighttime and commuting time. Upon hearing the signal or feeling the vibration, participants were instructed to complete a short electronic question sequence using their smartphone. Each EMA survey took approximately 3-4 minutes to answer. The EMA data collection system recorded the date and time at which each participant responded to a random prompt survey and the date and time the survey expired. The survey expired after 40 minutes of nonresponse. If no entry was made, the EMA program became inaccessible until the next recording opportunity. Participants were instructed to wear an accelerometer on their right hip during waking hours for 7 consecutive days to obtain at least three weekdays and one weekend day to determine daily variability [39,43]. A paper diary was provided, and participants were instructed to record the times at which they took off (eg, to shower) and put on their accelerometers. All participants received one-on-one in-person training on the EMA surveys and accelerometers. We also provided pictures and step-by-step written instructions on the use of EMA and accelerometers, a tiered payment schedule, and research staff contact information. In addition to the study questions, we sent daily reminders through EMA to wear the accelerometer for all 7 days. We also assessed the risks and symptoms of participants at risk for depression (based on CES-D>16) and suggested primary care office visits or made referrals per study protocol. --- Data Management and Analysis EMA data were exported from the mEMA server to a comma-separated values file format. We entered the EMA and accelerometer data as well as the baseline surveys and anthropometric and clinical data into a database uploaded into SAS for analysis.
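Signal-contingent sampling, as described above, draws one random prompt time within each preprogrammed window. The sketch below illustrates the idea; the window boundaries are assumed for the example (in the study they were tailored to each participant's sleep and commuting schedule).

```python
import random

# Sketch of signal-contingent sampling: one random prompt per daily window.
# Window boundaries below are illustrative, not the study's actual windows.

def schedule_prompts(windows, rng=random):
    """windows: list of (start_hour, end_hour) pairs.
    Returns one random prompt time (minutes since midnight) per window."""
    prompts = []
    for start, end in windows:
        prompts.append(rng.randrange(start * 60, end * 60))
    return prompts

windows = [(9, 11), (11, 13), (13, 15), (15, 17), (17, 19)]  # assumed windows
rng = random.Random(7)  # seeded so the demo is reproducible
times = schedule_prompts(windows, rng)
print([f"{m // 60:02d}:{m % 60:02d}" for m in times])
```

Restricting each draw to its own window guarantees the spacing property the study relies on: exactly 5 prompts per day, never clustered, and never during excluded periods such as nighttime.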
We reviewed the data; addressed errors, missing data, outliers, and skewness; and calculated the scale scores for the EMA responses. Descriptive analyses were used for demographic characteristics, anthropometric and clinical data, and the average values of the EMA and PA data. Pearson and Spearman correlation coefficients were calculated at the individual level using the following variables: age, sex, BMI, CES-D, RES sum, MLD sum, annual income, education, blood pressure, body fat, racial identity, subjective social status, smoking and alcohol consumption, EMA survey data, and accelerometer data. EMA and PA data were scored at the daily and individual (within-subject average) levels. Intraclass correlation coefficients were calculated to quantify the proportions of the total variance of PA explained by within- and between-person variances. Multilevel models for predicting PA (percentage sedentary time and percentage MVPA) were developed to examine the associations with EMA survey data (racial discrimination and microaggression) at the 2-hour interval (within-person), daily (within-person), and individual (between-person) levels. The models included within- and between-person levels of racial discrimination (model 1) or microaggression (model 2) with covariates (eg, age, sex, and BMI). Compound symmetry was used as the within-person correlation structure. Standardized coefficients were obtained using standardized outcomes and covariates with a mean of 0 and an SD of 1. --- Results --- Overview The mean response rate for EMA surveys was 83% (29/35; SD 16%), and the mean number of EMA responses per day was 4.0 (SD 1.2) out of a possible maximum of 5 per day. A total of 83.3% (10/12) of participants met the inclusion requirements for valid accelerometer data (≥10 hours/day wear time) and wore the accelerometer on the hip 6 out of 7 days. The mean EMA-reported daily racial discrimination was 0.61 (SD 0.85) per day, with a range of 0-2.28 (possible range: 0 to 10 times/day).
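The within- and between-person predictor levels entered into the multilevel models described above rest on a standard preparatory step, person-mean centering: each daily score is split into the person's mean (between-person component) and the deviation from that mean (within-person component). A minimal sketch, with invented data values for illustration:

```python
# Sketch of the within/between decomposition used to prepare predictors
# for multilevel models. The daily scores below are invented.

def center_within_person(records):
    """records: list of (person_id, value) pairs.
    Returns (person_id, between, within) triples, where `between` is the
    person's mean and `within` is the deviation from that mean."""
    sums, counts = {}, {}
    for pid, value in records:
        sums[pid] = sums.get(pid, 0.0) + value
        counts[pid] = counts.get(pid, 0) + 1
    means = {pid: sums[pid] / counts[pid] for pid in sums}
    return [(pid, means[pid], value - means[pid]) for pid, value in records]

# Two participants' daily discrimination scores over three days each:
daily = [("A", 0), ("A", 1), ("A", 2), ("B", 2), ("B", 2), ("B", 5)]
for pid, between, within in center_within_person(daily):
    print(pid, between, within)
```

The `within` column answers "more than usual for this person?" while the `between` column answers "more than other people?", which is why a day-level effect can be significant even when the person-level effect is not, as in the sedentary time results reported below.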
Three participants reported no daily racial discrimination over the 7-day period (ie, their 7-day mean racial discrimination was 0). For EMA-reported daily microaggression, the mean score was 50.26 (SD 18.11), with a range of 19.14-76.71 (possible range: 15-105/day). Participant characteristics and descriptive statistics from the survey, anthropometric and clinical, and accelerometer data are presented in Table 1. The mean age was 43.4 (SD 7.73) years. The majority worked full-time. Approximately 67% (8/12) had an annual income of less than US $60,000. The mean CES-D score was 21.08 (SD 8.36). The mean Black racial identity (centrality) score was 5.21 (SD 1.46), indicating that most of our participants self-defined Black race as a core part of their self-concept. The mean subjective social status was 7.08, indicating that most rated their social status as high in the community. The mean BMI was 34.19 (SD 11.41) kg/m²; approximately 42% (5/12) of the participants were obese. The mean MVPA was 18.5 minutes/day, and the mean sedentary time was 8.6 hours/day. Paired data, including both EMA and valid accelerometer data, resulted in a sample size of 9. --- Between-Person Survey and EMA Analyses In the bivariate analysis using baseline surveys and anthropometric and clinical data, depressive symptoms were associated with major lifetime discrimination (r=0.58; P=.04) and a higher frequency of major lifetime discrimination (r=0.67; P=.04). Visceral fat was associated with diastolic blood pressure (r=0.62; P=.04) and sedentary time (r=0.73; P=.04) but was not associated with major lifetime discrimination. Income level was not significantly associated with Black racial identity (centrality; r=-0.26; P=.41). Table 2 shows the bivariate correlations between the baseline sample characteristics and the averages of the EMA-reported daily racial discrimination variables or PA variables.
Greater EMA-reported daily racial discrimination was significantly associated with younger age (r=-0.75; P=.02). Black racial identity was not significantly associated with EMA-reported daily racial discrimination (r=0.21; P=.58) or microaggression (r=0.06; P=.88). Daily EMA-reported microaggression was associated with depressive symptoms (r=0.66; P=.05), past race-related events (r=0.82; P=.004), and major lifetime discrimination (r=0.78; P=.01). Higher total energy expenditure was significantly associated with less major lifetime discrimination (r=-0.92; P=.004). Less sedentary time was significantly associated with stronger Black racial identity (r=-0.68; P=.04). More MVPA was significantly associated with lower levels of subjective social status (r=-0.75; P=.02). --- Within- and Between-Person EMA Analyses Intraclass correlation coefficients were calculated to represent the proportion of the total variance of the PA outcomes explained by the between-person level. They were 0.54, 0.26, and 0.66 for total energy expenditure, sedentary time, and MVPA, respectively. The within-person interval-level analysis found that during the 2-hour windows in which people reported more perceived racial discrimination, they had moderately greater sedentary time (β=.30, SE 0.21; P=.18) and slightly more MVPA (β=.04, SE 0.13; P=.77). In contrast, during the 2-hour windows in which they reported more perceived microaggression, they had less sedentary time (β=-.11, SE 0.16; P=.51) and less MVPA (β=-.34, SE 0.18; P=.10). However, none of these relationships during the 2-hour windows reached statistical significance. The within-person daily-level and between-person analyses are presented in Table 3.
In the within-person daily-level analyses, the association between racial discrimination and sedentary time was significant (β=.30, SE 0.14; P=.03), indicating that on days when participants reported more perceived racial discrimination, they had moderately more sedentary time. --- Discussion --- Principal Findings Perceived racial discrimination is a significant psychological stressor that is hypothesized to have negative mental and physical health consequences, with potential interactions with unhealthy behaviors. The relationship between overall psychological stress level and PA using EMA and objective measures has been evaluated in the general population; however, in what we believe to be the first published study of its kind, we examined momentary- and daily-level perceived racial discrimination and PA levels using EMA and accelerometers in African Americans. We collected repeated real-time racial discrimination exposure data in the natural environment while simultaneously collecting objective measures of sedentary behaviors and PA among African Americans. We also demonstrated the utility and feasibility of EMA coupled with accelerometers in studying the relationship between daily racial discrimination and PA in African Americans. Conventional accelerometer protocols require only 4 valid days in a 7-day wear period for the data to be considered valid [39,43]. Approximately 83% (10/12) of our participants met the inclusion requirement for valid accelerometer data (≥10 hours/day wear time) and wore the accelerometer 6 out of 7 days, and they also showed high adherence to the EMA protocol. In the examination of within-person level data, on days when participants reported more perceived racial discrimination than usual (ie, higher than their personal mean), more sedentary time was observed in the accelerometer data. The between-person analysis did not duplicate this finding in our study.
However, this is consistent with the findings of between-person analysis in a prior study examining the relationship between general psychological stress and sedentary behaviors in other populations: end-of-day general stress ratings were not associated with sedentary time in the between-person analysis (at the group level) [16]. The influence of stress on sedentary behavior varies according to the source of stress within individuals [16,44]. Heterogeneity in the effect of stress on the amount and pattern of sedentary behaviors has been documented; for example, argument-related stress was associated with increased sedentary time, whereas work-related stress was associated with decreased sedentary time [16,45]. Similarly, in a study of sexual and gender minority individuals, between-person associations of discriminatory experiences and substance use were not significant, whereas more discriminatory experiences were significantly associated with more nicotine, alcohol, and drug use within the person [21]. This highlights the potential limitations of between-person methods (nomothetic) that predominate in research and suggests that the within-person level (idiographic) precision health approach may be highly relevant to target reductions in sedentary time and other unhealthy behaviors [16,44]. An important advantage of the EMA methodology is its ability to examine the frequency of racial discrimination experiences in real time and assess the impact of the experiences in a microtemporal relationship (eg, repeated assessments across minutes or hours). In our study, participants reported, on average, 0.61 overt racial discrimination experiences per day, and most participants experienced substantial daily microaggressions. The reported frequency of racial discrimination varies widely across studies [46][47][48][49]. 
In earlier cross-sectional studies using retrospective measures, discrimination was reported to occur only infrequently [50]; however, recent studies using EMA or other types of daily diaries have revealed that discrimination may occur multiple times per day. For example, in a study using EMA, African American participants reported about 2 experiences of racism per day [20]. In another study using EMA among African American adolescents [22], participants reported 5 experiences of racial discrimination per day when comprehensive measures of racial discrimination were used, including social media, vicarious, and teasing experiences, along with the more commonly measured individual and general forms of racial discrimination. In several studies of psychological stress in the general population, not specific to racial discrimination-related stress, episodic stress predicted less PA, more sedentary behavior, and reduced total energy expenditure [15,51]. Consistent with these studies, we found that major lifetime discrimination (from a retrospective measure) was significantly associated with lower total energy expenditure measured by the accelerometer. However, EMA-reported microaggressions were not associated with PA outcomes in our within- or between-person analyses. The nonsignificant relationship may be because of the small sample size and lack of variability in the frequency of microaggression experiences within and across days. Overall, our participants reported frequent daily microaggressions, which may not have had a significant impact on their daily PA levels. However, the observed effect sizes based on standardized β coefficients [52] suggest the need for more studies examining the determinants of PA and sedentary behaviors with a larger sample size and a longer assessment period.
Consistent with other studies [46,53,54], retrospectively measured exposure to race-based discrimination over a lifetime (assessed at baseline) was significantly associated with more depressive symptoms and with more daily microaggression experiences measured by EMA. Given the different data collection methods (retrospective surveys vs EMA) in this study, we could not determine the temporal relationship between racial discrimination or microaggression and depressive symptoms, and the findings may reflect a reverse causal relationship (eg, people with more depressive symptoms or such traits may perceive more microaggression). However, lagged effects of racial discrimination on depressive symptoms in subsequent days were reported among African Americans and Hispanics or Latinos in other studies [49,55], suggesting that individuals may not easily or fully recover from discrimination, and racial discrimination may have lasting effects on mental health [50,56]. Taken together, our findings highlight the important association between racial discrimination and mental health. Furthermore, future studies examining additional psychological factors, such as traits and personality, are needed to determine both the concurrent and lagged effects of racial discrimination on health and health behaviors. Such studies may inform the development of individualized interventions that can buffer the harmful effects of racial discrimination on health. --- Strengths and Limitations This study had several limitations. Although we found similar trends in within-or between-person effects on sedentary behaviors and PA, compared with other studies of general psychological stress, our small sample size offers limited evidence supporting racial discrimination as an antecedent to sedentary behaviors or PA. EMA minimizes recall bias and errors. 
However, it is also possible that our study findings may have been influenced by vigilance to discrimination arising from the repetitive assessment involved in EMA. In addition, the high CES-D scores observed in our participants may have influenced the associations with perceived racial discrimination or PA. Although findings are mixed, previous studies have shown that neighborhood environment factors such as walkability, safety, or crime were associated with individuals' PA levels in the general population [57,58]. We obtained walkability (Walk Score) and crime index data based on participants' zip codes (data not shown); however, the predominantly Black neighborhoods in our sample showed a lack of variability. Future studies with measures of social environment, segregation, and perceived neighborhood environments, in addition to objective built environments, would be helpful in understanding the relationship between PA and relevant correlates. Owing to the exploratory nature of our pilot study and the scarcity of EMA studies of racial discrimination, we conducted a 2-hour within-person, prompt-level analysis; however, assessments may need longer time frames to determine the association between racial discrimination and PA levels. In addition, using event-contingent sampling (ie, EMA reported when a discrimination event occurs) may be helpful in determining the frequency of racial discrimination; one caveat is that it may not accurately measure events if many participants forget to report them (missing EMA). In addition, our study included only in-person and individual racial discrimination experiences. Including web-based (eg, communication on social media) and vicarious discrimination experiences (eg, watching traumatic videos of police brutality) may provide more valid frequency estimates [22,59].
Future efforts should include studies with larger samples, more extensive racial discrimination measures, and EMA sampling designed to determine the optimal frequency of EMA needed to accurately capture discriminatory experiences and to examine their relationship with health behaviors. Despite these limitations, this study provides valuable insights into examining the within-person effects of racial discrimination on health behaviors and suggests the need to examine the more complex relationship between racial discrimination and lifestyle behaviors with time-varying factors. There is a growing emphasis on within-person examination of health behaviors and psychosocial correlates and on the importance of leveraging these data to develop personalized, just-in-time interventions [50,60]. Examining this daily process using a within-person approach has the potential to elucidate the mechanisms by which racial discrimination may affect health and health behaviors and to guide the development of personalized interventions for increasing PA and decreasing depressive symptoms in racial and ethnic minorities. --- Conclusions In conclusion, the results of this study highlight the utility and feasibility of a within-person approach to target reductions in sedentary time and improvements in PA associated with daily racial discrimination by using EMA and an objective measure of PA. Further studies are needed to confirm the observed findings in light of the limitations of this study, including its small sample size. A precision health approach that incorporates between-person associations and accounts for within-person variations in the relationship between racial discrimination and health behaviors is warranted to mitigate race-based health disparities. --- Conflicts of Interest None declared. ©Soohyun Nam, Sangchoon Jeon, Garrett Ash, Robin Whittemore, David Vlahov. Originally published in JMIR Formative Research (https://formative.jmir.org), 07.06.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included. --- Abbreviations | Background: A growing number of studies indicate that exposure to social stress, such as perceived racial discrimination, may contribute to poor health, health behaviors, and health disparities. Increased physical activity (PA) may buffer the impact of social stress resulting from racial discrimination. However, to date, data on the relationship between racial discrimination and PA have been mixed. Part of the reason is that the effect of perceived racial discrimination on PA has primarily been examined in cross-sectional studies that captured retrospective measures of perceived racial discrimination associated with individuals' current PA outcomes. The association between real-time perceived racial discrimination and PA among African Americans remains unclear. Objective: The purpose of this study is to examine the relationships of demographic, anthropometric and clinical, and psychological factors with lifetime racial discrimination and to examine the within- and between-person associations between daily real-time racial discrimination and PA outcomes (total energy expenditure, sedentary time, and moderate-to-vigorous PA patterns) measured by ecological momentary assessment (EMA) and accelerometers in healthy African Americans. Methods: This pilot study used an intensive, observational, case-crossover design of African Americans (n=12) recruited from the community.
After participants completed baseline surveys, they were asked to wear an accelerometer for 7 days to measure their PA levels. EMA surveys were sent to participants 5 times per day for 7 days to assess daily real-time racial discrimination. Multilevel models were used to examine the within- and between-person associations of daily racial discrimination with PA. Results: More EMA-reported daily racial discrimination was associated with younger age (r=-0.75; P=.02). Daily EMA-reported microaggression was associated with depressive symptoms (r=0.66; P=.05), past race-related events (r=0.82; P=.004), and lifetime discrimination (r=0.78; P=.01). In the within-person analyses, the day-level association of racial discrimination and sedentary time was significant (β=.30, SE 0.14; P=.03), indicating that on occasions when participants reported more racial discrimination than usual, more sedentary time was observed. Between-person associations of racial discrimination (SE 0.28; P=.29) or microaggression (SE 0.36; P=.34) with total energy expenditure were suggestive but inconclusive. Conclusions: Concurrent use of EMA and accelerometers is a feasible method to examine the relationship between racial discrimination and PA in real time. Examining daily processes at the within-person level has the potential to elucidate the mechanisms by which racial discrimination may affect health and health behaviors and to guide the development of personalized interventions for increasing PA in racial and ethnic minorities. Future studies with a precision health approach, incorporating within- and between-person associations, are warranted to further elucidate the effects of racial discrimination on PA. |
INTRODUCTION AND RATIONALE Despite advances in prevention, HIV/AIDS remains a serious public health concern among women around the globe, with seroprevalence varying by geographic location, age, race/ethnicity, and other factors. Epidemiological data for the New York City metropolitan area indicate that the rate of HIV diagnosis was 15.0 per 100,000 women in 2011 (Centers for Disease Control and Prevention, 2013b), which reflects a decrease from the prior year (18.5 per 100,000 women in the same area; Centers for Disease Control and Prevention, 2013a). However, while there has been a decrease in the annual number of women diagnosed with HIV, the proportion of HIV diagnoses attributable to heterosexual transmission has increased. The percentage of reported HIV diagnoses for women in New York City attributed to heterosexual transmission was 45.6% (n=892) in 2001 and 78.1% (n=593) in 2011; the percentage attributed to injection drug use went from 14.8% (n=289) to 3.7% (n=28) at these same time points (New York City Department of Health and Mental Hygiene, 2008, 2012). The continued risk for women associated with heterosexual transmission demands further examination, particularly among women with substance use problems, who often experience multiple HIV sexual risk factors. Readily apparent factors include substance use with sexual activity, which negatively affects safer sex practices; involvement with intimate partners who are at high risk for HIV; and diminished relational power to request condom use by partners.
Additional risk factors include this group's high rates of posttraumatic stress disorder (PTSD) and exposure to traumatic events, which are associated with increased HIV sexual risk behavior among women (Arriola, Louden, Doldren, & Fortenberry, 2005; Cavanaugh, Hansen, & Sullivan, 2010; El-Bassel, Gilbert, Vinocur, Chang, & Wu, 2011; Engstrom, Shibusawa, El-Bassel, & Gilbert, 2011; Hebert, Rose, Rosengard, Clarke, & Stein, 2007; Kalichman, Sikkema, DiFonzo, Luke, & Austin, 2002; Moreno, 2007; Plotzker, Metzger, & Holmes, 2007). More specifically, prior estimates have shown that approximately 29% of women in methadone treatment in New York City experience PTSD, exceeding the general population of women in the U.S. by 2.5 times; approximately 90% experience lifetime exposure to intimate partner violence (IPV), exceeding the general population of women in the U.S. by 3-4 times; approximately 58% experienced childhood sexual abuse, exceeding the general population of women in the U.S. by 1.3-4 times; and approximately 22% are living with HIV, exceeding the general population of women in New York City by 27.5 times (Browne, 1993; Engstrom, El-Bassel, Go, & Gilbert, 2008; Kessler, Sonnega, Bromet, Hughes, et al., 1995; New York City Department of Health and Mental Hygiene, 2012; Pereda, Guilera, Forns, & Gómez-Benito, 2009; Tjaden & Thoennes, 2000; Wyatt, Loeb, Solis, Carmona, & Romero, 1999). These collective experiences underscore the numerous vulnerabilities experienced by women in methadone treatment and the critical need to examine and address factors associated with HIV sexual risk behavior among this population.
As a precipitant of PTSD (Plotzker et al., 2007), intimate partner violence (Engstrom et al., 2008), sexual exchanges for drugs, money, or other goods (El-Bassel, Simoni, Cooper, Gilbert, & Schilling, 2001; Vaddiparti et al., 2006), and early onset of substance use (Raghavan & Kingston, 2006), childhood sexual abuse (CSA) may be an important distal factor in HIV risk among women with substance use problems. Numerous studies document a relationship between CSA and HIV sexual risk behaviors among women in the community, in clinics, and in schools (for a meta-analysis of 46 studies, see Arriola et al., 2005). Far fewer have examined this relationship exclusively among women experiencing substance use problems. While not unanimous (Grella, Anglin, & Annon, 1996; Medrano, Desmond, Zule, & Hatch, 1999), quantitative studies conducted exclusively with women experiencing problematic substance use or heavy substance use patterns generally find statistically significant relationships between CSA and HIV sexual risk behaviors (El-Bassel et al., 2001; Miller & Paone, 1998; Plotzker et al., 2007; Vaddiparti et al., 2006); however, prior research with this population has not examined ways in which types and characteristics of CSA may be differentially associated with HIV sexual risk behaviors and has yet to focus on a random sample of women in substance use treatment. The absence of such research is notable given prior findings that CSA types and characteristics are differentially associated with long-term risks, including substance use and mental health problems, among women with substance use problems. For example, CSA involving force and family members has been found to be associated with increased risk of PTSD among women in methadone treatment (Engstrom, El-Bassel, & Gilbert, 2012). CSA severity has been found to be associated with days of cocaine use among women who recently completed inpatient treatment for cocaine dependence (Hyman et al., 2008).
These findings further support the importance of addressing this substantive gap in knowledge. In a review of 73 studies that examined relationships between CSA and sexual risk behaviors, Senn, Carey and Vanable (2008) note that the absence of a common definition of CSA is a major limitation in the body of knowledge regarding relationships between CSA and HIV sexual risk behaviors. Definitional requirements regarding age at the time of sexual activity, type of relationship and age differences between those involved in the sexual activity, type of sexual activity, the presence of force, and the coding of CSA variables (i.e., dichotomous or continuous coding) frequently differ across studies, making it difficult to compare findings in this area. This study aims to strengthen the existing body of knowledge, and future efforts to achieve a common definition of CSA, by examining ways in which observed relationships between CSA and HIV sexual risk behaviors among women in methadone treatment may differ depending on how CSA is defined and coded. It also considers childhood physical abuse in the analyses, as research has shown that this experience is often associated with both CSA and HIV sexual risk behavior; however, it is frequently overlooked in CSA-HIV sexual risk behavior research (For discussion see Senn et al., 2008). --- CONCEPTUAL FRAMEWORK In order to explicate connections between CSA and HIV sexual risk behaviors, several models that conceptualize mediated relationships between CSA and HIV risk have been proposed (Malow, Dévieux, & Lucenko, 2006;Meade, Kershaw, Hansen, & Sikkema, 2009;Miller, 1999;Plotzker et al., 2007;The NIMH Multisite HIV Prevention Trial Group, 2001;Wyatt et al., 2004). For example, Miller (1999) posits that CSA contributes to problems related to substance use, mental health, and sexual risk taking, which, in turn, contribute to increased HIV risk.
Wyatt and colleagues (2004) describe the relationship between CSA and sexual risks as mediated by mental health problems and revictimization. The Miller (1999) and Wyatt et al. (2004) models are augmented by Malow and colleagues (2006), who postulate that in addition to substance use, mental health problems, and revictimization, assertiveness and self-efficacy are also important mediators between CSA and HIV sexual risk. These conceptual models and prior empirical findings suggest that mental health concerns, substance use, and revictimization are likely to be key factors, and potential mediators, in the CSA-HIV sexual risk relationship. As such, they should be included in analytic models that examine CSA-HIV sexual risk behavior relationships. It should be noted that although assertiveness and self-efficacy are not available in the current dataset, prior research has found that depression is negatively associated with them (For discussions see Allen & Badcock, 2003;Haaga, Dyck, & Ernst, 1991;Maciejewski, Prigerson, & Mazure, 2000). Including depression in the multiple regression models in the current study facilitates important examination of the role of depression in HIV sexual risk behaviors (Dolezal et al., 1998;Engstrom et al., 2011;Grella et al., 1996;Malow et al., 2006;Miller, 1999;Plotzker et al., 2007;Schilling, El-Bassel, Gilbert, & Glassman, 1993;Schönnesson et al., 2008;Williams & Latkin, 2005), as well as a degree of statistical control for assertiveness and self-efficacy. There are several additional individual, relational and situational factors that are salient among women in substance use treatment and are associated with HIV sexual risk behavior.
These factors include one's HIV status, partner HIV risk status, cohabitation with partner, social support, recent incarceration and homelessness (e.g., Corsi, Kwiatkowski, & Booth, 2006;El-Bassel, Gilbert, Wu, Go, & Hill, 2005;Engstrom et al., 2011;Epperson, Khan, El-Bassel, Wu, & Gilbert, 2011;Grella et al., 1996;Miller, 1999;Miller & Paone, 1998;Paxton, Myers, Hall, & Javanbakht, 2004). In recognition of prior empirical and conceptual work related to the CSA-HIV sexual risk behavior relationship (e.g., Malow et al., 2006;Meade et al., 2009;Miller, 1999;Plotzker et al., 2007;The NIMH Multisite HIV Prevention Trial Group, 2001;Wyatt et al., 2004), the current analyses draw upon a multisystemic conceptualization of the relationship between CSA and HIV sexual risk behaviors. This multisystemic conceptualization posits that individual factors, such as HIV status, mental health concerns and substance use; relational factors, such as exposure to intimate partner violence, partner's HIV risk status, and social support; and situational factors, such as recent homelessness and incarceration, are likely to be important covariates, and potential mediators, in the relationship between CSA and HIV sexual risk behaviors among women in substance use treatment. --- STUDY AIMS Informed by this study's multisystemic conceptual framework and the need to address substantive and methodological gaps in research in this area, the primary aim of the current analyses is to examine associations between CSA and HIV sexual risk behaviors and any differences in the observed relationships between CSA and HIV sexual risk behaviors based on CSA coding, types and characteristics among a random sample of women in methadone treatment in New York City. 
It is anticipated that CSA involving force, family and greater severity will be associated with increased HIV sexual risk behaviors (Beitchman, Zucker, Hood, DaCosta, et al., 1992;Engstrom et al., 2012;Hyman et al., 2008;Rodriguez, Ryan, Rowan, & Foy, 1996). As a secondary aim, we examine mediation in CSA-HIV sexual risk behavior relationships when indicated by findings from the primary analyses. --- METHODS --- Recruitment of random sample This study involves secondary analysis of baseline data from the Women's Health Project (WHP), which focused on intersections between problematic substance use, intimate partner violence (IPV) and HIV among women in methadone treatment in New York City (El-Bassel et al., 2005). To recruit a random sample of women in a large methadone treatment program, the WHP used random number generation in SPSS 7.0 and selected 753 of the 1,708 women enrolled in the program between November and December 1997. A total of 559 women completed screening interviews to determine study eligibility; 416 women were eligible and agreed to participate in the study. Women enrolled in methadone treatment for at least 3 months and involved in a sexual, dating, cohabitation, childcare or economic relationship with someone described as a boyfriend, girlfriend, spouse, regular sexual partner, or father of their children in the past year were eligible to participate in the parent study. A total of 26 women who reported all-female main partners were excluded from the present analyses due to distinctions in their HIV sexual risk behaviors. --- Procedures Data were collected between 1997 and 2000. Following informed consent processes, in-person interviews were conducted in English and Spanish by trained female interviewers, who administered interview questions and recorded participants' responses. The institutional review boards at Columbia University and at the methadone treatment program approved the study.
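For illustration, the random selection step described above (753 of the 1,708 enrolled women) amounts to a simple random sample without replacement. The parent study used SPSS 7.0's random number generation; the function name and seed below are stand-ins for illustration only:

```python
import random

# Hypothetical sketch of simple random sampling without replacement,
# analogous to the WHP's selection of 753 of 1,708 enrolled women.
# The study used SPSS 7.0; the names and seed here are illustrative.

def draw_random_sample(enrolled_ids, k, seed=None):
    """Return k distinct ids drawn uniformly at random."""
    rng = random.Random(seed)
    return rng.sample(enrolled_ids, k)

selected = draw_random_sample(list(range(1, 1709)), 753, seed=1997)
```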
--- Measures Childhood sexual abuse-The Childhood Sexual Abuse Interview (CSAI; El-Bassel, Gilbert, & Frye, 1998) focuses on sexual activities at age 15 or younger and includes 11 items based on interview schedules by Finkelhor (1979) and Sgroi (1982). The full scale of 11 items has a Cronbach's alpha of .87 with this sample (n=366). Six items inquire about touching/exposure (Did anyone ever show you their private sexual parts? Did anyone masturbate or "get off" in front of you? Did anyone ever touch your body, including your breasts or private sexual parts, or attempt to "get you off" or masturbate you sexually? Did anyone try to have you get them off, or touch their body in a sexual way? Did anyone ever rub against your body in a sexual way? Did anyone attempt to have sex with you?). Three items inquire about penetration (Did anyone have intercourse with you? Did anyone ever put their penis in your mouth or put their mouth on your private sexual parts? Did anyone ever put their penis or another object in your butt or behind?). Single items inquire about picture-taking (Did anyone ever take pictures of you while you were naked or having sex with someone?) and other sexual activity (Did you have any other sexual contact other than what I've asked you about?). Consistent with definitions used in prior research (For discussion see Senn et al., 2008), the activity was classified as "abuse" when it involved someone 5 or more years older, force, or a relative. In order to examine ways in which CSA coding, types and characteristics affect the observed relationships between CSA and HIV sexual risk behaviors, CSA was coded in three ways. First, we used dichotomous coding to reflect any experience of childhood sexual abuse across the 11 items of the CSAI.
Second, we used continuous coding of two CSAI subscales which emerged from factor analysis with Varimax rotation: touching/exposure (range 0-6, based on the 6 items described above; Cronbach's alpha with this sample=.90, n=374) and penetration (range 0-3, based on the 3 items described above; Cronbach's alpha with this sample=.70, n=377). Third, we created a 5-level, mutually-exclusive categorical variable that assessed no sexual abuse, sexual abuse with someone 5 or more years older, sexual abuse involving force, sexual abuse involving a relative, and sexual abuse involving force and a relative. In the final category, 95.0% of the cases involved force and a relative simultaneously and 5.0% of the cases involved force and a relative across discrete events. HIV sexual risk-We focused on two types of HIV sexual risk behaviors in the past six months. First, we included inconsistent condom use with up to three main partners. To begin, we dichotomized responses to questions regarding sexual activity in the past six months (In the past 6 months, how often have you had vaginal sex with this partner? In the past 6 months, how often have you had anal sex with this partner?). Categorical response options ranged from "not once in the past six months" to "6 or more times per week or 150 or more times in the past six months." Next, we dichotomized responses to separate questions regarding frequency of condom use with vaginal and anal sex (In the past six months, how often did you use a male condom with this partner? In the past six months, how often did you use a female condom with this partner?). Responses of "never," "less than half of the time," "about half of the time," and "more than half of the time" were coded as "inconsistent condom use" and responses of "always" were coded as "consistent condom use."
We then combined these responses to identify vaginal or anal sex with inconsistent male or female condom use, as applicable, for each type of sexual activity across up to three main partners. The association between consistent condom use and reduced sexually transmitted infections (STI) among men and women seeking STI care supported this dichotomous coding of condom use (Shlay, McClung, Patnaik, & Douglas, 2004). Second, we included substance use with sexual activity across up to three main partners (In the past six months, how often were you high on or had been using any drug when you had vaginal, anal or oral sex with this partner? In the past six months, how often were you high on or had been using heroin when you had vaginal, anal or oral sex with this partner? In the past six months, how often have you had vaginal, anal or oral sex with this partner after you had consumed four or more drinks?). Any report of vaginal, anal or oral sex while high on or using any drug, heroin, crack or cocaine, or after consuming 4 or more drinks was dichotomously coded as "drug (or heavy alcohol) use with sexual activity" for each substance. --- Covariates Sociodemographic characteristics-Because of associations between sociodemographic characteristics and HIV risk behaviors, they are included in the multiple regression analyses in this study (Corsi et al., 2006;Dunlap, Golub, Johnson, & Wesley, 2002;Dunlap, Stürzenhofecker, Sanabria, & Johnson, 2004;Engstrom et al., 2011;Grella et al., 1996;Hoffman, Klein, Eber, & Crosby, 2000;Paxton et al., 2004). Sociodemographic variables included participants' age, race/ethnicity, highest grade completed in school, legal marital status, and annualized average monthly income. We used the log of annualized monthly income in the multiple logistic regression analyses due to the wide range of reported values for this variable (i.e., $480.00-$72,000.00). 
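To make the three CSA coding schemes described above concrete, the sketch below derives all three from hypothetical item-level responses. The item keys, variable names, and the precedence order used to build the 5-level category are our assumptions for illustration, not the CSAI's actual specification:

```python
# Hypothetical illustration of the three CSA coding schemes described
# above. Item keys and the precedence used for the 5-level category
# are assumptions for illustration only.

TOUCHING_ITEMS = [f"touch{i}" for i in range(6)]   # 6 touching/exposure items
PENETRATION_ITEMS = [f"pen{i}" for i in range(3)]  # 3 penetration items

def code_csa(items, older_5plus, force, relative):
    """items: dict mapping the 11 CSAI item keys to 0/1 responses.
    older_5plus/force/relative: whether any endorsed activity met that
    abuse criterion (someone 5+ years older, force, a relative)."""
    is_abuse = any(items.values()) and (older_5plus or force or relative)

    # Scheme 1: dichotomous -- any abuse across the 11 items.
    csa_any = int(is_abuse)

    # Scheme 2: continuous subscale counts (ranges 0-6 and 0-3).
    touching = sum(items[k] for k in TOUCHING_ITEMS)
    penetration = sum(items[k] for k in PENETRATION_ITEMS)

    # Scheme 3: 5-level, mutually-exclusive category.
    if not is_abuse:
        category = "none"
    elif force and relative:
        category = "force_and_relative"
    elif force:
        category = "force"
    elif relative:
        category = "relative"
    else:
        category = "older_5plus"
    return csa_any, touching, penetration, category
```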
Posttraumatic stress disorder-To measure PTSD, we relied on the 49-item Posttraumatic Stress Diagnostic Scale (PDS; Foa, 1995), which follows DSM-IV diagnostic criteria (American Psychiatric Association, 2000) and has reported sensitivity of 82.0% and specificity of 76.7%. Depression-To measure depression, we dichotomized scores on the widely-used Brief Symptom Inventory depression subscale (6-item scale with published alpha coefficient of .85 yields possible range of 0-4; actual range with this sample: 0-3.83; alpha coefficient with this sample: .86, n=390; Derogatis, 1993) to reflect values that were above and below 1.865, the published median value in psychiatric outpatient norms for women. Substance use-Based on responses to substance-specific questions regarding frequency of use in the past 6 months, any reported use of heroin, cocaine, crack, marijuana, non-prescription stimulants, non-prescription narcotics, or non-prescription tranquilizers, hypnotics, or barbiturates was coded as "drug use" in this dichotomous variable. The same coding was applied to any alcohol use in the past 6 months. Years in methadone treatment-A single, continuously-coded question, "For how many years altogether have you been on methadone?," assessed years in methadone treatment. Intimate partner violence-Across up to three main partners, any positive response on the sexual coercion, physical assault, injury, and psychological aggression items of the Revised Conflict Tactics Scale (CTS2; Straus, Hamby, Boney-McCoy, & Sugarman, 1996) was coded dichotomously to reflect the presence of IPV.
Childhood physical abuse-To measure childhood physical abuse, we drew upon two separate questions asking participants if, before they were 18 years old, they were punched, pushed, hit, shoved, kicked, whipped, beaten, or suffered painful physical injuries, all beyond what is considered discipline, by parents, caretaker or guardian; or if they were choked, strangled, or threatened with a knife, gun, or any other weapon by parents, caretaker or guardian. Affirmative responses to either of these questions were coded as "childhood physical abuse." HIV, main partner HIV risk status, and cohabitation with partner-Participants' self-reported responses to a question regarding the result of their most recent HIV test were coded dichotomously. The presence of any of the following factors across up to three main partners was coded as partner risk: HIV-positive status; other sexually-transmitted disease in the past 6 months; sexual activity with other partners; sexual activity with someone who is HIV-positive or uses injection drugs; or sexual activity in exchange for money or drugs. Participants' reports of living with a partner were dichotomously coded. Social support-We dichotomously coded the 12-item Multidimensional Scale of Perceived Social Support (Zimet, Dahlem, Zimet, & Farley, 1998) to indicate agreement or strong agreement (>2.45, range 0-4) with having social support from family, friends, and a significant other. Cronbach's alpha of the 12-item continuous scale with this sample is .88 (n=378). Incarceration and homelessness-Two single-item questions inquired about incarceration or homelessness in the last 6 months. --- Data Analysis Plan Reliability analysis was conducted with fully-observed data using IBM SPSS Statistics Version 20. Multiple imputation of missing data was conducted with the ICE (Imputation by Chained Equations) program in Stata/SE 10.1, which was used for all descriptive and logistic regression analyses (Rubin, 1987;Schafer, 2000;StataCorp, 2007).
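Several of the measures above report Cronbach's alpha with this sample (e.g., .86 for the depression subscale, .88 for social support). For reference, the reliability statistic can be computed as a minimal sketch with made-up item scores (not study data):

```python
# Cronbach's alpha measures internal consistency of a multi-item scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
# The data used in the test are made up; they are not study data.

def cronbach_alpha(rows):
    """rows: list of per-participant item-score lists, each of length k."""
    k = len(rows[0])

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [pvar([r[i] for r in rows]) for i in range(k)]
    total_var = pvar([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```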
Univariate analyses were conducted in order to describe the sample and identify the prevalence of CSA, HIV sexual risk behaviors, and covariates. Bivariate analyses were conducted to examine relationships between CSA and HIV sexual risk behaviors for each of the three CSA coding schemes described earlier. In order to examine relationships between CSA and HIV sexual risk behaviors while adjusting for potential confounders, multiple logistic regression analyses were conducted; a set of multiple logistic regression analyses predicting each HIV sexual risk behavior was conducted for each CSA coding scheme. While Bonferroni correction is often applied when conducting multiple analyses, its limitations and risk of inflating Type II errors prompted us to retain the conventional p-value of .05 in these analyses (For additional discussion, see Perneger, 1998). Analyses are displayed in Tables 1-4. We conducted subsequent path analyses using Mplus Version 7.1 when the comparison of bivariate and multiple logistic regression findings suggested the possibility of mediation in the CSA-HIV sexual risk behavior relationship. When the statistical significance of the CSA-HIV sexual risk behavior relationship at the bivariate level was absent in the multiple logistic regression analyses, we conducted path analysis in which statistically-significant predictors of the dependent variable were entered as possible mediators (Baron & Kenny, 1986). In each of these situations, we examined three models: 1) the direct effect of CSA on the dependent variable; 2) the direct effects of each of the potential mediators on the dependent variable; and 3) a final model of the direct effects of CSA on the potential mediators and on the dependent variable and indirect effects of CSA on the dependent variable through the potential mediators (Kline, 2011). A weighted least squares means- and variance-adjusted (WLSMV) estimator was used due to the binary nature of the variables.
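The mediation step can be illustrated with a percentile-bootstrap confidence interval for the indirect (a×b) effect. The study estimated these models in Mplus with WLSMV and 5,000 resamples; the sketch below substitutes ordinary least-squares slopes and made-up data, so it shows only the resampling logic, not the study's estimator:

```python
import random

# Sketch of a percentile-bootstrap test of an indirect effect: resample
# the data, re-estimate the a-path (X -> M) and b-path (M -> Y) as
# simple least-squares slopes, and take the 2.5th/97.5th percentiles of
# the a*b products. OLS slopes and all names here are simplifications.

def slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def indirect_ci(x, m, y, n_boot=5000, seed=0):
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        xb, mb, yb = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        try:
            estimates.append(slope(xb, mb) * slope(mb, yb))
        except ZeroDivisionError:
            continue  # degenerate resample with no variance in x or m
    estimates.sort()
    # The indirect effect is deemed significant when 0 falls outside the CI.
    return (estimates[int(0.025 * len(estimates))],
            estimates[int(0.975 * len(estimates))])
```

Under this criterion, an interval such as (.007, .106) excludes zero and supports mediation, while one such as (-.001, .117) does not, mirroring the decision rule applied in the results.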
This method handles missing data as a function of the observed covariates and not the observed outcomes (Muthén & Muthén, 1998-2012); however, all 390 participants were included in the path models. The following fit indices and values indicating good fit were used: chi-square statistic (p > .05), Comparative Fit Index (CFI; >.90), and Root Mean Square Error of Approximation (RMSEA; <.06 as good fit, <.10 as cutoff for mediocre fit; Hu & Bentler, 1999;Kenny, 2014). A non-significant chi-square value (p > .05) is generally useful in representing good fit for sample sizes of 75-200; however, in larger samples (i.e., sample size of 400 or more), the chi-square value is regularly statistically significant (Kenny, 2014). Additionally, it should be noted that RMSEA estimates with small sample sizes and, in particular, small degrees of freedom often falsely indicate a poor-fitting model (Kenny, Kaniskan, & McCoach, 2014). Further, some scholars have argued that there are no "golden rules" for interpreting fit indices (Marsh, Hau, & Wen, 2004). In order to test mediation, bootstrap analyses were conducted, resampling 5,000 samples and examining the standardized confidence intervals (CI) of the indirect effects of CSA on the dependent variable through the potential mediators (Preacher & Hayes, 2008). The indirect effect is significant if zero is not included in the 95% confidence interval (Preacher & Hayes, 2008). --- RESULTS --- Sample Characteristics As displayed in Table 1, the majority of the participants were Latina/Hispanic (46.9%) or African American/Black (31.3%), single, never married (46.2%), involved with one main intimate partner (77.4%), and residing with an intimate partner (61.8%). Approximately half of their main intimate partners were at risk for HIV (50.1%). Participants' mean age was 39.9 years (SE=.34), mean level of education was 11.0 years (SE=.13), and mean annual income was $10,228 (SE=491).
In the past 6 months, 9.5% of the women were homeless and 5.9% were incarcerated. More than half of the participants experienced CSA (55.8%) and more than a quarter of the participants experienced CSA involving force and family (27.5%). Touching/exposure was the most prevalent type of CSA, affecting 51.8% of the participants. Childhood physical abuse was reported by 37.3% of the participants and most often co-occurred with CSA (27.4%). More than three-quarters of the participants experienced IPV in the past 6 months (77.3%). A total of 27.8% of participants met diagnostic criteria for PTSD and 14.9% experienced depression. Alcohol and drug use in the past 6 months was reported by 49.5% and 63.1% of the participants, respectively. As shown in Table 1, the most common HIV sexual risk behavior in the past 6 months was inconsistent condom use with vaginal sex (66.3%), followed by sexual activity while high on or using any drug (36.6%), sexual activity while high on or using heroin (22.2%), sexual activity while high on or using crack or cocaine (21.3%), sexual activity after consuming 4 or more drinks (19.4%), and inconsistent condom use with anal sex (12.1%). --- CSA Coding Scheme 1: Dichotomously-Coded Childhood Sexual Abuse and HIV Sexual Risk Behaviors Dichotomously-coded CSA significantly predicted just one HIV sexual risk behavior, as displayed in Table 2. At the bivariate level, a statistically-significant relationship was found between dichotomously-coded CSA and drinking four or more drinks prior to sexual activity. Women who reported CSA were nearly twice as likely to report drinking four or more drinks prior to sex (OR=1.84, CI=1.06, 3.19, p=.030). This relationship became statistically insignificant in the multiple logistic regression analysis, which prompted further analysis to test for possible mediation in this relationship. 
Path analysis was used to examine the following possible mediators: having a partner at risk for HIV, drug use, and exposure to IPV. The direct effect model indicated a significant, direct relationship between CSA and drinking four or more drinks prior to sexual activity (β=.18, p=.02); however, the model was just-identified and model fit could not be assessed (χ²(0)=0.00, p<.001). Results of the indirect model indicated relatively poor fit with the data (χ²(4)=16.30, p=.003; CFI=.81; RMSEA=.09), with statistically-significant relationships between CSA and having a partner with HIV risk (β=.16, p=.001) and between having a partner with HIV risk and drinking four or more drinks prior to sex (β=.21, p=.003). There were statistically-significant relationships between CSA and IPV (β=.11, p=.02) and between IPV and drinking four or more drinks prior to sex (β=.21, p=.003). While the relationship between CSA and drug use was not statistically significant (β=.10, p=.07), the relationship between drug use and drinking four or more drinks prior to sex was statistically significant (β=.34, p<.001). Figure 1 illustrates the final model (χ²(3)=14.25, p=.003; CFI=.83; RMSEA=.10). The fit of this final model was poor, and while not unexpected given the sample size and small degrees of freedom (Kenny et al., 2014), results should be interpreted with the fit in mind. When having a main partner with HIV risk, drug use, and IPV were added to the model, the direct relationship between CSA and drinking four or more drinks prior to sex was no longer statistically significant (β=.09, p=.20), indicating that full mediation is present. A statistically-significant, positive relationship was found between CSA and having a main partner with HIV risk (β=.15, p=.002) and between having a main partner with HIV risk and drinking four or more drinks prior to sex (β=.18, p=.01).
The indirect effect was significant (β=.03, 95% bootstrap CI of .007 to .106), indicating that having a main partner with HIV risk mediates the relationship between CSA and drinking four or more drinks prior to sexual activity. As in the indirect model, a statistically-significant, positive relationship was found between CSA and IPV (β=.10, p=.04) and between IPV and drinking four or more drinks prior to sex (β=.28, p=.001). However, in the final model, the indirect effect was not statistically significant (β=.06; 95% bootstrap CI of -.001 to .117), indicating that IPV is not a mediator. Finally, CSA did not significantly predict drug use; however, there was a significant, positive relationship between drug use and drinking four or more drinks prior to sex (β=.33, p<.001). Thus, having a partner with HIV risk was the only mediator in the relationship between CSA and drinking four or more drinks prior to sex. Together, all of the variables accounted for 25% of the variance in drinking four or more drinks prior to sexual activity. --- CSA Coding Scheme 2: Childhood Sexual Abuse Involving Touching/Exposure and Penetration and HIV Sexual Risk Behaviors CSA involving touching/exposure was associated with increased risk of heroin use with sexual activity, even when adjusting for potential confounders (OR=1.19, CI=1.01, 1.40, p=.032), as indicated in Table 3. CSA involving touching/exposure or penetration was not associated with any other HIV sexual risk behaviors we examined.
--- CSA Coding Scheme 3: Childhood Sexual Abuse Involving Force, a Relative, or Someone Five Years Older and HIV Sexual Risk Behaviors The only statistically-significant findings in the relationships between this CSA coding scheme and HIV sexual risk behaviors were as follows: CSA involving force and a relative and CSA involving someone 5 or more years older than the participant were both associated with heightened risk of drinking four or more drinks prior to sexual activity, as shown in Table 4. When adjusting for potential confounders, these relationships became statistically insignificant, which prompted further analyses to test for mediation. Path analysis was again used to examine whether having a main partner with HIV risk, drug use, and IPV mediate these relationships in two separate path analyses examining 1) CSA involving force and a relative and 2) CSA involving someone 5 or more years older. In order to test these models, the 5-level CSA variable was dummy coded (0/1) into two separate independent variables, which were used in their respective models: 1) indicating whether a participant had experienced CSA involving force by a relative (0=no; 1=yes), and 2) indicating whether the participant had experienced CSA involving someone 5 or more years older (0=no; 1=yes). The direct effect model for CSA involving force and a relative was just-identified (χ²(0)=0.00, p<.001) and indicated a significant, direct relationship with drinking four or more drinks prior to sexual activity (β=.18, p=.02). Results of the indirect model (χ²(4)=17.87, p=.001; CFI=.74; RMSEA=.09) indicated a statistically-significant relationship between CSA involving force and a relative and having a main partner with HIV risk (β=.17, p=.001) and between having a main partner with HIV risk and drinking four or more drinks prior to sex (β=.14, p=.006).
There was a statistically-significant relationship between IPV and drinking four or more drinks prior to sex (β=.14, p=.006) and between drug use and drinking four or more drinks prior to sex (β=.19, p<.001). CSA involving force and a relative did not significantly predict drug use (β=-.003, p=.96). The final model examining both the direct and indirect paths is presented in Figure 2 (χ²(3)=15.17, p=.002; CFI=.81; RMSEA=.10). Again, model fit was poor, though not surprising given the sample size and small degrees of freedom (Kenny et al., 2014); however, the fit should be taken into consideration when interpreting the findings. The direct relationship between CSA involving force and a relative and drinking four or more drinks prior to sexual activity remained statistically significant (β=.13, p=.04). There was a statistically-significant relationship between CSA involving force and a relative and having a main partner with HIV risk (β=.17, p<.001) and between having a main partner with HIV risk and drinking four or more drinks prior to sex (β=.18, p=.01). The indirect effect was significant (β=.03, 95% bootstrap CI of .011 to .129), indicating that having a main partner with HIV risk partially mediates the relationship between CSA involving force and a relative and drinking four or more drinks prior to sex. Additionally, statistically-significant, positive relationships were found between IPV and drinking four or more drinks prior to sexual activity (β=.05, p=.04) and between drug use and drinking four or more drinks prior to sex (β=.28, p=.001). Thus, having a partner with HIV risk was the only mediator in the relationship between CSA involving force and a relative and drinking four or more drinks prior to sex.
Together, all of the variables accounted for 26% of the variance in drinking four or more drinks prior to sex. Next, we examined whether CSA involving someone 5 or more years older was directly related to HIV sexual risk behaviors and whether that relationship was mediated by having a main partner with HIV risk, substance use, and/or intimate partner violence. In contrast to the findings of the bivariate logistic regression analyses, the direct path model found no statistically significant direct relationship between CSA involving someone 5 or more years older and drinking four or more drinks prior to sexual activity (β = .10, p = .14). Thus, further path analysis to test for mediation was not pursued. --- Post-Hoc Analyses: Childhood Physical and Sexual Abuse and HIV Sexual Risk Behaviors The analyses yielded unexpected findings regarding reduced risk of inconsistent condom use and nonspecific drug use with sexual activity (i.e., "any drug use with sexual activity") among women who reported childhood physical abuse, as displayed in Tables 2 and 3. To further understand these findings and the potential that childhood physical and sexual abuse may interact to influence them, we conducted post-hoc analyses to examine relationships between childhood physical abuse and sexual abuse, alone and in combination, and inconsistent condom use and drug use with sexual activity. Using a 4-level categorical variable (0 = no childhood physical or sexual abuse, 1 = childhood sexual abuse without physical abuse, 2 = childhood physical abuse without sexual abuse, and 3 = childhood physical and sexual abuse), we found no statistically significant relationships between childhood physical and sexual abuse, alone or in combination, and inconsistent condom use during vaginal sex or anal sex, in the bivariate or multiple logistic regression analyses.
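The 4-level abuse variable described above is simple to construct from the two binary indicators. A minimal sketch on simulated data (the prevalence values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 390
cpa = rng.binomial(1, 0.3, n)  # childhood physical abuse indicator (simulated)
csa = rng.binomial(1, 0.4, n)  # childhood sexual abuse indicator (simulated)

# 0 = neither, 1 = CSA without CPA, 2 = CPA without CSA, 3 = both
abuse4 = np.select(
    [(cpa == 0) & (csa == 0), (cpa == 0) & (csa == 1), (cpa == 1) & (csa == 0)],
    [0, 1, 2],
    default=3,
)
counts = np.bincount(abuse4, minlength=4)
print(dict(zip(["neither", "CSA only", "CPA only", "both"], counts.tolist())))
```

In the post-hoc models, this variable would enter a logistic regression as a categorical predictor with "neither" as the reference level, which is what allows abuse types alone and in combination to be compared against no abuse.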
As with the models predicting inconsistent condom use with anal sex displayed in Tables 2, 3 and 4, this post-hoc model also remained statistically insignificant. Additionally, there were no statistically significant relationships in the bivariate or multiple logistic regression analyses between childhood physical and sexual abuse, alone or in combination, and nonspecific drug use with sexual activity. --- DISCUSSION This study is, to our knowledge, the first to examine ways in which CSA coding, type and characteristics may affect observed relationships between CSA and HIV sexual risk behaviors among women in substance use treatment. Although it finds statistically significant, often mediated, relationships between CSA and HIV sexual risk behaviors with main intimate partners, the findings regarding associations between CSA and HIV sexual risk behaviors are more limited than anticipated, particularly given the scope of analyses conducted. There are multiple ways to understand these unexpected findings. First, prior research with women in methadone treatment also found no association between CSA and condom use or number of male sex partners; however, it found that other factors, including race, alcohol use, residing with a partner, suicidality, and HIV status, predicted sexual risk behaviors (Grella et al., 1996). Similarly, our findings point to the significance of residing with a partner, alcohol use, HIV-negative status, IPV exposure, and lack of social support as key predictors of inconsistent condom use during vaginal sex with main intimate partners. Other individual, relational, and situational factors, including depression, alcohol and drug use, having a partner with HIV risk, and recent incarceration, also differentially predicted having sex with main partners while under the influence of drugs or alcohol.
Together with prior research with women in methadone treatment and women recruited from the community who used drugs (Grella et al., 1996; Medrano et al., 1999), our findings indicate that the role of CSA in most of the HIV sexual risk behaviors examined in this study may be less salient than current psychological, substance use, relational, and situational factors. Second, it is possible that methodological issues influenced the findings. This study focused only on sexual risk behaviors with main partners and considered these risks across three main partners. It may be that associations between CSA and sexual risk behaviors differ between main and secondary partners and that this differential association was not captured in our analyses (Sangi-Haghpeykar, Poindexter, Young, Levesque, & Horth, 2003). Additionally, this study relied on dichotomously-coded sexual risk variables. It is possible that continuously-coded sexual risk variables may yield different findings regarding relationships between CSA and sexual risk behaviors with main partners. Finally, this study collapsed all drug use into a single category, which may have obscured the specific roles of different drugs in CSA-sexual risk behavior relationships (Miller, 1999). In contrast to expectations based on prior research (Beitchman et al., 1992; Rodriguez et al., 1996), this study found that touching/exposure, and not penetration or other CSA measures, predicted increased risk of heroin use with sexual activity. This unexpected finding suggests that unobserved contextual aspects of these experiences, which may include age, relationship, frequency, duration and circumstances of the touching/exposure, have important bearing on the long-term sequelae of CSA.
Similar to findings by Wyatt and Peters (1986a) that different definitions of CSA result in variations in prevalence estimates, this study indicates that definitional differences also affect findings regarding observed associations between CSA and sexual risk behaviors. Further, this study suggests that childhood sexual and physical abuse may interact in ways that are important to further understand in relation to HIV sexual risk behaviors among this population. The most consistent finding regarding the CSA-HIV sexual risk behavior relationship was the statistical significance of the association between CSA and increased likelihood of drinking four or more drinks prior to sex with main partners. This finding held with dichotomously-coded CSA and with CSA involving force and a relative in both logistic regression and path analyses. When dichotomously coded, the CSA-heavy alcohol use prior to sex relationship was mediated by having a partner with HIV risk. Having a partner with HIV risk also partially mediated the relationship between CSA involving force and a relative and heavy alcohol use prior to sexual activity. Drug use and IPV were not mediators, but they were associated with drinking four or more drinks prior to sexual activity; and the total combination of variables explained a considerable portion of the variance in consuming four or more drinks prior to sex. There are several ways to understand the links between CSA, involvement with partners at risk for HIV, and drinking four or more drinks prior to sex. Involvement with an intimate partner at high risk for HIV may reflect engagement in a high-risk social network and hindered ability to identify and address risks, sexual or otherwise, among women with histories of CSA (Miller, 1999). It may also reflect continuation of a high-risk sexual trajectory that was initiated through early sexual abuse (Browning & Laumann, 1997).
Additionally, women may use alcohol prior to sex as a form of avoidant coping with their partner's risks, particularly when the threat of violence is present (Lazarus & Folkman, 1984; Schiff, El-Bassel, Engstrom, & Gilbert, 2002). While these possibilities can facilitate understanding of the mediated relationship between CSA and heavy alcohol use prior to sex, there remains a need for additional research to further understand the relatively consistent associations between CSA and heavy alcohol use prior to sex and the relatively limited associations between CSA and drug use with sex among women in methadone treatment. In this study, IPV was the most consistent predictor of HIV sexual risk behavior. This finding is consistent with prior cross-sectional and longitudinal research among women in methadone treatment (El-Bassel et al., 2005; Engstrom et al., 2011). In the context of violence, women may fear retaliation for requests to use condoms (Wingood & DiClemente, 1997). Additionally, they may use drugs and alcohol with sexual activity to manage psychological and physical trauma associated with victimization (Briere, 1992; Kilpatrick, Acierno, Resnick, Saunders, & Best, 1997). The findings underscore the critical importance of ongoing efforts to design and test interventions to address co-occurring substance use and IPV as part of HIV prevention (for additional discussion, see Amaro et al., 2007; Gilbert et al., 2006; Morrissey et al., 2005). This study makes novel contributions to understanding relationships between CSA and sexual risk behaviors with main partners among women in methadone treatment; however, it is not without its limitations, as discussed earlier and further addressed here. While the multiple questions regarding types of sexual activities were a strength of the CSA measure (Wyatt & Peters, 1986b), it relied on retrospective recall.
Although events that occurred at a sufficient age are likely to be recalled (Brewin, Andrews, & Gotlib, 1993), the personal nature of such disclosure may have resulted in underestimated CSA prevalence in this study. Further, emerging trends in reported HIV diagnoses among women in New York City indicate an overall decrease in the annual number of HIV diagnoses reported, with a ten-fold decrease in the number of women whose diagnoses were attributed to injection drug use between 2001 and 2011 (New York City Department of Health and Mental Hygiene, 2008, 2012). This study's data, which were gathered between 1997 and 2000, may reflect higher HIV risks when compared to current data. Interpretation of study questions may have also affected the study's findings. In particular, the CPA items may have resulted in underestimates of CPA prevalence, as one of the items required participants to make a determination regarding experiences that exceeded discipline. Finally, the cross-sectional data suggest caution when making causal inferences regarding the correlates of sexual risk behaviors and mediators in the CSA-HIV relationships (Kazdin & Nock, 2003). --- CONCLUSION This study finds that CSA type and characteristics are differentially associated with consuming four or more drinks prior to sexual activity and using heroin with sexual activity. Additionally, the study finds that having a main partner with HIV risk mediates relationships between both any CSA and CSA involving force and a relative and drinking four or more drinks prior to sex. Although women with histories of CSA are at heightened risk of having sex under the influence of alcohol and, depending on the CSA characteristics, having sex under the influence of heroin, the associations between CSA and sexual risk behaviors are more limited than expected, especially in light of the numerous analyses conducted in this study.
The findings suggest that IPV, polysubstance use, depression, social support, recent incarceration, and relational contexts are salient factors in HIV sexual risk behaviors. As such, they highlight the critical importance of further research to develop and test multifaceted, comprehensive approaches to HIV prevention among women in methadone treatment. Dr. Louisa Gilbert is a licensed social worker with 25 years of experience developing, implementing and testing multi-level interventions to address HIV/AIDS, substance abuse, trauma, partner violence and other co-occurring issues among vulnerable communities in the U.S. and Central Asia. She has served as the Co-Director of the Social Intervention Group since 1999 and the Co-Director of the Global Health Research Center of Central Asia since 2007. Her specific area of research interest has concentrated on advancing a continuum of evidence-based interventions to prevent intimate partner violence among drug-involved women and women in the criminal justice system. More recently, her funded research has also focused on identifying and addressing structural and organizational barriers in harm reduction programs to implementing evidence-based interventions to prevent overdose among drug users in Central Asia. Katherine Winham received her doctoral degree from the Kent School of Social Work at the University of Louisville, where she was awarded the John M. Houchens Prize for Outstanding Dissertation. She is a practicing social worker and licensed marriage and family therapist and holds master's degrees in both fields. With the goal of developing interventions, her research focuses on investigating relationships between victimization experiences and physical and mental health outcomes and high-risk behaviors (substance use, HIV risk behaviors) among vulnerable and underserved populations, especially women involved with the criminal justice system.
--- Figure 1. Final fitted path model with estimated regression coefficients for the direct path between CSA and four or more drinks prior to sex and as mediated by main partner with risk, drug use and intimate partner violence (standardized estimates are in parentheses). --- Figure 2. Final fitted path model with estimated regression coefficients for the direct path between CSA involving force and a relative and four or more drinks prior to sex and as mediated by main partner with risk, drug use and intimate partner violence (standardized estimates are in parentheses). --- Engstrom et al. Background: Childhood sexual abuse (CSA) is often considered an important distal factor in HIV sexual risk behaviors; however, there are limited and mixed findings regarding this relationship among women experiencing substance use problems. Additionally, research with this population of women has yet to examine differences in observed CSA-HIV sexual risk behavior relationships by CSA type and characteristics. Objectives: This study examines relationships between CSA coding, type and characteristics and HIV sexual risk behaviors with main intimate partners among a random sample of 390 women in methadone treatment in New York City who completed individual interviews with trained female interviewers. Results: Findings from logistic regression analyses indicate that CSA predicts substance use with sexual activity, with variations by CSA coding, type, and characteristics; however, the role of CSA is more limited than expected. Having a main partner with HIV risk mediates some relationships between CSA and drinking four or more drinks prior to sex. Intimate partner violence is the most consistent predictor of sexual risk behaviors.
Other salient factors include polysubstance use, depression, social support, recent incarceration, and relationship characteristics. Conclusions/Importance: The study contributes to understanding of relationships between CSA and HIV sexual risk behaviors and key correlates associated with HIV sexual risk behaviors among women in methadone treatment. It also highlights the complexity of measuring CSA and its association with sexual risk behaviors and the importance of comprehensive approaches to HIV prevention that address psychological, relational, situational, and substance use experiences associated with sexual risk behaviors among this population.
INTRODUCTION One of the significant characteristics of employment in Russia is a fairly large share of people employed in the informal sector of the economy. According to researchers, it is about 20-30% of the employed population. The self-employed in Russia find themselves in a zone of informality, working without registering their relations with the state. This is a significant group of informally employed people. Currently, the state is trying to take this group under control and formalize its relations with it in order to receive taxes from its representatives in exchange for providing a number of social guarantees. The state's initiatives find a contradictory response among the self-employed, which makes their success questionable. All this raises the problem of building a dialogue that is mutually beneficial for both the state and the self-employed, in which common ground between their interests could be found. For this purpose, it is necessary to specify the attitude of the self-employed to state initiatives based on their interests. The aim of this paper is to determine the attitude of the self-employed in Russia (on the example of St. Petersburg) to the formalization of their relationship with the state on the basis of an empirical sociological study. The practical significance of the paper is that this knowledge will be useful for the formation of state socio-economic policy toward the self-employed in St. Petersburg, where the state's initiative to build a dialogue with the self-employed is already being implemented. *Address correspondence to this author at the Saint Petersburg State University, Saint Petersburg, Russia; E-mail: t.personality21@mail.ru
Knowledge of the socio-economic characteristics and interests of the social group of the self-employed in Saint Petersburg, which will be targeted by the state initiative, will help to organize optimally the process of interaction between the state and this social group. --- LITERATURE REVIEW The problem of informal self-employment in Russia has been most actively developed over the past 1.5-2 years in the Russian scientific literature, which is associated with the preparation and implementation of state reforms in this segment of employment. Attention has been paid to such aspects of this problem as a comparative analysis of self-employment in developed and developing countries (Vishnevskaya, 2013) and the factors that form informal employment in Russia (Kaufman, 2018; Masterov, 2019). Special attention is paid to the question of how taxation of the self-employed can be made effective (Gudyaeva, Korunova and Prygunova, 2019). At the same time, it is noted that the activity of the Russian state in relation to the self-employed is aimed only at increasing tax collection, while in Western countries the emphasis is primarily on increasing the flexibility of the labor market (Baygorova, 2019). It is also indicated that social guarantees for the self-employed should be more clearly defined if their status is formalized (Orekhova, 2018). As the reform process has already produced results, these results are also being evaluated by experts (Tonkikh & Babintseva, 2020). In the world literature, the study of the informally self-employed proceeds in two directions: the study of self-employment and the study of informal employment and the informal economy. Researchers of self-employment note its unstable nature with respect to the security of income and the stability of the well-being of the self-employed person's family (Conan & Schippers, 2019; Warr, 2018).
Researchers also study the preference for self-employment among certain social groups in different countries (von Bonfsdorff, Zhan, Song and Wang, 2017; Bridges, Fox, Gaggero, & Owens, 2017; Halvorsen & Marrow-Howell, 2017; Wu, Fu, Gu & Shi, 2018). Informal employment, in turn, interests researchers in many respects, such as its scale (Imamoglu, 2016), the features of its existence in cities compared to the countryside (Bunakov, Aslanova, Zaitseva, Larionova, Chudnovskiy, & Eidelman, 2019; Rigon, Walker & Koroma, 2020), the comparison of the welfare of the formally and informally employed (Perez Perez, 2020), and the role of informality in the deployment of business cycles (Leyva & Urrutia, 2020). Studies of informal employment in Russia are also presented in the world scientific literature and relate mainly to the comparison of the situation in Russia with the situation in developed countries, the comparison of welfare under formal and informal employment in Russia (Karabchuk & Soboleva, 2020), and the existence of informal employment in Russia in terms of global trends in employment development (Dudin, Lyasnikov, Volgin, Vashalomidze & Vinogradova, 2017). At the same time, both in the world and in the Russian scientific literature, there is a lack of research on informal self-employment in Russia that would study the attitude of various groups of the informally self-employed to formalization. In this work, we try to fill this gap. --- PROBLEM STATEMENT A significant negative effect of informal employment on the informally self-employed is their alienation from the social guarantees provided by the state. This alienation, as our research has shown, is quite disturbing for the informally self-employed, forcing them to look for ways to overcome it. At present, the Russian state offers such self-employed people an effective way to gain access to the social guarantees provided by society and the state: the formalization of their relations with the state.
Moreover, a simplified registration procedure and a preferential tax scheme are offered to the self-employed. The state initiative was launched in 2016 in four pilot regions: Moscow, the Moscow and Kaluga regions, and the Republic of Tatarstan. However, as of 01.01.2019, only 2.8 thousand people had officially registered, which gave experts reason to speak of the failure of this experiment. Experts say that the informally self-employed do not want to register officially, ignoring government initiatives (Gudyaeva et al., 2019). In Saint Petersburg, the state initiative was launched on 01.01.2020. Based on the analysis of the experience of the pilot regions, it was expected that 16 thousand people would register in Saint Petersburg during the first year of the project. As of the end of February 2020, about 12 thousand people had registered, which indicates a greater interest of the self-employed in the state's proposals in St. Petersburg compared with the pilot regions. However, this figure is not so large, as there are about 1-1.5 million people employed in the informal sector of St. Petersburg, among whom more than 100 thousand are self-employed (Pokida & Zybunovskaya, 2020). It seems to us that the real situation with the attitude of the informally self-employed to the formalization of their relations with the state is quite complex. The social group of the informally self-employed is heterogeneous, and subgroups can be distinguished in accordance with the weakest point of such self-employment: alienation from the social guarantees provided by society. Different attitudes and intentions regarding state initiatives arise from this, and these attitudes are more diverse than simple acceptance or rejection of the state's offer to formalize one's status.
--- METHODOLOGY We followed the approach proposed by the ILO, which refers to informal employment as activities (work) that are not regulated by labour law and that are outside the scope of tax, statistical and insurance accounting. This approach is called the legalist approach (Gimpelson & Kapelyushnikov, 2014; Veredyuk, 2016). It postulates that informality and formality can also be combined within the functioning of the formal sector. In our study, we used this legalist approach, which allows considering as informally employed not only those for whom self-employment is the only source of income, but also those who combine self-employment with employment in the formal sector. Another methodological problem was the need to determine the empirical object of the study, namely the group of self-employed to be studied. The self-employed in Saint Petersburg are a very heterogeneous group in terms of their social characteristics. Thus, they can be classified according to permanent residence in the city (a formal sign of which is permanent registration in Saint Petersburg without temporary registration in any other region of the Russian Federation) or temporary residence in the city. In the latter case, we are talking about migrants who come to the city for work and, as a rule, provide various productive and non-productive services to the population. In our study, we focused on the self-employed who live permanently in Saint Petersburg as a relatively stable social group. At the same time, based on the theoretical grounds we adopted in defining the informally employed, we assumed that self-employment can take place not only among the informally employed, but also among the formally employed. In the latter case, informal employment occurs in the time free from their main work, but these people are also informally self-employed.
Therefore, in our study, we decided to cover both of these groups of the informally self-employed and compare their attitudes to formalization, assuming that they would differ (see Table 1). The study was conducted in February-March 2020 using semi-structured in-depth interviews. Respondents were selected using the network method and the snowball method. In total, we interviewed 36 people, of whom 18 were women and 18 were men. The interview guide contained 32 questions on various characteristics of the work activity of the self-employed, with special attention paid to questions about their readiness to formalize their activities. --- RESEARCH RESULTS The main problem of the study was to determine the basis for distinguishing subgroups within the social group of the informally self-employed that has a key influence on the attitude to formalization. We found such a basis: the presence or absence of formal employment and the corresponding presence or absence of access to the social guarantees provided by society. As our research has shown, the attitude of informally self-employed people in Saint Petersburg to formalization is negative, which confirms the opinion of experts, but it is negative in different ways for different groups. First of all, there is the group of informally self-employed people for whom self-employment is the only source of income and who have no employment in the formal sector. Our research has shown that this group is generally wary of government initiatives and takes a wait-and-see attitude towards them. These self-employed do not intend to register in the near future, but they will consider registering later if the state can offer them suitable conditions or put them in a position where they cannot refuse official registration (see Table 2).
The self-employed say that the state currently has no effective levers to force them to register. There is no concept of "illegal self-employment" in the Criminal Code, which means that their activities are not criminally punishable. In addition, the state has only a limited capacity to track their activities. But representatives of this group say that the state can start tracking advertising of their services in the media and check whether advertisers pay taxes on the income received from their activities.

Table 1: Groups of Informally Self-Employed Identified as Part of an Empirical Sociological Study (distinguished by the presence or absence of formal employment status)
- Informally self-employed whose only source of income is their informal self-employment and who have no employment in the formal sector
- Informally self-employed for whom informal self-employment is an additional source of income and who are actually employed in the formal sector
- Informally self-employed whose only source of income is their informal self-employment and who are fictitiously employed in the formal sector
- Informally self-employed students
- Informally self-employed pensioners

The results of the study show a direct correlation between the ways of finding customers and the attitude to formalization, taking into account the government's ability to track the offer of services in this sector. Those self-employed who have been working in this field for a long time and have acquired an extensive clientele have a rather negative attitude to the formalization of their relationship with the government, while those who started working recently and are forced to offer their services through the media tend to take a wait-and-see position (see Figure 1). Thus, those informally self-employed who have been working for a long time intend to continue working informally.
Therefore, state initiatives should be aimed at young representatives of this social group or those who are in middle age (see Table 3).

Table 3: Attitude to formalization by group (number of respondents: ready to formalize / categorically against / wait-and-see)
- Self-employment is the only source of income, not employed in the formal sector: 2 / 3 / 13
- Self-employment is the only source of income, fictitiously employed in the formal sector: 0 / 5 / 0
- Self-employment is an additional source of income, with a stable income from formal employment: 0 / 3 / 0
- Self-employed students: 0 / 3 / 3
- Self-employed pensioners: 0 / 4 / 0

Thus, the group of young and middle-aged self-employed is potentially ready to formalize, so the state should design its initiatives primarily in accordance with this group's interests, among which the leading place is occupied by access to social guarantees. Among the latter, the greatest interest relates to pension provision and a decent pension in the case of official registration of activities. This group is also interested in the stability of government decisions. Currently, such self-employed people in St. Petersburg say that the institutional field of their activities, formed by the state, is unstable because the state keeps adjusting its decisions. On the one hand, such adjustment is necessary in order to rationally regulate those aspects of self-employed activity that for some reason have not yet been regulated, or whose regulation has proved insufficiently rational; on the other hand, the adjustment means instability for the self-employed in their interaction with the government. In this situation, the state needs to think carefully about its initiatives towards the self-employed, as they weigh all the pros and cons of registration, and, as our research has shown, this weighing process is quite relevant for this group. At the same time, we did not find any significant differences in the attitude to formalization between the interviewed men and women (see Table 4).
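As a quick consistency check, the per-group respondent counts reported in Table 3 sum to the stated sample size; group labels are abbreviated here for readability:

```python
# Total respondents per group (sums of the three attitude columns in Table 3)
groups = {
    "only income, no formal employment": 2 + 3 + 13,
    "only income, fictitious formal employment": 0 + 5 + 0,
    "additional income, real formal employment": 0 + 3 + 0,
    "students": 0 + 3 + 3,
    "pensioners": 0 + 4 + 0,
}
total = sum(groups.values())
print(total)  # 36, matching the 36 interviews (18 women, 18 men)
```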
We would like to focus in more detail on the four groups of the informally self-employed that can be combined on the basis of their categorical rejection of state initiatives. First of all, there are the informally self-employed who combine their informal self-employment with real employment in the formal sector. This group has access to social guarantees, and the formalization of their relationship with the state in terms of their self-employment would mean the withdrawal of part of their net, albeit gray, income, which such self-employed people strongly oppose. This group noted that if the state takes steps to force them to register, they will respond by looking for ways to avoid it. For example, the state now has only limited levers for tracking non-cash payments to the bank cards of such citizens, to which these citizens respond by shifting the focus from non-cash to cash payments, which the state currently cannot track. Another group is the informally self-employed who are fictitiously employed in the formal sector but whose only source of income is their self-employment. This group also has access to the social guarantees provided by their official employment. Among the respondents we surveyed, there were only five such self-employed people, which suggests that the opportunities for the self-employed to find fictitious employment have now significantly narrowed. Such employment must be beneficial to the formal employer, not to the organization as a whole but to specific responsible persons in the organization who derive private benefit from the fact of fictitious employment. These self-employed people are also categorically against formalizing their self-employment status. Their stated behavior strategy is the same as that of the previous group.
They noted a weak link between the amount of contributions to the Pension Fund and the size of the pension, saying that if they deduct funds from their self-employment in addition to what their official, even fictitious, employer deducts for them, their pension will increase only very slightly, and they will lose more than they gain. The next group is informally self-employed pensioners. What unites them with the two previous groups is their categorical rejection of state initiatives. The main motive of this group is that they already receive their small pension, and if they contribute funds to the state and to insurance funds, the size of their pension will barely change: they will only lose, without gaining anything in return. We identified one more group among the informally employed: self-employed part-time students. They were found to hold a negative, wait-and-see attitude towards the possibility of registration. First, there is uncertainty as to whether they will continue to be self-employed or go to work in the formal sector. Second, their earnings, which they must combine with their studies, are unstable. Third, they are reluctant to deal with government agencies and with complicated forms of financial reporting on their activities. Fourth, this group has hardly thought about retirement yet, so they see no sense in making contributions to the Pension Fund. Fifth, they are reluctant to give away part of their earned income. Of these reasons for refusing registration, the uncertainty of self-employed students' future carries the greatest weight. This group is probably the most unstable among the informally self-employed, as its representatives very often move into formal employment after graduation. --- DISCUSSION In the works that address the problem of informal employment in Russia and St.
Petersburg, the problem of this phenomenon is seen in the fact that the state wants to take control of representatives of this social group in order to expand its tax base, while self-employed citizens do not want to be controlled by the state, which is expressed in extremely low rates of official registration (Kaufman, 2018; Kusheva, 2016). Some works attempt at least a partial analysis of the reasons for this rejection of state initiatives (Pokida & Zybunovskaya, 2020). It is usually assumed that the informally self-employed are a monolithic, unified group with a common opinion and a shared intention not to register (Korunova & Prygunova, 2018; Kritskaya, 2018). The novelty of our paper is that we hypothesized, and our research confirmed, that the informally self-employed are a heterogeneous social group within which a number of subgroups can be distinguished, each of which relates in its own way to the possibility of official registration. It turned out that some informally self-employed people do not reject this possibility but have taken a wait-and-see attitude towards the ongoing reform of the relationship between the state and the self-employed. We also found that young and middle-aged representatives of this social group are waiting for events to unfold and are potentially ready to register. --- CONCLUSION The study of the attitude of the informally self-employed to formalization is highly relevant for developing public policy measures to build relations between the state and this social group. We have shown that the informally self-employed are a set of distinct groups, distinguished by access to, or exclusion from, the social guarantees provided by the state, and that this division is significant, as such access is one of the essential interests of this social group's representatives.
We also studied the attitude of each of the selected groups of informally self-employed to formalization. The most interesting and unexpected result of the study was the conclusion that there is a group of informally self-employed people who take a wait-and-see attitude towards the launch of the reform of the institutional field of interaction between the state and the self-employed, and who are generally quite positive about the possibility of official registration. As a recommendation for state bodies engaged in developing the measures that form this institutional field, the focus should be primarily on the group of informally self-employed who are waiting for the results of the reform in order to decide whether or not to formalize: the young and middle-aged self-employed. In addition, we recommend that the state be more consistent in the reform process and adhere to the promises made to this social group, in order to preserve the stability of the institutional environment for the self-employed as much as possible.

The paper examines the attitude to formalization of the informally self-employed in Russia, using the example of St. Petersburg. The authors proceeded from the position that this social group is heterogeneous and that different characteristics of its representatives affect their attitude to the formalization of their economic activity. On the surface, the attitude of this social group to formalization appears negative. However, this negative attitude turned out to differ across subgroups of the informally employed. The results of the study show that different age groups of the informally self-employed react differently to government initiatives on registering such activities.
The presence or absence of social status in the sphere of formal employment, which many self-employed people combine with informal economic activity, proved to be a significant social characteristic shaping the attitude of the informally self-employed to formalization. Of great value are the stability of the institutional framework of formal self-employment created by the state and the state's determination to keep the promises it has given to the informally self-employed, so that this social group formalizes its economic activity. It was found that a fairly large proportion of the informally self-employed have taken a wait-and-see attitude towards the state's initiatives to formalize this group's economic activities. This paper will be useful for representatives of Russian state authorities who are developing socio-economic policy measures in relation to informally self-employed citizens.
Introduction Gender is the differentiation of roles, status, and division of labor made by society on the basis of sex. There are other forms of differentiation, for example, based on class, caste, skin color, ethnicity, religion, age, and so on (Saguni, 2020). Each of these distinctions often gives rise to injustice, including gender injustice. Gender is also an analytical tool that can be used to dissect cases in order to understand more deeply the cause-and-effect relationships that produce reality (Puspitawati, 2013). Gender analysis examines the power relationships and roles between men and women in human life. Through gender analysis, we can examine the injustice between women and men caused by the building of human civilization and culture (Probosiwi, 2015). Accelerating the reduction of stunting rates in Indonesia remains a priority development program until 2024, by which time the government is targeting a stunting prevalence of 14 percent (Angela et al., 2022). This target is pursued through two holistic interventions: specific interventions and sensitive interventions. Specific interventions are aimed at children in the first 1,000 days of life (HPK) and at mothers before and during pregnancy, and are generally carried out in the health sector. Sensitive interventions, meanwhile, are carried out through various development activities outside the health sector and constitute cross-sector collaboration (National, 2018). WHO estimated the worldwide prevalence of stunting (the total number of stunted toddlers at a given time) at 22 percent, or 149.2 million children, in 2020 (Fitriani et al., 2023). In Indonesia, based on data from the Asian Development Bank, the prevalence of stunting among children under 5 years of age was 31.8 percent in 2022, placing Indonesia 10th in the Southeast Asia region.
Furthermore, based on 2022 data from the Ministry of Health, Indonesia's stunting rate had decreased to 21.6 percent (Laela et al., 2023). Stunting, according to Tsaralatifah (2020), is a condition in which a toddler's growth is disrupted, resulting in a height shorter than expected for their age. This chronic nutritional problem is caused by various factors, including poor nutrition, maternal nutrition during pregnancy, economic conditions, a lack of nutritional intake for babies, and other causal factors (Listiana, 2016). Height measurements below the WHO growth-standard median for children are usually used to assess stunting. For example, if a two-year-old boy's height is 87 cm, then the minimum expected height is 81 cm. Causes of stunting include direct factors such as inadequate nutritional intake and infectious diseases, as well as indirect factors such as maternal care practices, family food insecurity, and environmental health services (Ruaida, 2018). The root causes of stunting are limited access to adequate health services, poor family economic conditions, and various social, cultural, economic, and political factors that influence the surrounding environment. All of these factors interact with one another and contribute to the occurrence of stunting in toddlers (Nugroho et al., 2021). If stunting is not taken seriously by the government, it will affect the country's development and dignity through decreased productivity, an increase in the number of children under five with below-average weight and height in the future, and an increased risk of disease that accompanies the aging process (Saputri, 2019). Such impacts can increase poverty in the future and will automatically affect family food security. When they grow up, children who experienced stunting will have the potential to earn incomes 20% lower than those of healthy children (Rahmadhita, 2020).
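The height-below-median check described above can be sketched in a few lines. This is a minimal illustration, not the official WHO procedure: WHO defines stunting via a height-for-age z-score more than 2 standard deviations below the growth-standard median, and the median and SD values used here are assumptions chosen to match the text's two-year-old example, not official reference-table values.

```python
def height_for_age_z(height_cm: float, median_cm: float, sd_cm: float) -> float:
    """Height-for-age z-score relative to a reference median and SD."""
    return (height_cm - median_cm) / sd_cm


def is_stunted(height_cm: float, median_cm: float, sd_cm: float) -> bool:
    """Stunted if the height-for-age z-score falls below -2."""
    return height_for_age_z(height_cm, median_cm, sd_cm) < -2.0


# Illustrative values (assumed, not WHO reference data): a two-year-old boy
# measured at 87 cm against a median of 87 cm with SD 3 cm is not stunted;
# at 80 cm he would fall below the -2 SD cutoff near 81 cm.
print(is_stunted(87, 87, 3))  # False (not stunted)
print(is_stunted(80, 87, 3))  # True (stunted)
```

In practice the median and SD would be looked up per age and sex from the WHO growth-standard tables rather than hard-coded.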
The Indonesian government has set a target to reduce the prevalence of stunting, but these achievements still need to be strengthened through effective and integrated solutions (Sari & Montessori, 2021). The proposed solution takes a comprehensive approach, including increasing public awareness of balanced nutrition, broader nutrition education, increasing access to nutritious food, and improving sanitation and hygiene (Ridua & Djurubassa, 2020). In addition, strengthening public policies, effective intervention programs, and the use of modern technology are also part of the solution to the challenge of stunting in the modern era (Tampubolon, 2020). The city of Bandung is one of the 100 priority cities/districts for dealing with stunting; the number of toddlers with stunting in Bandung City decreased to 5,660 children in 2022 from 7,568 in 2021. According to the SSGI, the prevalence rate was 26.4 percent in 2021 and decreased by 7 percentage points to 19.4 percent in 2022 (SSGI, 2023). In 2023, it is hoped that the prevalence of stunting will decrease to 14 percent. A factor that plays a very important role in reducing the stunting rate in Bandung City is fast action by the various Regional Apparatus Organizations (OPD) involved in resolving stunting, which are members of the Stunting Reduction Acceleration Team (TPPS). TPPS carries out two types of activities: specific actions related to health, and intervention actions outside the health aspect, one of which is data unification through the e-Penting application (electronic stunting recording). The e-Penting application emerged in line with the implementation of convergence to accelerate stunting reduction through six strategic actions, one of which is a data management system, responding to challenges related to stunting data problems.
In its application, e-Penting covers various elements, from providing questions to standard operating procedures (SOP) for managing stunting data. This includes data integration, data cleansing, and verification processes, as well as the transformation of data into digital forms that are easier to access and manage. Apart from its data management function, e-Penting also acts as a one-stop data publication medium, providing integrated and easy access to information related to stunting. The application is also equipped with data analysis tools, making the policy-making process more effective and efficient. Thus, e-Penting not only responds to the problem of stunting data but also provides a comprehensive solution to support convergence efforts in overcoming chronic nutritional problems in the city of Bandung (bandung.go.id). The e-Penting program is a gender-responsive innovation designed as a concrete step to encourage gender equality and women's empowerment in the city of Bandung. By providing freer space for women, this program becomes a forum for advancing women's rights and access to public services. e-Penting is not only a stunting data collection tool but also an inclusive tool that takes into account the special needs and contributions of women in overcoming this chronic nutritional problem. Thus, this program not only measures the overall impact of stunting but also empowers women in the processes of monitoring, evaluation, and policy formulation, thereby creating a fairer and more equitable environment for all citizens of Bandung City (bandung.go.id). The aim of this research is to explore gender mainstreaming in the implementation of the Electronic Stunting Data Collection Program (e-Penting) in Bandung City, with a special focus on policy, monitoring, and evaluation aspects.
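The data-management steps the paper attributes to e-Penting (integrating records from multiple sources, cleansing duplicates, and verifying entries) can be sketched in outline. This is a hypothetical illustration only: the field names, validity rules, and record structure below are the author's assumptions, not e-Penting's actual schema or SOP.

```python
def integrate(*sources):
    """Merge record lists from several reporting units (e.g. posyandu) into one."""
    merged = []
    for source in sources:
        merged.extend(source)
    return merged


def cleanse(records):
    """Drop duplicate records, keeping the first occurrence per child id."""
    seen, cleaned = set(), []
    for rec in records:
        if rec["child_id"] not in seen:
            seen.add(rec["child_id"])
            cleaned.append(rec)
    return cleaned


def verify(records):
    """Keep only records with plausible age (months) and height (cm) values."""
    return [r for r in records
            if 0 <= r["age_months"] <= 60 and 40 <= r["height_cm"] <= 130]


# Hypothetical reports from two posyandu units; child 1 is reported twice,
# and child 2 has an implausible height that verification should reject.
posyandu_a = [{"child_id": 1, "age_months": 24, "height_cm": 80}]
posyandu_b = [{"child_id": 1, "age_months": 24, "height_cm": 80},
              {"child_id": 2, "age_months": 30, "height_cm": 999}]

clean = verify(cleanse(integrate(posyandu_a, posyandu_b)))
print(len(clean))  # 1
```

The point of the sketch is the ordering: integration first widens the dataset, cleansing then removes duplication introduced by overlapping reporting units, and verification last guards the published "one-door" dataset against implausible entries.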
It is hoped that the research results will provide an in-depth understanding of how gender aspects are integrated into e-Penting and contribute to the overall effectiveness of the stunting program. It is also hoped that the resulting policy recommendations can serve as a guide for policymakers in strengthening the gender dimension of similar programs in the future. More broadly, it is hoped that this research can make conceptual and practical contributions to the literature on implementing health programs with gender mainstreaming, so as to enrich knowledge and understanding in this field. --- Method This research uses a qualitative approach with descriptive methods. According to Yulianah (2022), in qualitative research the researcher develops theory during the data collection process. This more inductive method means that theory is built from, or grounded in, the data; many researchers use grounded theory. This makes qualitative research flexible and lets data and theory interact: qualitative researchers remain open to the unexpected, are willing to change the direction or focus of a research project, and may revisit their original research question in the middle of a project. Meanwhile, according to Sugiyono (2011), the descriptive method is a search for facts with correct interpretation. Descriptive research studies problems in society, as well as procedures that apply in society and in certain situations, including relationships, activities, attitudes, views, and ongoing processes that influence a phenomenon. Data collection techniques consisted of semi-structured interviews with informants selected through purposive sampling; observation, in which the author was directly involved in activities in the field; and documentation, in which the author collected documents related to the e-Penting program in Bandung City and gender mainstreaming.
In analyzing the data obtained, the author uses the data analysis model proposed by Miles and Huberman in Sugiyono (2011), namely data reduction, data presentation, and drawing conclusions. --- Results And Discussion --- Gender Mainstreaming Development programs that are normatively declared as an effort to achieve social prosperity are often delivered with the assumption that development is neutral, impartial, and provides equal opportunities for all groups in society to gain benefits. However, this view needs to be examined further, because development actually produces different impacts on each individual or group accessing its results (Sudirman & Susilawaty, 2022). Development cannot be considered neutral because it reflects dominant interests and even contains certain ideological elements. As time goes by, the gap between groups that benefit from development and those that do not becomes increasingly visible, creating inequality that reinforces this non-neutrality (Prasetyawati, 2018). A deeper understanding of the non-neutral aspects of development is important in designing and implementing development policies that are more inclusive and fair (Salihin, 2019). Awareness of the different impacts produced by development programs can be the basis for designing strategies that reduce social and economic disparities. Therefore, efforts need to be made to strengthen the justice dimension of development so that its benefits can be felt equally by all levels of society. In line with this understanding, the gender perspective shows that development cannot be considered neutral. Development programs designed to accommodate public interests often have unequal impacts on men and women in practice (Abdullah, 2013). Although development aims to achieve justice and prosperity for the general public, there are often gaps in the distribution of benefits and accessibility between the sexes.
At a certain point, a development paradigm considered ideal because it accommodates public interests and fulfills basic economic, social, and cultural rights can actually cause a decline in the status and welfare of women (Ruslan, 2010). It is important to identify the differential impacts of development on men and women in order to design more inclusive and gender-equitable programs. Aspects such as access to education, employment opportunities, and participation in decision-making need special attention (Muhartono, 2020). Gender-oriented development requires strategies and policies that take into account the different social and economic contexts of men and women. Thus, understanding the gendered impact of development is an important step in creating a more equitable and sustainable development transformation (Afni et al., 2022). The difference in how development's impacts are received is due to inequality in access between men and women. To this day, socio-economic relations have placed women in a position that tends to lag behind, creating inequality in access to resources, employment opportunities, and education. This condition is one of the main factors causing the impact of development to be uneven between the two sexes. Even though development efforts were launched with the aim of creating equality and prosperity for society in general, ironically, this paradigm can strengthen existing domination, especially over women (Rahayu, 2016). The neutrality perspective applied in the development process is often unable to overcome socio-economic relations that still restrict women. Indeed, when development provides equal treatment to all communities without considering existing gender inequalities, the result is the irony of increasingly sharp domination.
The continued assumption of neutrality in development indirectly widens social disparities and injustice, confirming the position of women as a vulnerable group who still do not fully receive equal benefits from development efforts. Therefore, it is important to initiate a more gender-oriented development approach in order to overcome the inequality that remains an obstacle to achieving overall social welfare. Gender is not simply a social category but a perspective that opens up alternative spaces for states to understand and overcome the various social problems that obstruct the development process. This perception embraces a deep understanding of the role and impact of gender in various aspects of society, opening the door to policies that are more inclusive and responsive to the needs of all citizens. In this context, the public budget plays a crucial role as an instrument that represents and realizes development interests. Public budgets are not only a tool for measuring the state's commitment to gender empowerment but also reflect the extent to which the state is willing to accommodate diversity and respond to the various social challenges its society faces (Yusrini, 2017). Budget audits are a very important tool for assessing the extent to which public budgets reflect gender justice and respond to social problems. By conducting budget audits, development projections can be anticipated earlier, so as not to give rise to new paradoxes that might result in marginalization and injustice. This audit process is an important mechanism for ensuring that budget allocations are not just numbers on paper but truly accommodate the needs and rights of every individual, regardless of gender. Through this approach, the state can ensure that the development process not only produces economic growth but also reduces gender disparities and creates a more equal and inclusive society.
To strengthen the implementation of the Presidential Instruction, the government established the Minister of State for Women's Empowerment as a special institution responsible for analysis and supervision related to gender mainstreaming. These steps create a solid foundation for integrating gender perspectives into the national development agenda, reflecting the Indonesian government's serious commitment to realizing a more just and inclusive society (Wiasti, 2017). The analyst and controller functions in gender mainstreaming illustrate how conceptual understanding is contextualized in practice. The challenges that arise relate to institutional adaptation at the regional level and to the national mechanisms tasked with overseeing gender mainstreaming. Several regions responded by forming new institutions that focus specifically on programs considered to represent the interests and needs of women. For example, the PKK (Family Welfare Empowerment) in several regions adopts this approach by orienting all programs and activities exclusively towards women's groups. However, this approach has its challenges: the PKK is often limited to implementing traditional programs such as cooking and beauty training. This highlights the need for further reflection to ensure that such institutions can cover the broader and deeper aspects of gender mainstreaming in every dimension of community life. Gender mainstreaming is often interpreted in an affirmative way through policies that are "female" in nature, which tend to accommodate women's issues specifically. In this context, gender is treated as a separate dimension focused on women, not as a perspective that encompasses the entire process of community development beyond the exclusive boundaries of sectoral institutions.
As a result, gender has become a separate space treated differently from other sectors. The conclusions from this model indicate that the understanding of gender is still ambiguous, resulting in policy translations that are less relevant at the operational level. This creates challenges in achieving true gender mainstreaming in every aspect of public policy, as an overly "female-oriented" focus can obscure the need for broader and more comprehensive gender equality in society. A more holistic understanding is needed so that gender becomes an integral part of the entire development process, encompassing the roles and interests of all individuals without gender boundaries (Irianto, 2011). Two important elements form the formula for gender mainstreaming. First, development programs and activities are not separated between men and women. As a social construction, the gender perspective does not differentiate dichotomously into specific programs for men or for women. Activity programs are designed so that men and women have an equally representative and fair space to participate, contribute, and gain benefits. Such a representative and fair space is important in every program and activity to ensure accessibility for all development stakeholders. Second, activity programs projected as gender mainstreaming affirmation are not oriented towards obtaining calculative results but serve as a target or medium for achieving gender justice and equality. Affirmative activity programs support the process of achieving balanced gender capacity between men and women in the implementation of development. The Family Planning (KB) program, for example, is not projected to increase women's capacity to control births but is a means for women to have a balanced bargaining position in planning family welfare. Within the above framework, the concept of guard institutions, confirmed under Presidential Instruction No.
9 of 2000, is different from institutions formulated as executors of women's programs. The monitoring institution is assumed to reach across all cross-sectoral development processes, while a program-implementing institution is a special sector that does not necessarily have a gender perspective. --- Gender in the Stunting Recording Program in Bandung City The issue of stunting not only reflects a public health problem but also has a broad impact on a country's social and economic development. Children who experience stunting tend to face obstacles in their physical and cognitive development, which can ultimately affect their learning abilities and future productivity. Thus, stunting is not only a health problem at the individual level but also a serious challenge to a nation's sustainable development efforts (Archda & Tumangger, 2019). Women's or mothers' lack of access to nutritious food is a serious problem that can be triggered by a number of complex factors. One of the main factors is economic limitation, where healthy food and ingredients become expensive or unaffordable for some women. The high cost of living can hinder women's ability to meet their nutritional needs adequately (Imani, 2020). In addition, local culture also plays an important role in influencing women's access to food. Customs that dictate that women or mothers eat only after other family members can be a real obstacle to meeting women's nutritional needs, because they sometimes leave less nutritious leftovers or even a lack of healthy food choices (Suminar, 2020). Distorted understandings of diet among adolescents and women can also exacerbate this situation. Social and cultural factors can create norms that support unbalanced eating patterns, often focused on foods low in nutrients. This can influence the way girls, especially teenagers, and women choose and consume food.
A lack of, or inadequate, nutritional education can also cause unhealthy understandings of diet. Therefore, efforts to increase women's or mothers' access to nutritious food need to involve a holistic approach covering economic, social, and cultural aspects, as well as increasing nutritional understanding among teenagers and women to support healthy lives and prevent nutritional problems such as stunting. The e-Penting program launched by the Bandung City government has the main aim of making it easier to collect data on stunting in the community. The application brings together various features, such as a collection of questions related to stunting conditions, standard operating procedures (SOP) for managing stunting data, data integration from various sources, data cleaning, and data verification stages. In addition, e-Penting also acts as a single-door data publication tool and as a data analysis tool that facilitates effective and efficient policy-making. Through this application, it is hoped that e-Penting can realize comprehensive data management, from the planning stage to monitoring and evaluation. The entire series of processes, including data collection, policy analysis, publication, and outreach, is integrated into one platform, so that stakeholders, from posyandu cadres to sub-district heads and regional government heads, can make optimal use of the data. In this way, it is hoped that e-Penting can become an effective instrument in supporting stunting reduction efforts in Bandung City, as well as a model for managing similar data at the local level throughout Indonesia. The e-Penting program in Bandung City is closely related to the principle of gender mainstreaming, as reflected in its commitment to promoting women's rights and access to public services.
Gender mainstreaming is not just about ensuring women's participation in every aspect of development but also about ensuring that programs such as e-Penting specifically consider and integrate gender needs and perspectives in their design, implementation, and evaluation. One way e-Penting reflects gender mainstreaming is by ensuring that all questions or modules in the application cover gendered issues, such as women's reproductive health or the nutritional needs of girls. The stunting data SOP in the application can also be designed to take into account aspects that are more relevant to women, such as monitoring the nutrition of pregnant women. In terms of data publication, e-Penting can be a medium that supports transparency and accessibility of information for all, including women. The use of data analysis tools can also help identify gender inequalities in stunting rates or in access to health services. Thus, e-Penting is not only a tool for administrative efficiency but also an instrument that strengthens gender mainstreaming by describing and analyzing the program's impact on women's well-being. In addition, managing stunting data with participation from the posyandu level up to heads of regional apparatus ensures that women's voices and perspectives are accommodated and respected throughout the development chain. The e-Penting program in Bandung City can therefore be considered a concrete and committed step towards realizing gender equality in public services and public health as a whole. Gender mainstreaming in the electronic stunting recording program (e-Penting) in Bandung City is reflected in several aspects of policy, monitoring, and evaluation.
The following are details of several forms of gender mainstreaming in the program: --- Policy Alignment with Gender Equality Principles The e-Penting program in Bandung City embraces the principles of gender equality in its design, affirming its commitment to creating a positive and equal impact for women and men. In developing the program, the principle of gender equality was the main basis for ensuring that every aspect took gender diversity into account and responded to the unique needs of both sexes. From a policy perspective, the program has detailed steps to ensure that every policy related to e-Penting creates balanced benefits for women and men. In its implementation, e-Penting is not only a technological tool for recording stunting but also a means of ensuring that access to the service is equally open to women and men. By ensuring active participation from both gender groups, the program not only records stunting data but also creates opportunities to understand its specific impact on women and men. This creates a strong foundation for further policy development that can have a balanced positive impact on all citizens, regardless of gender. By integrating gender equality principles throughout the program cycle, e-Penting shows that information technology can be an effective tool for realizing gender inclusion and justice. --- Inclusion of gender issues in application modules To ensure gender mainstreaming, the questions and modules in the e-Penting application in Bandung City are carefully structured to cover highly relevant gender issues. One aspect that receives special attention is nutritional monitoring of pregnant women, ensuring that maternal health and nutrition during pregnancy can be monitored in greater detail. This is an important step to ensure that pregnant women receive adequate health care and support throughout their pregnancy. In addition, e-Penting also attends to women's reproductive health issues.
By including modules that monitor aspects of reproductive health, such as antenatal and postnatal care and family planning, the program ensures that women's health services cover the entire reproductive life cycle. In this way, e-Penting is not only a stunting recording tool but also an instrument that supports the prevention and treatment of women's reproductive health problems. The focus on aspects specific to women and girls creates opportunities to collect more in-depth and relevant data. By understanding these special needs, e-Penting strengthens the role of women in health and development services and creates a quality data basis for better decision-making in stunting management. In this way, e-Penting is not only an effective tool for recording stunting data but also a means of improving the welfare of women and girls at the local level. --- Monitoring --- Gendered data analysis The monitoring process integrated into e-Penting in Bandung City reflects a progressive approach by including gendered data analysis. This step is key to identifying and overcoming gender inequalities in stunting levels and access to health services in the community. By collecting sex-disaggregated data, e-Penting enables a deeper understanding of how the impact of the program differs between women and men. Gendered data analysis allows researchers and policymakers to examine every aspect of the program through a gender lens. For example, such data could provide insight into the extent to which women have equal access to the health services supported by e-Penting, or the extent to which the impact of stunting is more significant for girls. Thus, this analysis not only helps measure the overall effectiveness of the program but also details its specific impact on particular gender groups. In addition, the data collected also provide an opportunity to identify gender inequalities in stunting rates, which can help develop more targeted policies.
If the data show significant inequalities between girls and boys in stunting rates, corrective steps can be taken to ensure that the program addresses the problem more effectively, with an inclusive and gender-equitable approach. Thus, e-Penting is not only a conventional monitoring tool but also an important instrument in the effort to achieve gender equality in the monitoring and handling of stunting at the local level. --- Women's participation in monitoring In the context of e-Penting in Bandung City, the monitoring system implemented reflects a commitment to the active participation of women in the entire data collection and analysis process. The inclusion of women at the posyandu and sub-district levels is key to ensuring that women's perspectives and experiences are directly taken into account in evaluating the effectiveness of the program. Women's participation in the data collection process provides a more complete and accurate picture of the program's impact on stunting levels and public health. By involving women directly at the posyandu level, where they have direct access to the local community, e-Penting ensures that the data collected reflect the realities and challenges faced by women in the context of children's health and nutrition. In addition, the involvement of women at the sub-district level provides an opportunity for them to give direct input and perspectives on program effectiveness. This ensures that proposed policies and changes are not based solely on quantitative data but also take into account qualitative aspects that might be overlooked without direct contributions from women. In this way, e-Penting is not only a technical instrument for recording data but also a participatory tool that supports the inclusion and empowerment of women in development and in the handling of stunting at the local level.
--- Evaluation --- Gender impact assessment The evaluation process for the use of e-Penting in Bandung City focuses on gender impact assessment, paying special attention to the way the program affects women and men differently. By conducting evaluations that examine program effects separately for each sex, e-Penting enables an in-depth understanding of the program's contribution to gender mainstreaming and to efforts to reduce gender disparities in stunting management. The gender impact assessment in the evaluation includes various indicators, such as the level of women's participation in the program, the increase in women's access to health services, and the program's impact on women's economic empowerment. By looking at differential impacts between women and men, the evaluation helps identify successes and challenges that may be related to the gender aspects of the program. Furthermore, the evaluation reflects the extent to which e-Penting realizes the goal of gender mainstreaming in the context of stunting. If the evaluation shows that the program succeeds in reducing gender disparities and improving women's welfare through stunting management, this confirms the effectiveness and relevance of the program in the context of gender equality. Thus, e-Penting is considered not only an information technology tool but also an instrument that makes a real contribution to improving women's conditions and leads to inclusive and gender-equitable development at the local level. --- Women's participation in evaluation The e-Penting evaluation process in Bandung City marks a strong commitment to the active participation of women as direct users and stakeholders in analyzing the program's impact. By integrating women's perspectives, opinions, and experiences, the evaluation not only measures program effectiveness but also creates a more complete and richer narrative about its impact on women in the context of stunting management.
Women's active participation as direct users ensures that the evaluation includes their views as direct recipients of the program's benefits. This provides in-depth insight into how e-Penting affects women's daily lives, including their access to health services, the ease of use of the app, and the extent to which the program empowers women in family health management. The opinions and experiences of women as stakeholders enrich the evaluation perspective by involving them in assessing the broader impact of the program. By integrating women's voices, the evaluation becomes more holistic and takes into account factors that may not be directly visible in quantitative data. In this way, e-Penting ensures that women's voices are reflected in analyses of the program's successes and shortcomings, supporting the strengthening of women's roles in decision-making and in the design of more inclusive policies. --- Conclusion The e-Penting program in Bandung City is not only an information technology tool for recording stunting data but also an instrument that consistently implements gender mainstreaming in policy, monitoring, and evaluation. In the design and implementation of this program, there is a clear commitment to achieving gender equality and empowering women as an integral part of efforts to address stunting. In the policy aspect, e-Penting emphasizes the principles of gender equality by ensuring that every step and policy related to the program provides equal benefits for women and men. The modules and questions in the app are specifically designed to cover gender issues, such as maternal nutritional monitoring and women's reproductive health, recognizing the particular needs of both gender groups. Furthermore, in the monitoring process, e-Penting uses gendered data analysis to identify gender inequalities in stunting levels and access to health services.
The program not only notes general impacts but also pays particular attention to how its effects differ between women and men. The monitoring system also ensures the active participation of women from the posyandu to the sub-district level, giving them a significant role in data collection and analysis. In the evaluation phase, e-Penting highlights the active participation of women as direct users and stakeholders, ensuring that the evaluation covers women's perspectives and experiences as a whole. Women's voices are integrated into analyses of the program's successes and shortcomings, ensuring their representation in broader impact assessments. Overall, e-Penting in Bandung City has succeeded in becoming a model for implementing information technology that is not only effective in recording stunting data but also plays a role in realizing gender equality and women's empowerment. This program provides a new perspective on how technological innovation can support government efforts to achieve inclusive and gender-equitable development goals. --- Abstract This research examines the electronic stunting recording program (e-Penting) in Bandung City, with a focus on policy, monitoring, and evaluation in preventing stunting. This research uses a qualitative approach with descriptive methods. The results show that the e-Penting program in Bandung City has had a positive impact on the recording of stunting data with a strong gender mainstreaming approach. The data collected involved the active participation of women at various levels, from posyandu to sub-districts, producing more in-depth information about the impact of the program on women and men. The program successfully integrates gendered data analysis into the monitoring process, providing more comprehensive insight into gender inequality in stunting rates and access to health services. The evaluation showed that women's participation as direct users and stakeholders supported the success of the program, with women's opinions and experiences significantly integrated into the analysis of the program's successes and shortcomings. In this way, e-Penting is not only an effective tool for recording stunting data but also a pioneer in realizing gender equality and women's empowerment through information technology innovation.
Introduction Lifestyle encompasses the whole set of practices undertaken by individuals in their daily life. Among these practices, we may find some which foster a healthier life and others which could be considered risky behaviors [1]. Each individual's lifestyle shapes the health-illness process, in which individual responsibility is essential; bad habits such as tobacco use, poor diet, or physical inactivity contribute to the development of chronic noncommunicable diseases (CNCDs) such as diabetes, high blood pressure, or obesity. From a holistic biopsychosocial perspective, it is crucial to examine lifestyles in a contextualized manner, taking into account the socio-cultural influence on both behaviors and lifestyle sets [2], and highlighting the importance, at a clinical level, of the role of the people surrounding us and the context we live in [3]. Having a balanced lifestyle at the time of diagnosis is crucial to achieving good diabetes management. Type 1 Diabetes Mellitus (T1D), a paradigmatic case, is a disease caused by a lack of insulin, which generates persistent hyperglycemia and energy wasting [4]. At a global level, T1D affects 1,110,100 people aged 0-19. In Spain, 15,467 people are affected by this disease [5]. Specifically, in Andalusia its prevalence and incidence have progressively increased, with an estimated total of between 30,000 and 60,000 people affected [6]. Seville, with a prevalence of 2.06 cases per 1000 population and an incidence of 2396 cases per 100,000 population registered in 2014, is the province with the highest rate [6]. T1D has a multimodal treatment comprising intensive insulin therapy, diet therapy, physical exercise, self-monitoring, and diabetes education. The complexity of the treatment lies in the continuous requirements that must be incorporated into the individual's lifestyle.
As diagnosis usually takes place at an early age [7], in childhood or adolescence, these individuals are in developmental stages and therefore still forming their living habits. This is why achieving good glycemic control, taking into account the child's or adolescent's environment from the very beginning, is crucial to ensure an optimum quality of life. Failure to adhere to treatment involves a metabolic disorder which may cause serious chronic complications, risk of high morbidity and mortality, and disabilities [8]. Other chronic diseases, such as obesity, asthma, or epilepsy, may present in different degrees of severity (mild to severe) [8,9], and though they may require baseline therapy, it is common for this type of treatment to be occasional and discontinuous [9,10] according to the development of the symptomatology, as is the case with allergic asthma [10]. In contrast to these chronic diseases, T1D requires a continuous baseline treatment: a constant daily regimen in which the individual has to make decisions, strict and repeated self-monitoring of blood glucose levels, and multiple insulin injections [11]. All this generates emotional stress as a consequence of the active role required in self-management [12]. Other diseases which may appear during childhood or adolescence, such as cancer, require much more aggressive treatments than those related to T1D, though these treatments are applied during long hospital stays [13] and there is a high rate of success in both treatment and cure in developed countries [14,15]. In T1D, however, long hospitalizations may occur only at the disease debut [16], but treatment requires constant attention within the social and family spheres, and at present there is no cure [16]. The first years of T1D evolution are a critical stage, even more so if they overlap with adolescence, when young people struggle to gain autonomy and independence from their families [17][18][19].
In the early stages, adolescents with diabetes are highly motivated to learn about the adequate management of their treatment and to comply with adherence [20]. This is driven by their desire to achieve independence from their families and to gain integration among their peers in the social sphere [21]. However, the adolescent must face the ambivalence between the support received and the pressure to feel part of the peer group [22], as peer group support, of all socio-cultural influences, has proven to be a key lifestyle aspect at this stage. The adolescent wants to comply with expectations, adopting behaviors which are socially accepted, in order to feel part of the peer group [23], a process which leads them to generate a sense of peer group belonging and identity validation [24]. In this sense, adolescents with diabetes may adopt behaviors that diminish previously gained adherence, avoiding treatment requirements in order to obtain solid integration in the group [25][26][27][28]. --- Theoretical Framework and Background Adolescence is a period of development and evolution for individuals aged approximately 10 to 21, though there is currently no tacit agreement on the exact age range [29]. This stage of constant development can be divided into three age brackets according to physical changes (ages 10-13), the acquisition of abstract thinking (ages 14-17), and the quest for self-identity (ages 17-21) [30]. Throughout adolescence there is a reduced perception of risk and a distorted feeling of power [31]. This leads adolescents to believe that they must live in accordance with their feelings instead of accepting advice from health professionals or their own families [32]. Though it may be assumed that the strongest support is provided by their families, adolescents may or may not perceive this support, and may consider it overprotective [33] or a sign of a lack of understanding on the part of their families [34].
For adolescents, peer groups are very powerful support networks, though they may cause conflict due to the ambivalence between the support received and the social pressure adolescents must face in order to feel part of the group [35]. This is why the current literature points out the importance of the peer role and the need for further research in this field [36]. Chronic diseases during adolescence may have different implications according to the stage affected by the disease. In the early stage (ages 10-13), adolescents are not concerned about their disease, as they are fully focused on adapting to their peers and becoming part of the group. During the intermediate stage (ages 14-17), there is a strong concern about self-image [37] and how it is affected by the disease [38,39]. In late adolescence (ages 17-21), there is a strong concern about the disease and its complications with regard to social relations and finding a partner [38]. Moreover, a chronic disease in adolescents affects their self-esteem, which tends to be low, while anxiety and depression levels are high [38,40]. In the context of chronic diseases, peer support extends to backing and emotional support, and is thus crucial in the adaptation of adolescents with chronic diseases such as diabetes to their own disease [41]. This support is essential both for their emotional well-being and for their compliance, and remains a protective factor in adolescence [42]. In the case of adolescents with diabetes, constant compliance may affect their social relations when acute complications, regular medical consultations, or a negative impact on self-image appear [38]. These situations contrast sharply with their peers' reality, posing a risk of discrimination. Adolescents with diabetes may be considered to be at potentially high risk of not adequately self-monitoring the disease due to two crucial aspects: a sense of invulnerability and the competing daily demands of the treatment [43].
For this reason, it is crucial to analyze how peer relations affect the group affiliation and self-care of adolescents with diabetes. As background literature on the relationship between adolescents with diabetes and their peers, there are qualitative studies [22,44,45] whose results describe erratic social support, i.e., sometimes these adolescents receive this support and sometimes they do not. In this respect, the study by Greco et al. [46] offers a joint intervention with an adolescent with diabetes and a friend or peer, showing a considerable improvement in both the knowledge and the support offered by this peer. However, in spite of the efficacy of the intervention, the authors [46] highlight the need to examine adolescents with diabetes from the perspective of the role of their peers. Some interventions aimed at helping adolescents with diabetes to overcome different social barriers have proved effective [47]. Nevertheless, this type of intervention is currently limited, as the traditional model of diabetes healthcare, mainly focused on adequate blood glucose control, prevails. This fact has pushed forward the challenge of acquiring social skills on the part of adolescents with diabetes [48]. Research on how peers influence the behaviors, both healthy and risky, of adolescents with chronic diseases such as T1D is still not precise enough [24,25,42]. Consequently, the study of peer roles is crucial to surveying the group affiliation and self-care of adolescents with diabetes [49]. The objective of this research is to gain in-depth knowledge of the social support perceived by adolescents with diabetes from their peers, identifying the roles adopted by peers regarding diabetes and their influence on both the integration and the self-management of the adolescent with diabetes. --- Methodology --- Sample and Procedures This pilot study was based on a descriptive, phenomenological, retrospective, cross-sectional design.
With the aim of analyzing in depth the feelings, perceptions, and experiences of participants, a qualitative methodology with in-depth interviews with key informants was applied. The duration of the interviews was not limited. Interviews were audio recorded and field notes were taken to capture non-verbal communication. The study population consisted of adults aged 18-35 living in Andalusia (Spain) with T1D diagnosed at least 4 years before the interview. However, as this is a pilot study, the sample was targeted to the province of Seville, the capital of the Andalusian region, a benchmark for healthcare services, and the province with the highest incidence rate of T1D in the whole region. Besides, Seville hosts a flagship association of people with diabetes with a youth branch, which provided the sample purposively. The field research was carried out between April and June 2019. The first contact between participants and the researcher was facilitated by a trusted figure familiar to them: the president of the association's youth branch, who acted as key informant. Thus, snowball sampling was built through this key informant, allowing access to the rest of the participants to create a chain-referral, non-probability sample in which participants complied with the sampling inclusion criteria. Once the interview was agreed with each participant, they were informed of the nature of the study and the informed consent was signed. The in-depth interviews were semi-structured, covering the following dimensions: relationship with the peer group (number of groups, usual dynamics), diabetes and social relations with peers (perceived social support), perception of the behaviors of different friends in daily situations (eating, consumption of toxic substances, school/work, leisure), and perception of conflicts with peers regarding diabetes.
Interviews were conducted by the lead researcher, who has three years' experience as an academic intern plus a year as an honorary assistant at the University of Seville, a period in which she worked conducting semi-structured in-depth interviews. This researcher had previous experience in the subject of study, having performed different nursing roles with children and adolescents with T1D (a Pediatric Nursing consultancy in May 2017 and a diabetes camp in June 2017). The sample consisted of 15 individuals with T1D aged 18-35. Data saturation was partially reached with the sixth participant in the dimensions of peer relations and diabetes and social relations with peers. Data saturation was fully reached with the eleventh interviewee, though four more interviews were conducted in order to bring more consistency and solidity to the study. Given the risk that sensitive information might be withheld, intentionally or not, in the presence of a health professional, the research was retrospective, working from memory. Though there was a risk of forgetting, the most critical period of adolescence (ages 12-17) was thereby approached retrospectively rather than directly, resulting in a more holistic vision of the period. In order to limit the effect of the forgetting curve, a maximum age of 35 years was set, as this age marks the consolidation of maturity. Besides, peer relations during adolescence convey highly affective bonds, facilitating their recall [49]. For this reason, the minimum age for participants was set at 18 years. A T1D diagnosis entails significant changes in both the affected individuals and their environment, requiring an adaptation period of up to a whole year [50]. Participants had been diagnosed with T1D at least 4 years before the interview, thus allowing them to feel fully adapted to their new routines and to the therapeutic regimen. In summary, the selection criteria were: participants aged 18-35, T1D diagnosis during childhood or early adolescence, and a minimum of 4 years of evolution.
Gender influence was not a subject of this study; 6 men and 9 women took part in the interviews. The exclusion criteria were the following: psychiatric pathology, any difficulty which could block the communicative process of the interview, cognitive impairment or intellectual disability, T2D diagnosis, or gestational diabetes. --- Procedure and Ethical Considerations Each participant was provided with an accessible, voluntary informed consent form to read and sign, together with an insightful verbal explanation. Interviews were recorded, and therefore they took place in quiet places which guaranteed the comfort and confidentiality of each participant. This study was approved by the Research Ethics Committee of the Hospitals Virgen Macarena-Virgen del Rocío in the session held on 30 April 2019 (CEI 08/2019). In order to guarantee their confidentiality, participants' quotations appear coded in this manuscript. --- Data Analysis The lead researcher carried out the transcription of the interviews, including field notes, and the discourse analysis based on those transcriptions. The first step, once the interviews were transcribed, was to extract all the codes which allowed a comprehensive codification of the different discourses according to their similarities. After the coded data were grouped according to phenomenon similarities, the following labels were established: Social Support Perceived and Perceptions. The category "Social Support Perceived" covers peers' instrumental or emotional support as perceived by adolescents with diabetes and was analyzed using two variables. The first of these variables, "Peer Role", encompassed the behaviors, attitudes, and specific relations with regard to diabetes adopted by the adolescent's peers. The second, "Perceptions", comprises the emotions and feelings generated in participants with regard to their peer relations.
Individual differences and the contribution of these peculiarities to the global significance of the information were taken into account where appropriate. Once each variable was individually analyzed, they were interrelated according to their meanings in order to build a general discourse corresponding to the study category defined, from which the study results could be drawn. Relying on the references cited [51][52][53], the validity of this qualitative study is achieved, in the first place, through a critical aim during the exhaustive discourse analysis carried out by the lead researcher. Secondly, a triangulation regarding the credibility of participants' perceptions through their discourses was carried out by the different authors. Finally, the authors' triangulation set the basis for establishing the composition of the results. Faced with the impossibility of isolating variables from the complex, holistic reality resulting from a qualitative study [51][52][53], the reliability of the study was achieved through the triangulation of the recordings together with the field notes (body position, gestures, and speech tone), cross-checked by the different authors. This process substantiated the results found. --- Results The average age in the sample was 20.33 years, with an average age at disease debut of 8.93 years, a maximum of 13 years, and a minimum of 0 (perinatal period). Ostensibly, the relationship established by adolescents with diabetes with their peers is similar to the one established by any other adolescent with his or her group of friends. This implies that they can benefit from the different types of support (emotional, instrumental, and informative) offered by the social network shaped by the peer group, therefore improving the adolescent's well-being while helping them to cope with stressful situations.
But when focusing on specific support regarding this chronic disease, we may observe how certain roles emerge in the attitudes of peers towards diabetes management: capillary blood glucose testing, carbohydrate counting, insulin injections, and complication management in case of hypoglycemia, for instance. When a group member has T1D, peers adopt specific behaviors in different scenarios. Depending on how these attitudes are perceived by adolescents with diabetes, three types of specific peer roles can be distinguished in relation to diabetes: the protective role, the indifferent role, and the offensive role. --- Protective Role The protective role embodies a set of peer attitudes and behaviors which foster self-management, favoring peer affiliation. In general, the protective role fosters behaviors which promote healthy habits in adolescents with diabetes, favoring self-control and fostering appropriate behaviors towards therapeutic compliance. Sometimes when they saw me... I don't know, eating ice-cream, they've asked: 'Hey, are you allowed to eat that?' Or they've said: 'Don't you need an injection?' or 'Are you watching your dose?' Then yes, maybe I didn't get it right, so they were trying to help me control it even better. (Interview -I-8) With regard to nutrition, peers try to help adolescents with diabetes to avoid unhealthy or non-recommended foods, without breaking the group dynamics, that is, peers also consume these types of food. Though food dynamics continue in the group, supportive attitudes arise among these peers: intake restrictions, previous warnings, food planning specifically devised for the adolescent with diabetes, etc.: As they are aware of my disease, they don't prepare high-carb food, and use many more greens.
For example, the other day I was having lunch at a friend's and they prepared me a minced salad dish with tomato, onion, green pepper, vinegar and tuna... I ate it all. They ordered some pizzas... he he. (I14) A delicate situation for adolescents with diabetes is related to alcohol consumption. In general, affiliation is fostered through responsible drinking, either led by the adolescents with diabetes themselves or by their closest circle. Though these attitudes may be questioned as protective roles because they encourage alcohol consumption, adolescents with diabetes identify them as supportive. Two people interviewed acknowledge that, though their friends tried to prevent them from drinking alcohol, they rarely succeeded. 'Test yourself, yo.' 'Hey, eat something so you don't get sick.' 'Girl, check your sugar...' But it was not like 'Don't drink at all' but 'If you do it, do it right.' And that's support; my friends are supportive with this issue. (I11) I always said: 'I can't drink.' But then I always did it, so they said: 'Next week no alcohol because it's bad for her.' But then the weekend came and they drank again, and I did the same. (I13) Peers adopting the protective role during these moments try to suggest the type of alcoholic and non-alcoholic beverage (i.e., light soft drinks or Coke Zero®) in order to adapt consumption to the requirements of their friend with diabetes. Well, my friends care, right? And if they have to buy Coca-Cola they bring Zero®, but when... a Mojito bottle is five euros and gin is seven, and if you mix mojito and 7UP we had a bowl like this [arms wide open] and gin is just for two... I cannot impose to buy what I want. (I11) In the context of self-care, adolescents with diabetes need to perform capillary blood glucose tests, take insulin injections, or use an insulin pump, all procedures that often have a certain impact on both self-esteem and self-image.
It is common for adolescents with diabetes to hide these practices from their peers, out of shame or embarrassment, but there is usually a close friend to share these situations with, who plays a protective role offering company and emotional support. I was ashamed of giving myself a shot in front of them, I don't know. When I needed my insulin dose, I used to wait until one of my girlfriends needed to go to the bathroom [...] I normally used to go with a friend who knew... A close friend. (I10) Besides the closest friend, the rest of the peers also remind adolescents about the need to monitor insulin administration, and they even offer to perform the procedure themselves, thus learning about treatment management. Though adolescents with diabetes prefer to undertake their self-management themselves, this type of attitude on the part of peers is really appreciated. Peers who adopt a protective role get genuinely involved and actively ask for information about how to proceed. My friend asks me to show her everything, you know, testing, insulin, Glucagon, whether she needs to call an ambulance... (I7) The fear of hypoglycemia is perceived in most of the discourses, and peers specifically show concern about glycemic control and about how their friend with diabetes feels. As they become familiar with the situation, they are able to anticipate hypoglycemia episodes, warning their friend or alerting people around, or they act directly, providing sugary foods. All my groups of friends have been informed about this, so they have been helpful. So, when they feel I'm acting weird they always realize and tell me that my sugar is low. (I3) Also, if I felt dizzy when I was with my classmates, they told our teachers about it [...] They even explained everything to them; I didn't have to tell my teacher that I needed to eat something.
(I9) --- Indifferent Role The indifferent role involves a lack of action or supportive behavior with respect to diabetes, thereby leaving adolescents to be autonomous and self-caring. In contrast to the protective role, peers adopting indifferent roles do not try to prevent certain behaviors on the part of the adolescent with diabetes, which could translate into a lack of health-promoting attitudes. However, this is perceived by adolescents with diabetes as inclusive, as no difference with respect to the peer group is highlighted. Moreover, adolescents with diabetes perceive indifferent roles as an opportunity to feel independent, in contrast with the attitudes perceived from their families. If they have to help me they would, but it was my business, they cannot be like 'Did you take your insulin?' [silence] No. [...] But it is something they were clear about, and they knew I had to take full responsibility. It was mine and no one else's. (I2) Although indifferent roles imply the absence of direct action, adolescents with diabetes describe these peers as informed and aware of the procedures in case of complications, while respectful of their autonomy: Normally they don't say anything about it, but I knew they were... watchful, you know? They were observing me, watching this or that, but saying nothing... Well, except if something happened, and then they did something. But in general, I could be on my own, and they were ok. (I6) --- Offensive Role Peers adopting offensive roles display discriminatory behavior towards adolescents with diabetes merely because they suffer from this disease and need to undertake self-care. Consumption of food identity markers such as hamburgers and pizzas represents a source of conflict for these adolescents. However, food with high levels of simple carbohydrates becomes the major source of discrimination, thus perpetuating the myths and beliefs that people with diabetes cannot consume any sugar.
Some peers adopting the offensive role are convinced of the impossibility of sugar consumption on the part of adolescents with diabetes. They take for granted that these adolescents are not going to consume this type of food, and therefore that they do not have to share it. These situations are usually managed in a humorous context, but adolescents with diabetes may perceive them as mockery. Then they started with cruel jokes associated with diabetes, typical of people at that age, right? [...] If we decided to go to a sweet shop, maybe I did not want to buy anything, but just hang out with them, as we did every Friday. But then one of them told me that I could not go in because I was diabetic. And I was thinking like 'is he making fun of me or is he trying to protect me?' And I said to him: 'Don't you worry, I'm not buying candy.' And he was like: 'No, I say you are not allowed in because you are diabetic.' And then he said: 'Look, the diabetic wants to enter there...' [mocking tone] I was cross with him, like, forever. (I1) Besides nutrition, other situations that lead to discrimination against adolescents with diabetes are those related to insulin therapy and glycemic control. Many times, peers adopting the offensive role deliberately compare adolescents with diabetes to drug users, due to the practice of injecting. This connection is established both inside and outside the school context and usually takes the form of scorn and derision, deploying a whole range of offensive terms to address them. When I'm in the street and I need a shot, they shout at me: 'Don't smash yourself here, you crackhead', and that kind of stuff; I felt embarrassed. But also [clears her throat] at the beginning, I was afraid. I was afraid of being different, you know? What really worried me, mmmh... was that people gave me weird looks when I needed my insulin injection, or that they stared at me.
(I11) In this situation, the adolescent with diabetes claims to have felt bewildered, as she cannot fathom that kind of comparison with regard to a therapeutic procedure (insulin injection) essential for her health. Frequently, these experiences may be painful for adolescents with diabetes, who confess to feeling scared and embarrassed. Besides, these situations may lead to conflict, direct or indirect, or to avoidance of social contact with peers adopting these attitudes. Thus, it can be concluded that offensive roles adopted by peers do not favor group affiliation and do not offer social support; on the contrary, they may lead to situations of verbal or physical abuse. Such is the emotional impact of the offensive role on adolescents with diabetes that it may endanger their health through self-care negligence. A common illustration occurs when peers ridicule hypoglycemia symptoms and manifestations, causing adolescents to feel embarrassed and therefore to avoid any action to reverse the situation, in order to escape subsequent mockery. Ultimately, the offensive role favors situations of exclusion and undermines the psychosocial well-being of the adolescent with diabetes. I have had to fake how I was feeling when my sugar was dipping low. So they don't say... [sad tone] so they don't laugh at me, as some of them did when I was all sweaty and white, because this is how you look, pale as a sheet. (I12) --- Discussion In their process towards social integration, adolescents with diabetes aim to follow group dynamics, trying not to feel different with respect to their peers [54,55]. Up to this point, previous research [56,57] has tried to relate peer influence to treatment adherence, although this relation remains poorly established [58]. In this sense, peer support does not appear clearly linked to optimal glycemic control, but peer conflict is closely linked to a worsening of glycemic control and self-care [58].
Thus, conflict may be considered to have a deeper impact than support [59]. The systematic review by De Wit et al. [60] claims that family support for adolescents with diabetes is by now a well-defined contribution. However, it is not clear whether peer influence is negative or positive: on the one hand, peers may complement the support offered by families, but on the other, social conflict among peers (incongruity between the behaviors of the adolescent with diabetes and those of the group) may lead to negative results regarding diabetes. Further evidence on how peers influence health and risk behaviors in adolescents with CNCDs [25,56] is still needed. In this sense, La Greca et al. [25] highlight the need for an insider's appraisal of how and why adolescents with a CNCD succeed or fail in their social relations, and here the study of roles is crucial. With respect to these roles, Rankin et al. [61] and Kyngäs et al. [62] highlight the presence of roles similar to the ones observed in this study (Table 1). In the study by Rankin et al. [61], peer roles are classified into three types of support: normalizers, monitors and prompters, and helpers. Those who do not offer support are labeled as insensitive and unsupportive peers. For their part, Kyngäs et al. [62] distinguish three supporting roles in their study: dominating, silent, and irrelevant. Silent support, described by Kyngäs et al. [62], consists of a change in group dynamics towards a healthy lifestyle, avoiding some types of food such as sweets, normalizing diabetes and facilitating peer integration. The present study did not observe significant changes in group dynamics under the protective role, although in food-related contexts peers tried to reduce the intake of adolescents with diabetes, or even to prepare something more appropriate for healthy eating. In the second instance, the roles of normalizers, monitors and prompters, and helpers proposed by Rankin et al.
[61] come into play in self-management situations (capillary blood glucose testing and insulin administration), in complications (basically, potential or actual hypoglycemia), and in general as emotional support and backing. Similarly, qualitative studies by Comisariado et al. [22] and Marshall et al. [44] highlight that, in some cases, when adolescents reveal their diabetes diagnosis, they receive social support from their friends, and this fosters positive attitudes on the part of the peers. All these support actions are also encompassed by the protective role and foster group integration. The definition of the indifferent role differs from the irrelevant support proposed by Kyngäs et al. [62], which contemplates no direct peer influence according to the perceptions of adolescents with diabetes. In this regard, this research cannot rule out the possibility of such influence: although it is not explicitly manifested in any discourse, a certain degree of gratitude towards the peer group is observable, and therefore some influence may be inferred. Pendley et al. [45] observed that peers lack specific knowledge about the daily management of diabetes, which may result in two types of behavior: absence of support or neutral support. Neutral support, coincident with the indifferent role presented here, consists of not establishing a differentiating barrier between the adolescent with diabetes and the peer group, thereby fostering inclusion, but bringing a duality: the adolescent with diabetes may perceive this behavior as a form of emotional support which encourages risk behaviors or behaviors which do not favor self-care. In this sense, Marshall et al. [44] highlight that, according to the perception of adolescents with diabetes, the support perceived is limited due to their peers' lack of training. I don't know, I never felt they were making a fuss around my issue, I mean, they maybe asked me 'How are you?' 'Can you eat this?'.
Those are things I've been asked... But just one at a time; when I answered them, that was the end of it. (I15) The insensitive role proposed by Rankin et al. [61] is similar to the offensive role in this study: peers show no empathy and discrimination appears. In contrast to that analysis [61], this research found evidence of humiliating attitudes and insults, resulting in self-care restraint in order to avoid peer rejection. This situation is also described by Gürkam et al. [63], who show that distress, frustration, and helplessness lead adolescents to hide their diagnoses. Finally, the dominant role described by Kyngäs et al. [62] involves peer pressure, so that adolescents with diabetes feel 'forced' to follow group behaviors, ignoring their disease in order to achieve social integration rather than treatment adherence. Although there is no direct correlation with the roles proposed in this study, the dominant role [62] is to some extent present in the behaviors adopted by both the protective and indifferent peer roles described here. On the one hand, peers encourage the adolescent with diabetes to carry out group practices, including the consumption of toxic substances (in a controlled way) under the protective role. On the other, if peers adopt an indifferent role and do not get involved, self-care on the part of the adolescent may be neglected in favor of group dynamics. Ultimately, offensive roles may cause adolescents to comply with the peer group through passive behavior, for fear of reprisals or rejection. Under this assumption, this may amount to a dominant behavior such as the role suggested by Kyngäs et al. [62]. Most studies on peer influence in children or adolescents with diabetes do not cover the use of alcohol and other drugs, as they focus on treatment requirements (exercise, diet, insulin therapy, and capillary blood glucose testing), glycemic control, and adherence and quality of life achievements [56,59,61,62].
Finally, with respect to the offensive role and discrimination, there is a lack of scientific evidence [56,59,62]. The insensitive role observed by Rankin et al. [61] shows similar behaviors, although, as it analyzed a younger sample (preadolescents), the results are slightly different. According to the results, adolescents with diabetes ascribe great importance to feeling 'normal' and not different from their peers and, unfortunately, they show limited social success [22]. Continuous self-care and hypoglycemia disruptions easily cause stigma, and all practices associated with disease management (capillary blood glucose tests, insulin injections and dietary restrictions) become targets for bullying and conflict situations [58]. One of the stigmas most commonly highlighted by adolescents with T1D is the lack of information about the disease, which is consequently linked to the supposed impossibility of consuming certain foods or performing certain actions [64]. According to Browne et al. [65], adolescents with diabetes suffer from the lack of knowledge and misinformation of the rest of the population, which may be caused by inaccurate media coverage. Consequently, during meals with friends, adolescents with diabetes have to cope with social prohibitions, resulting in a negative impact on their identity caused by their peers' failure to distinguish the recommended guidelines for the disease. Therefore, inner conflict arises and some adolescents avoid disclosing their disease [22,65,66], as revealed by participants. People sometimes are a pain in the neck [...] It's a shame that due to misinformation, uhm... they are bothering you and in the end making you feel really bad. 'Why are you eating that?' And then you have to explain everything, all the time... It's like, I eat this because I want, damn. 'But you can't.' Well, I can, and that's the end of it. It's exhausting.
Ultimately, the results obtained allow an overview of both the social support perceived by adolescents with diabetes and the identification of peer roles. The influence of these roles, rather vague in the previous literature [22,56,59,60,62], has been clarified in this pilot study. In contrast to results obtained in other studies [44,45,62], the present work observes that the protective role not only fosters healthy and self-care behaviors which facilitate integration, but may also play a crucial part in common scenarios of adolescence (the consumption of toxic substances, for example), encouraging 'controlled' consumption to facilitate and improve the integration of the adolescent with diabetes. However, though this attitude may favor integration, it has a negative impact on the adolescent's self-care. This duality is also observable in the indifferent role, matching the results of the study by Pendley et al. [45]. By contrast, this role duality has not so far been identified as the origin of the debate on whether or not social support from peers is a positive influence for adolescents with diabetes. --- Limitations The limitations of this study should be acknowledged. Firstly, the research is a pilot study and was carried out only in Seville. Although we reached the saturation point, our sample is geographically limited. Secondly, we chose our sample on the basis of convenience, which makes it difficult to extrapolate wider conclusions from the results obtained. Thirdly, this study focused on the perceptions of adolescents with diabetes, and no peer interviews were conducted.
An analysis of peers' perceptions would provide a contrasting view and therefore further conclusions for the study of peer affiliation in the case of adolescents with diabetes. Finally, during some interviews, participants expressed expectations about what the ideal behavior of their peers might be. However, this aspect requires another in-depth study, given the myriad possibilities conveyed by participants' subjectivity. --- Conclusions Peer influence through specific roles is crucial for the group affiliation of adolescents with diabetes. On the one hand, both the protective and indifferent roles facilitate the integration of the adolescent with diabetes. The protective role also fosters controlled consumption of food identity markers and/or alcohol. The indifferent role ignores the disease, without acknowledging the personal consequences of these practices. Though seen as supportive by adolescents with diabetes, these peers pose a dilemma for them: they have to choose between following common social practices in order to feel part of the group or leading a healthy lifestyle. On the other hand, the offensive role generates stigma and social conflict, which is not conducive to the integration of adolescents with diabetes and may even jeopardize their physical and emotional well-being. This type of offensive behavior, according to adolescents with diabetes, may be the result of society's lack of health information and education regarding T1D. The number of studies evaluating how adolescents with diabetes perceive their peers' behaviors is very limited, and in no case does this literature show a categorization of results as simple and comprehensive as the one in this pilot study, which highlights the importance of the duality conveyed by both the indifferent and protective roles in the behavior of adolescents with diabetes.
Thus, the study offers a possible explanation of why qualitative studies have so far failed to complete the overview of peer influence in terms of positive or negative effects. The innovative approach focuses on the results regarding the protective and indifferent roles and the possibility that they may introduce, simultaneously, both a positive and a negative influence. This positions our study as a foundation for further, more extensive research which may not only confirm the duality of these roles, but also provide an in-depth, extended analysis of the offensive role and its consequences. The practical implications of this research may be observed at several levels. In the field of research, it offers a possible explanation for a phenomenon hitherto unclear. Specifically, the acknowledgment of specific peer roles facilitates precise health care interventions which will help to improve not only the affiliation of adolescents with diabetes, but also their coping with social conflict scenarios, thereby improving their psychosocial well-being. In the field of education, it is possible to offer a conceptual benchmarking framework for the types of behavior that may arise in the classroom towards the student with diabetes. This could facilitate the planning of prevention strategies on the part of teachers in order to avoid discriminatory attitudes posed, for example, by the offensive role. These professionals could thereby foster the integration of adolescents with diabetes in the classroom. Together with teacher training, the applications of this study could facilitate understanding on the part of parents and families of adolescents with diabetes. Adolescence is a critical time, in which knowing what the adolescent is really doing or how they cope is very difficult for parents, since adolescents rely on their group of friends for confidence and comfort.
Therefore, an awareness of the possible behaviors that these friends may adopt will help parents to guide them, understanding some behaviors and the outcomes regarding diabetes at this stage. With respect to adolescents with diabetes, knowing the roles of peers within their social sphere in advance gives them the possibility of working ahead on coping strategies for the different role behaviors. When this knowledge is approached from a multidisciplinary perspective (educational sciences, health, and psychology) together with their family, adolescents with diabetes may achieve considerably more effective group integration. Finally, it is important to be realistic when appraising the adoption of self-care behaviors in adolescents with chronic diseases. Even in the best cases of social integration, and when peers are well informed about the disease, adolescents with diabetes can easily adopt risky behaviors perceived as 'controlled' by them and their peers. --- Data Availability Statement: The data are not publicly available due to privacy or ethical considerations. --- Funding: The APC was partially funded by financial support for the consolidation of research groups by the Andalusia Regional Government. --- Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the Hospitals Virgen Macarena-Virgen del Rocío in the session held on 30th April 2019 (CEI 08/2019). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. --- Conflicts of Interest: The authors declare no conflict of interest.
I. Introduction This issue of OxREP is concerned with economic disparities between regions in Europe and in the United States. Significant regional inequalities of income and wealth exist in every Western European country and in North America, but their extent varies from country to country. Exactly how one country compares with another depends upon the spatial unit of analysis studied, as well as upon the measures of economic performance and inequality used. On virtually all measures, regional inequality is particularly high in the UK, higher even than in the US. Regional inequalities change across time. In both Europe and the US it is generally thought that they tended to narrow from the early 1900s until about 1980, since when they have increased. The articles in this issue are concerned with: why we should care; what exactly it is that we are measuring; how and why regional inequalities have evolved over time; and what policy-makers have done and should do to address the problem. --- II. Why do spatial inequalities matter? A simple classical or neoclassical view of the world would predict that regional disparities were transient. In time, markets would adjust to bring different areas closer to equality. Labour would leave poorer areas for richer areas and capital would move in the opposite direction. However, in regional economics, as in other relevant disciplines, there are contrasting strands of literature. As Van Dijk and Edzes (2016) put it: 'In economic geography circles, the debate is between space-neutral theories, where labour is seen as highly mobile, and place-based approaches that emphasize the underdevelopment traps associated with location-specific externalities and (the) potential market failures.' The latter approach is illustrated by Patricia Rice and Tony Venables (2021, this issue), who remind us of a number of reasons why convergence forces regarding labour and capital mobility might be weak to non-existent.
Because labour markets tend to be national, there is relatively little scope for wage adjustment and this lack of adjustment will dampen the willingness of investors to move into the poorer areas. The people who move out are likely to be the young and the skilled, meaning that the remaining workforce is relatively unattractive to inward investors. Rice and Venables go on to argue further that: places that have experienced negative shocks may have adverse skill and demographic characteristics, and also weak fiscal positions, poor public services, and social and health problems associated with low employment rates. Many of these are cumulative, involving vicious feedback mechanisms with multi-generational effects. Thus, firms are reluctant to move into such areas. In some senses, this is the inverse of the agglomeration argument. Successful areas are sustained by clusters of firms, comprising a mutually beneficial ecosystem, that are resilient after economic crises, while poorer areas can be trapped in a low-productivity, low-income equilibrium, in particular when they are hit by national macroeconomic recessions. Disadvantaged regions fall into three categories: those which have been relatively poor in the very long term; those which failed to adjust to structural change; and those disproportionately affected by a macroeconomic shock. The first category would include the likes of the Highlands of Scotland or large parts of southern Italy. The second would include many former centres of manufacturing in Europe and North America. The third usually (but not always) contains already disadvantaged areas whose disadvantage is exacerbated by a macro shock. The articles in this issue by Joan Rosés and Nikolaus Wolf (2021) on Europe and by Trevon Logan et al. (2021) on the US encompass all three types. Thus, regional inequalities can be persistent and self-sustaining. Some regions become and remain 'left behind'. 
Apart from the economic consequences for many individuals across the generations, there are broader social and political implications. These areas often exhibit poorer health, higher mortality rates, lower educational attainment, and greater crime. In some countries, deprivation has become associated with various forms of political extremism. Recent studies on the geography of discontent (De Groot, 2019; Dijkstra et al., 2020) suggest that the rise of populist political parties associated with anti-establishment voting, anti-EU voting, and Brexit is concentrated in places that face population and industrial decline, have low land rents, high unemployment rates, and low levels of education of the workforce. It might also be the case that too much regional divergence acts as a drag on the growth of the national economy (de Dominicis, 2014). For example, it is likely that labour force capabilities are under-utilized in low-productivity regions. Given limited individual mobility, for both economic reasons (high rents and property prices in high-productivity regions, for instance) and non-economic reasons, this means that human capital is being wasted (Holmes and Mayhew, 2015). All these negative associations with regions that are lagging, in decline, or impoverished have led to the coining of the phrase 'places that don't matter' (Rodríguez-Pose, 2018). --- III. Measurement The variables upon which we concentrate are GDP per capita and disposable income per capita. The first is measured at the workplace. The second is a household measure and therefore relates to the place of residence. Different authors in this issue employ different spatial units of analysis depending on the specific phenomena they are studying. The OECD has a standardized spatial classification system, the intricacies of which are well explained by McCann (2020). TL2 is the broadest level, describing large regions. In the UK, for example, there are 12 of them.
TL3 digs down into areas within these large regions-the UK has 173 of them. The third, residence-based, measure is of metropolitan urban areas containing more than half a million people and based on 'commuting flows and contiguity'. There are 17 such areas in the UK. Eurostat and the European Commission employ a slightly different classification, labelled as NUTS 1, NUTS 2, and NUTS 3. McCann (2020) compares these with the OECD classifications. He shows that for the UK and three of the EU countries (Germany, France, and Belgium) NUTS 1 corresponds with TL2. For another 14 countries NUTS 2 is not much different from TL2. NUTS 3 and TL3 more or less correspond for all EU countries. Unless one defines spatial areas very narrowly, then the issue of intra-area inequalities becomes potentially important; and these inequalities vary across countries. As McCann writes: inequalities within the UK are also across such short distances with enormous local productivity variations evident within just a two-hour driving time, whereas within Spain comparable variations would only be evident across a seven-hour driving time, and in Italy and the United States across a 10-hour driving time. This observation is reinforced in the case of the UK by the 2070 Commission: The long-term patterns of inequalities are reflected at a neighbourhood level. This is highlighted in the research by the Geographic Data Science Lab, University of Liverpool. There is considerable intra-regional variation in the distribution of struggling neighbourhoods within more disadvantaged regions. The local patterns in neighbourhoods mirror regional disparities, illustrating the way inter-and intra-regional inequalities are reinforcing. (UK 2070 Commission) Reflecting such observations, Enrique Garcilazo et al. (2021, this issue) develop what they describe as a 'functional typology' of the OECD's TL3 for Europe and the US. 
They sub-divide TL3s into five categories: large and medium metropolitan regions and three types of region differing according to the size of the metropolitan areas to which they have access. This enables them to conduct a fine-grained analysis of the contribution of different types of regions to national economic growth as well as of the impact of the 2008 recession on different types of spatial entities. --- IV. The changing patterns of regional inequality Rosés and Wolf (2021) give a nuanced picture of the patterns of convergence in 16 countries in Europe from the beginning of the twentieth century until 2015. Initially there was little change from high levels of dispersion in the inter-war years. The significant decline in dispersion came in the years after the Second World War until about 1980. Rosés and Wolf argue that in many areas this was not driven by the classical forces of convergence, but rather by post-war reconstruction and structural change in those regions that had suffered physical destruction and massive population movements: parts of Germany, Austria, Belgium, Italy, the Netherlands, and eastern France. From 1980 there was a significant increase in regional inequality. Importantly, however, they find that many islands of prosperity have emerged within otherwise lagging regions. Within this general picture, there was diversity of regional experience. Taking snapshots in 1900, 1950, 1980, and 2015, Rosés and Wolf describe what they term a core-periphery pattern in 1900. The regions of England, north-western Europe, and Switzerland were richer (in terms of GDP per capita) than average. The regions of France and central Europe were close to the average, while Scandinavia and southern Europe contained many poorer than average regions. Over time the spatial correlation has declined and a more complex picture has emerged.
By 2015 there were metropolitan areas and islands of prosperity, such as Paris and Madrid, which were surrounded by regions with relatively low average GDP per capita. Most regions of England had experienced a relative decline. Ireland (mainly Dublin) and many parts of Scandinavia had become richer than average. Logan et al. (2021) paint a similarly detailed picture for the US. General convergence has halted in the last three or so decades. Some of the dynamics of regional inequalities are driven by major cities (see, for example, an earlier issue of OxREP on urbanization in developing countries (2017, vol. 33, no. 3)). Nevertheless, there are cities that buck the trend. The south-east of the USA contains cities which are 'among the most innovative and dynamic regions in the country': Raleigh-Durham, Nashville, Atlanta, and Richmond. At the same time there are struggling cities in more prosperous areas: Oakland, Milwaukee, Detroit, and Baltimore. Some disparities develop over time, others are abrupt structural breaks. With respect to the first, Rosés and Wolf describe how European regions from 1900 to date experienced a gradual and steady decline of agricultural employment shares, a rise in industrial employment shares until the 1970s followed by a decline, and a rise in the employment shares of services over the whole period. The expansion of industrial and services employment was very uneven across regions, while agricultural employment became significantly more concentrated. Today's regional disparities in income and job characteristics, as well as massive differences in agglomeration effects and human capital endowments, are the consequence of these historical developments. Logan et al. describe similar long-run developments in the United States, where in the early twentieth century some frontier regions had high levels of GDP per capita. They put particular stress on the consequences of long-run changes for the current geographical distribution of human capital.
As they put it: 'Regional success is now a story of higher education, human capital, and the rising tech and service sectors.... Understanding regional equalities today requires understanding these dramatic differences in human capital across space in the United States.' Not only structural trends but also structural breaks may lead to new disparities between regions that affect regional development. Examples are the abolition of slavery in the United States, the Second World War, and economic shocks such as the oil crises in the 1970s and the 2008 global financial crisis. These breaks may come with both opportunities and threats. With regard to the developments in the post-slavery era, Logan et al. state that 'the South would not develop the educational, civic, and financial institutions needed to promote innovation and diversify away from cotton.' By contrast, as is shown by Rosés and Wolf, reconstruction after the large-scale destruction during the Second World War stimulated regional economic growth on the western European continent. Any classical convergence forces at work would have had little impact but for the influence of a stable political environment and the massive Marshall Aid programme. Ironically, lack of access to Marshall Aid may have been a reason that the UK fell behind after the Second World War. Rice and Venables (2021) explore the impact of adverse economic shocks in the 1970s. Using data for UK local authority districts (LADs), they investigate the impact of the large and rapid fall in the share of the secondary sector in national output in the UK from 40 to 30 per cent in the 15 years from 1966 to 1981. They argue that if the classical forces of convergence had been at work, we would have expected to observe a negative relationship between the size of the shock to employment rates in the LADs and the subsequent growth of employment. They do not find any such relationship.
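The Rice and Venables test just described reduces to checking the sign of a regression slope: under classical convergence, districts that suffered larger negative shocks should show faster subsequent employment growth. A minimal sketch, with invented LAD-style numbers rather than the authors' data:

```python
# Minimal check of the "classical convergence" prediction: regress subsequent
# employment growth on the size of the earlier shock and inspect the slope's
# sign. The LAD-style figures below are invented for illustration.

def ols_slope(x, y):
    """Slope of the least-squares line of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

shock = [-10.0, -8.0, -5.0, -2.0, 0.0]    # fall in employment rate, 1966-81
later_growth = [0.2, 1.1, 0.4, 0.9, 0.5]  # employment growth decades later

slope = ols_slope(shock, later_growth)
# Classical convergence would imply a clearly non-zero slope (worse shock,
# faster rebound); a slope near zero mirrors the paper's "no relationship".
```

With these made-up numbers the slope is close to zero, which is the pattern the paper reports; the point of the sketch is only that the test itself is a one-line regression.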
Two-thirds of the Local Authority Districts with the highest deprivation rates in 2015 had experienced large negative shocks about 40 years before. They also found that 'the places that experienced negative shocks were not, on average, drawn from atypical starting points'. In other words, some previously fairly prosperous regions shared the pain. This is but one example of how fairly prosperous areas can succumb to fundamental shocks and of how difficult it can be to recover. --- V. The anatomy of regional inequalities Rosés and Wolf (2021) distinguish between 'geographical factors' and 'institutional factors' that account for regional advantage or disadvantage. Disparities in geography had large and persistent effects on past regional economic developments. Although these so-called 'first nature' disparities may not be as important as they used to be, reminders of their past influence can still be very much present nowadays. Using data on 173 European regions in 16 countries between 1900 and 2015, Rosés and Wolf divide geographical factors into two types: natural and man-made. Favourable natural factors include climate, soil quality, access to coal fields, and proximity to large seaports. In a similar vein, in their long-run analysis of regional inequalities in the USA from the eighteenth century to date, Logan et al. (2021) describe the geographic advantages of waterways and soil suitability for cotton as examples of regional endowments of natural resources that once led to fast growth in some areas. Institutional or second nature factors are more the consequence of previous action by economic and governmental agents. Rosés and Wolf demonstrate how these 'second nature' disparities between regions also affect regional development. These disparities may relate to institutional differences, such as simply the country to which a region belongs, whether the region is a capital region, or whether the country is part of the European Union and/or the Euro-zone.
A second nature disparity of particular importance is market access: in other words, the size of nearby regional markets, since they reflect purchasing power, which depends not just on the size of the population but critically on its employment patterns and income. Agglomeration effects and increasing returns to scale also fall into this category. Increasing returns to scale were important for the rise of manufacturing industries but were also crucial, as Logan et al. argue, in the institutionalized slavery system of the South of the US before the Civil War. As the terms suggest, second nature disparities were frequently the consequence of first nature disparities. For example, proximity to coastlines or coal fields was often associated with the emergence of metropolitan regions. The huge regional disparities in access to cities of different population sizes and densities are the starting point of the analysis by Garcilazo et al. (2021). They explore the contribution of different sized regions to GDP growth, categorizing five types of region: (i) regions with a city of more than 1 million people, (ii) regions with a city of more than 250,000 people, (iii) regions near a city of more than 250,000 people, (iv) regions near a city of less than 250,000 people, and (v) remote regions. Countries differ immensely in how their populations are distributed across these regional types. Different densities are related to agglomeration economies and to regional inequalities in productivity, wages, and living standards. Disparities in population densities and sizes determine to what extent the contribution to aggregate growth is more concentrated in metropolitan regions or more distributed across regions of different sizes. They find evidence for 'agglomeration economies in regions with large cities: in the US, EU-15, and EU-25 their contribution to aggregate growth is higher than their population share'.
However, medium-sized cities play a larger role in Europe than in the US, where conversely the regions with the largest cities (of more than a million people) make a greater contribution than in Europe. Interestingly, the contribution of cities to growth is less volatile in the new member states of the EU than in the older member states. They go on to suggest two different country types: (i) countries with 'metro-dominated growth contributions', in which regions with large cities dominate the contribution to national economic growth (Finland, France, Estonia, Greece, Lithuania, Italy, and the US); (ii) countries with 'mixed growth models', of which there are two varieties. The first comprises those countries with 'decreasing size-monotonic growth contributions', where regional growth contributions decrease with the main city sizes of the regions (Austria, the United Kingdom, Germany, the Netherlands, and Slovenia). The second have 'mixed growth regimes', 'where all regions contribute to growth in a roughly balanced way' (the Czech Republic, Denmark, Hungary, Belgium, Latvia, Portugal, Slovakia, Sweden, Poland, Spain, and Norway). They conduct a similar exercise for regional contributions to national productivity growth and find two broad patterns: (i) concentrated countries, where most productivity growth is contributed by the 'top productivity regions' (the Czech Republic, Belgium, Slovakia, Sweden, France, the UK, Greece, Lithuania, and the Netherlands); (ii) distributed countries, where 'catching up regions contributed the most to aggregate productivity growth' (Austria, Denmark, Germany, Estonia, Spain, Finland, Hungary, Italy, Latvia, Portugal, Slovenia, and the United States). Such results raise two questions. The first is what impact a levelling of regional performance might have on a nation's overall economic growth.
Clearly any government would hope that the productivity of a poorer region can be enhanced via policy interventions without any cost to the productivity of more successful regions, but this might in fact not be achievable. The second is where future national economic growth will come from. Noting the sort of evidence presented by Garcilazo et al., McCann (2013), inter alia, argues that the dominance of core cities and therefore of core regions may well fall in the future quite independently of any policy initiatives. Although 'modern globalization' has made geographical proximity important for high-value knowledge activities and for service activities reliant on trust, he suggests that in the future there will be many more opportunities for non-core regions. We explore this issue further later in this article. It is not only disparities in population size that matter. So do disparities in population characteristics, not least human capital broadly defined. Böhm et al. (2021, this issue), studying West Germany between 1975 and 2014, provide one specific example. Their starting point is that Germany as a whole has seen rapid population and workforce ageing. Using a panel of labour market regions, they find that 'workforce mean age has considerable negative effects on the wage returns to age', which are arguably stronger in markets with more non-routine jobs. They also find that the employment rates of older workers tend to fall with mean age. These effects vary significantly across German regions and Böhm et al. explore this further. Workforce ageing can be driven by both demand and supply influences. It may be that the demand for older workers falls in a region or that the supply of younger workers falls because of declining birth rates or outward migration. Low-income regions and those in relative decline tend to lose younger people, who leave in search of better jobs but also of a more appealing lifestyle in the more vibrant urban centres.
Böhm et al. also find a significant role for increased relative demand for younger workers, but only in these urban centres. As far as declining or left-behind regions are concerned, in most countries their working populations get older and, if Böhm et al.'s results hold beyond Germany, there are harmful consequences for these older workers. An important dimension of human capital is leadership skills and capabilities. The more devolved responsibility for regional strategies and their implementation becomes, the more important are the qualities of local leaders. Paul Collier and David Tuckett (2021, this issue) discuss one aspect of this. They compare the political economies of Wales and the West Midlands of England. They consider the role of narratives in 'forming investment expectations', how a particular set of expectations can trap regions in 'low income equilibria' and limit 'the scope for regional leaders to reset those expectations'. Whether an area settles into a low-income or a high-income equilibrium will be the consequence of a whole set of interdependencies: between firms operating in the tradable sector and those in the non-tradable sector, between firms operating internationally, nationally, and locally, between the decisions made by the commercial sector and those made by the education sector, and by local government affecting things like infrastructure and local taxation. However, as Collier and Tuckett put it, 'resetting a low-income equilibrium may require a coordinated change in the narratives prevailing' in these different interest groups 'that have only limited interaction'. They pursue these ideas by interviewing representatives of the business communities in Wales and the West Midlands. The main difference they found between the two regions was that narratives were overwhelmingly negative in Wales: 'narratives of identity suggest that identities are not merely fragmented, but actively oppositional.
A predominant explanation for economic failure is normative: others are blamed within and outside the society, resulting in a passive mentality of victimhood'. Under what conditions, they ask, can a local leader 'reset' attitudes and actions in the local economy? The first requirement is that he must have the trust of the different parties. The second is that he has a clear, flexible, and resilient approach. Only then can what they call a Conviction Narrative be achieved. The importance of leadership is also emphasized by Ties Vanthillo et al. (2021, this issue) in their assessment of the changing nature of regional policy in Europe. They discuss the evolving features of regional policies in four periods from the 1950s to date. The current period is characterized by place-based policies, in the construction and implementation of which they argue that political leadership (among other factors such as institutional coordination and strategic intelligence) is essential for the quality and implementation of effective measures to stimulate local development. Populist politicians in a number of European countries have blamed membership of the EU in general, and membership of the Euro area in particular, for rising inequality among regions and households. There is a well-developed literature (see, for example, Beetsma and Giuliodori (2009)) considering the impact on member countries of being members of a common currency area. Not having control over one's own exchange rate deprives a country of an important tool of macro policy. The implications for its economic fortunes are uncertain, depending as they do on other policies adopted by the country itself and by other members of the common currency area. Thus, for example, the impact of the euro on the distribution across countries of GDP per head is highly uncertain. Less studied is the impact of the euro on inequality between households and, by implication, between regions within the member countries.
This is what Florence Bouvet (2021, this issue) investigates for the first time. She uses a synthetic counterfactual methodology, which matches individual euro countries with non-euro countries possessing similar characteristics in the period before the introduction of the euro. She then compares their trajectories after the introduction of the common currency, investigating how income inequality (gross and net of taxes and transfers) has changed within each of the Euro Area countries studied: Austria, Germany, Luxembourg, Belgium, Greece, the Netherlands, Finland, Ireland, Portugal, France, Italy, and Spain. She finds that, in the absence of the euro, gross income inequality would have been lower but net income inequality would have been higher in most countries. In other words, any 'market' effects were more than offset by transfer payments. Such in-country transfer payments were doubtless enabled directly and indirectly by monies from the Social Fund and other EU sources. --- VI. Policy The advocates of place-based policies argue that general redistributive policies through the tax-transfer system, while necessary to alleviate hardship wherever it exists, provide no long-term solution for individuals in the left-behind and disadvantaged regions (Neumark and Simpson, 2015). They are also sceptical of any suggestion that in implementing place-based policies there is necessarily a trade-off between equity and efficiency. This scepticism is, in part, the consequence of their doubts about the merits of arguments for agglomeration. In the urban economics and new economic geography literatures, agglomeration effects are typically supposed to be strong in densely populated areas because of 'sharing, matching and learning' (Duranton and Puga, 2004). This is related to the highly concentrated pools of labour and suppliers, to the excellent infrastructure with low costs of transportation and mobility, and to the easy diffusion of knowledge and innovations.
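Bouvet's synthetic counterfactual approach, described above, can be sketched in miniature: weight the non-euro 'donor' countries so that their combined pre-euro inequality path matches the treated country's, then read the post-euro gap between the observed and synthetic paths as the estimated effect. Everything below is a toy illustration with invented numbers; the grid search stands in for the constrained least-squares fit used in practice.

```python
# Toy synthetic-counterfactual sketch: all series are invented Gini-style
# numbers, and the grid search is a stand-in for constrained least squares.
treated_pre = [29.9, 30.55, 31.2, 31.85]         # treated country, pre-euro
donors_pre = {
    "donor_a": [29.0, 29.5, 30.0, 30.5],
    "donor_b": [32.0, 33.0, 34.0, 35.0],
}
treated_post = [32.5, 33.0]                      # observed, post-euro
donors_post = {"donor_a": [31.0, 31.5], "donor_b": [36.0, 37.0]}

def pre_fit_error(w):
    """Mean squared pre-period gap for weight w on donor_a (1 - w on donor_b)."""
    synth = [w * a + (1 - w) * b
             for a, b in zip(donors_pre["donor_a"], donors_pre["donor_b"])]
    return sum((t - s) ** 2 for t, s in zip(treated_pre, synth)) / len(synth)

# Grid search over convex weights in [0, 1].
best_w = min((i / 1000 for i in range(1001)), key=pre_fit_error)

# Counterfactual post-euro path and the estimated effect (the gap).
synth_post = [best_w * a + (1 - best_w) * b
              for a, b in zip(donors_post["donor_a"], donors_post["donor_b"])]
gaps = [t - s for t, s in zip(treated_post, synth_post)]
```

The design choice that matters is the convexity constraint on the weights: it keeps the counterfactual inside the range of the donor pool, which is what makes the post-period gap interpretable as a treatment effect rather than an extrapolation artefact.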
The elasticities of productivity with respect to employment density are estimated to be in the range of 0.01 to 0.10 (Neumark and Simpson, 2015; De Groot, 2019). Subsidizing people and firms to encourage them to locate in agglomerated, high-density areas can be justified on the grounds that the social returns are higher than the private returns. This policy may benefit society as a whole, since people will move to places with high productivity rates, from where economic activity, growth, and prosperity will eventually spread or filter to the lagging and peripheral areas. However, it is far from clear that policies exploiting agglomeration economies are beneficial to society as a whole. First, it is not certain, from historical evidence, whether the filtering effects are large enough to compensate for the adverse effects of declining employment rates and brain drain in the lagging areas. Second, policy-makers probably do not have sufficient knowledge about the magnitude and the distribution of elasticities across regions and economic activities to target their investments optimally. Third, if there is not much geographic variation in elasticities, relocating economic activities will not increase aggregate production (Neumark and Simpson, 2015). Poorly designed regional policies could result in a zero-sum game whereby high investments in dense and prosperous regions come at the expense of regions in which people already feel 'left behind'. Fourth, diseconomies of agglomeration may emerge when real living standards in the prosperous areas are reduced by rising disamenities related to air and water pollution, traffic congestion, and more local crime. Thus, advocates of place-based theories are usually critical of those who put their faith in agglomeration economies, and they deny any necessary trade-off between equity and efficiency.
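The quoted elasticities can be made concrete with quick arithmetic: if productivity scales with density to the power of the elasticity, then doubling employment density multiplies productivity by 2 raised to that power. A minimal sketch using the 0.01-0.10 range cited above:

```python
# Productivity gain from doubling employment density for elasticity e:
# productivity ~ density**e, so doubling density scales it by 2**e.
# The 0.01-0.10 range is the one cited in the text.
for e in (0.01, 0.05, 0.10):
    gain = 2 ** e - 1
    print(f"elasticity {e:.2f}: doubling density raises productivity by {gain:.1%}")
```

With these values, the gain from doubling density runs from under 1 per cent to a little over 7 per cent, which helps explain why policy-makers' uncertainty about where in that range a given region sits matters so much for targeting.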
They refer to the rise and fall of big cities with large agglomerations in the past, contending that high returns on public and private investments in metropolitan areas are not self-evident. They also argue in favour of tailor-made policies that seriously explore the untapped potential as well as the threats to progress in each place. In doing so, they emphasize that the economic potential of many non-core, less developed, or declining regions is underestimated. In short, there is every reason to focus on the potential of a region to achieve a situation where it has a sustainable resilient regional economy in combination with acceptable levels and distributions of wellbeing for all its inhabitants without social exclusion. (Van Dijk and Edzes, 2016, p. 178) However, as far as the lagging regions are concerned, there is the risk of failure of supply-led interventions, since so many of them have attempted to boost sectors and activities that do not match local economic strengths and which become in perpetual need of assistance to survive. Furthermore, welfare- and support-based measures specifically aimed at sheltering inhabitants of poorer areas can have pitfalls. For example, place-based policies designed to stimulate local employment for the people 'left behind' incorporate the risk that residents from elsewhere profit from the new economic activities, raising rents, house prices, and land prices and increasing the share of in-commuters in local employment instead of lowering unemployment rates for people at the low end of the local labour market (Neumark and Simpson, 2015). In Europe regional policy is said to have changed radically in recent decades. These changes are described by Vanthillo et al. (2021). Traditional policies varied somewhat from country to country but were essentially top-down interventions.
These interventions involved tax incentives and subsidies both to encourage firms to remain and grow in the poorer regions and also to attract new enterprises, not least multinationals. In many countries, government offices and parts of state-owned enterprises were moved from the centre to the periphery. As Vanthillo et al. put it, 'the policy focus of the range of instruments used in these regions was essentially centred on influencing economic activity through industrial location'. In fact, regional policy was inextricably bound up with traditional industrial policy, which can be defined as policies to stimulate growth and productivity and to rebalance the economy by altering the sectoral mix of production. Some policies were horizontal and others vertical. The former applied to all firms, whether nationally or in a particular region. The latter were applied differentially across sectors or even firms. Many horizontal policies came to be seen as often ineffective: various forms of investment tax incentives, for instance. But it was the vertical policies which attracted particular criticism. They often involved significant public expenditure with little return. In the British context, Crafts (2010) and others described them as 'picking losers' rather than 'picking winners', too often engaging in the forlorn task of propping up ailing industries such as shipbuilding and arguably, in the longer run, making it more difficult to cope with structural change. From the 1980s the focus of policy started to change. Vanthillo et al. (2021) argue that there were three reasons for this. The first was broader political developments leading to devolution and decentralization in many countries.
There were many complex explanations for these developments, but one was a belief that central governments had not served lagging regions well and that more could be achieved by locally led initiatives; and indeed this belief was supported by emerging research evidence that decentralized systems had been associated with less inequality in regional growth rates (McCann, 2016). The second was a perception that policies centred on tax incentives and subsidies had failed, in part because they had led to competition for assistance between regions. Certainly, in England there is evidence that the North-west (a relatively lagging region) suffered from the fact that funds were poured much more profusely into other, and more severely, lagging regions like the North-east. Third was a shift of emphasis towards policies that were more tailor-made for individual regions. As we have already intimated, to these might be added a fourth: the realization that in the past too much public money had been wasted on trying to halt irreversible structural change. Influential in these developments was the European Commission and, in particular, the reform of the Structural Funds in the late 1980s. As Vanthillo et al. argue, the European Regional Development Fund (ERDF) had been 'complementary to national regional policies'. Now Brussels started to take a more central role. Emerging from this were 'smart specialization strategies'. Interventions were based on enhancing local competitiveness and not necessarily tied to conventional administrative regions. Critically, initiatives were placed (at least partially) in the hands of local actors. Key here was the requirement that, in order to receive funding, a region needed to articulate a 'strategy' for development. Vanthillo et al. describe how 'more than 120 regions in the European Union (EU) have recently designed a smart specialization strategy to receive funds from the ERDF in the 2014-20 programming period'.
Similarly, some form of regional strategy was required to access EU Structural Funds. Key to formulating a strategy was to look forward rather than backwards and to take a realistic view of what the competitive strengths of a locality might be. At the same time, domestic spending on regional policies diminished in most countries, which came to rely ever more heavily on European funding. In recent years these initiatives have fallen within the EU Cohesion Policy, whose declared aim has been 'to strengthen economic and social cohesion by reducing disparities in the level of development between regions'. The Policy accounted for no less than 32.5 per cent of the EU budget between 2014 and 2020. If strategy is to be devolved locally, then a necessary, but not sufficient, condition for success is the competence of those devising the strategy. This seems to be almost taken as given by national policy-makers. But when the competence of national policy-making generally cannot be taken for granted, assuming local competence across several geographical areas in a country seems dangerous. There is also the question of central funding of local initiatives. Is sufficient resource provided to allow local initiatives to flourish, or do the local entities have sufficient local revenue-raising powers? (See also De Groot, 2019.) Any significant shortage of funding is likely to dictate sub-optimal strategies or sub-optimal implementation of optimal strategies. Inevitably there is an unresolved tension between the roles and powers of the centre and local administrations and, at least in some countries, it is evident that the national authorities find it difficult to let go. In describing the eco-systems of poorly performing regions in the US, Logan et al. (2021) remind us of the dangers, as well as merits, of the devolved powers and decision-taking that are part of the federal structure of the country.
They write: 'Modern-day social and economic inequality is rooted in a combination of factors, including geographic endowments, agglomeration economies, regional differences in human and physical capital investments, and, importantly, persistence of past policy decisions, investments, and choices.... [a] broad set of sub-national policy and expenditure decisions falling within the domain of economic development, including education, social safety net transfer programmes, and labour market supports, which have helped to shape the inequality we observe today.' They argue that fiscal federalism has meant that many policy decisions have harmed sections of the population and regions. In particular, racially motivated actions against black communities in the South have had long-lingering consequences. For example, discrimination in education and restricted school funding have damaged the human capital of large swathes of the country. The example of the southern states may seem an extreme one to European eyes. However, the possibility of the unhealthy dominance of vested interest groups and of various forms of local corruption cannot be ignored. This is where striking an appropriate balance between national and local control becomes an important issue. Lagging regions are more often than not in a self-reinforcing, self-sustaining equilibrium and, because of this, specific changes designed to improve performance can be ineffective, since other elements of the eco-system which remain unchanged drag the local economy back to the undesirable equilibrium. Collier and Tuckett (2021) remind us of this in their arguments that Conviction Narratives are essential for buoyant local investment and the construction of a buoyant local ecosystem. Colin Mayer et al. (2021, this issue) examine one particular aspect of the local ecosystem, banking, which is vital for financing investment by small and medium-sized enterprises (SMEs).
Without vibrant local banking arrangements, they argue, devolution of economic policy would be limited in its effectiveness. They compare the British banking system with banking in Germany, Sweden, and the US. The British system, they contend, became over time highly centralized and transactional with 'weak relationships between banks and borrowers'. They contrast transactional banking with relationship banking and define decentralized banking as providing 'relationship-based banking services to its customers by operating in close proximity to them and via a business model that relies on cultivating and utilizing the strong relationship it establishes with its customers to gather and build soft information'. Banking centralization increased over a fairly long period of time in the UK but was exacerbated by the sector's response to the 2008 financial crisis. As a consequence, smaller firms in peripheral regions find it more difficult to get credit than those in London and the South-east. Mayer et al. describe the long historical evolution of the three-pillar German banking system and demonstrate that, for all its twists and turns, it serves the SME sector better than does British banking. So, they contend, does community banking in the US, though the authors recognize that its viability is under some threat. In yet another very different financial system, Sweden's Handelsbanken serves local business communities well. While acknowledging that effective regulation regimes would be needed, Mayer et al. conclude that strong local banking, based on tacit as well as codified relationships, is essential for significant improvement in the economic fortunes of lagging regions. --- VII.
Conclusions It is perhaps ironic that, at a time when devolution of strategy is all the rage for regions in Europe, it has been the local misuse of opportunities offered by fiscal federalism which arguably has hampered regional development in the US. This reminds us that decentralization is not a magic bullet. Nevertheless, at least in Europe, policy-making towards regions has made some progress; but there is still much to be done. There has been greater recognition of the need to move away from an emphasis on broadly defined administrative regions. Problems and their solutions are now seen to be far more spatially specific. The ability to address these problems has been massively enhanced by the emergence of robust disaggregated data. However, it is not always clear that national politicians make sensible decisions about what constitutes a locality for action. For example, some commentators argue that an obsession with city regions often overlooks the wider regional context. Indeed, if decentralization and devolution are to be the answers, then the UK 2070 Commission (2020) points to some of the difficulties in the British context. Barriers to progress arise from:
1. Conflicting National Policies arising from an over-centralised administrative system;
2. Strained Central-Local Relationships arising from the desire for central accountability of local decision-making;
3. A Flawed Strategy for Growth that assumes the benefits of growth in London and the Wider South East will spill over to the rest of the UK;
4. Low Levels of Investment, which result in under-resourced programmes of action, create a competitive project-based culture, and hold back ambition;
5. Constant Change in Policies and Delivery Agencies, which does not allow sufficient time for any programme of action to have real impact; and
6. Narrow Short-Term Measures of Success that do not take account of longer-term generational and well-being impacts.
Clearly these are issues not just for the UK but also for many other countries, as a recent OECD (2019) report highlights. The report makes the case for decentralization and devolution in regional policies, covering the transfer of powers and responsibilities from central to lower-level authorities in three dimensions: political, administrative, and fiscal. It shows a positive correlation at the country level between GDP per capita, public investment, and education outcomes on the one hand, and the extent of decentralization on the other. It also argues that decentralization can promote local democracy and citizen engagement, reduce corruption, stimulate efficient public service delivery, and improve regional development. Decentralization could therefore be a powerful instrument for reducing the 'geography of discontent'. However, echoing some of the points made by the UK 2070 Commission, the OECD also emphasizes that decentralization is not a guaranteed recipe for regional growth and development, because the positive impact is very much conditional on the design and implementation of the decentralization policies themselves. Motivated by such concerns, the OECD records the risks associated with decentralization. First, there is the risk of insufficient administrative, technical, or strategic capacities at the subnational level. Building these capacities takes time and requires long-term commitment from central and subnational government. Then there is the risk of a lack of sufficient resources: unfunded or underfunded mandates, as the OECD puts it. De Groot (2019) illustrates this for the Netherlands, where municipalities are faced with growing responsibilities as a result of the country's decentralization strategy, but with hardly any ability to increase financial resources due to small local tax bases.
Furthermore, governmental bodies at different levels may have overlapping responsibilities and powers, which can cause a lack of clarity, conflict, and a democratic deficit. Finally, decentralization may lead to the loss of economies of scale and to fragmented public policies. Policies that initially look successful can reveal perverse consequences once the full picture is taken into account. Dutch legislation on work and assistance in 2004 was intended to provide activation and employment services better tailored to the needs of both the local labour market and the unemployed by decentralizing services from central government to municipalities (Van Berkel, 2006). This implied more autonomy in the design and delivery of services to cope with local and regional circumstances and policies, but it also implied a transfer of financial responsibility for the social assistance scheme. Lower enrolment in social assistance was the consequence, as municipalities were incentivized to be more prudent in allocating it. This may have led to under-provision of municipal services for the unemployed, unequal treatment of individuals in similar circumstances across municipalities, and a rapidly increasing inflow of people into the centrally administered disability insurance scheme (Roelofs and Van Vuuren, 2017). Furthermore, if the emphasis of new policy is on a region pursuing its comparative advantage, this presumes that the region has a potential comparative advantage in something. For some localities it may be difficult to uncover exactly what this might be, and it will be difficult to break out of the low-income/low-productivity equilibrium. Logan et al. (2021) make the case for the self-reinforcing nature of regional problems in the US.
They write: 'When comparing regions in the United States, a set of steady-state initial conditions, in large part shaped by the nation's pattern of economic development, and its legacy of slavery and racial exclusion, continues to shape modern-day economic and policy outcomes, helping to reinforce observable regional inequality today.' Nor can it be taken for granted that there is sufficient local political and administrative competence. Even if there is, Collier and Tuckett (2021) argue that it may be insufficient if there is no Conviction Narrative. At the same time, even if strategy is sensible and public funding appears sufficient, it may fail because of weak local institutions, as Mayer et al. (2021) contend as far as the provision of private finance is concerned. The fundamental problem is that low-productivity/low-income regions are experiencing systems failure. In these circumstances, attempts to improve one aspect of local performance may flounder because the other unfavourable characteristics of the locality act like a magnet, dragging it back to the original equilibrium. Indeed, a benignly intended policy may have perverse effects. In the UK, for instance, the high-speed rail project is designed to cut travel times from London to the Midlands and North of the country. It is meant to stimulate these regional economies, but there is the possibility that it will simply enable more skilled workers to commute to London and further centralize economic activity. Despite increased activism in policy, regional disparities have generally widened in Europe in the last 40 years. The balance of academic research suggests that this is mainly a consequence of the impact of globalization and of changes in the sectoral composition of economies. We can be confident that this widening would have been greater but for the intervention of regional policies.
This gives some reason for cautious optimism in the face of deep-seated, but not necessarily intractable, problems. In many countries Covid-19 has had a disproportionately harmful impact on poorer areas, and this may well increase the focus of governments on the underlying problems of these areas. Furthermore, it is encouraging that there is some evidence for the benefits of place-based policies, particularly those built on infrastructure expenditure as well as on higher education and university support (Neumark and Simpson, 2015). Nevertheless, we still need to learn more about the long-term, redistributive, and heterogeneous effects of these types of intervention. We also need to know much more about the strengths and weaknesses of devolved regional strategies more generally: about what works and what does not.
Introduction In the course of engaging with women's stories and affects while exploring memories, dreams, and associations on the subject of delayed motherhood, two analytical ideas, Jung's mythopoetic tension between symbolism and enactments with the feminine and Freud's [1] "Repudiation of the Feminine", attracted my attention to the realm of womanhood as a social problem, in particular the way in which themes of psychic bisexuality produced a feminine that is "thereby displaced from its forced equivalence to the object and from its inevitable localization in the woman" ( [2], p. 87). What kept coming up, as both privation and deprivation, across affective behavior and narrative among the eight participants was the existence of a male sibling who had more privilege, encouragement and engagement with the mother (and father, if he was around) than the daughter. I realized these participants were demonstrating the very bones of this research: distinguishing the making of a complex across personal experience and cultural and collective contexts. The affects before me at the micro level were opening into a macro view of how feminism emerged when the feminine could no longer quietly accept being thwarted in favor of the masculine. Like the Sumerian goddess Inanna, participants had taken their procreative desire underground until the clamor of mid-life beckoned them to reclaim the right to enjoy an ordinary life. My aim in this paper is to examine the plural definitions and uses of the feminine in Analytical Psychology and Psychoanalysis, in particular against Western culture at large, in order to define a Feminist ethos for this research. Though Jungian by qualification and perspective, I must include my own reflexivity on theoretical problems such as the anima and animus in Analytical Psychology, so that I do not unconsciously assimilate the subjectivity of participants to Jungian or Freudian grand narratives on what it means for a woman to desire and experience motherhood in the fourth decade.
More than this, not only do the first analytical fathers appear to have offered us a useful theory of patriarchy [3], along with other documented effects of 'the mind doctors' on women [4,5]; their androcentric frames of feminine reference also become an important epistemology for delayed motherhood. Female diseases, such as depression, promiscuity, paranoia, eating disorders, self-mutilation, panic attacks, and suicide attempts, whether reported/treated or not, are all female role rituals ( [5], p. 110), to which I'd like to add one more: the expectation of fertility after forty years of age. --- Discovery Process What is determined to be masculine and feminine behavior, expression, and choice continues in post-Jungian psychotherapies as a question regarding development, even when these are attached to archetypes [6,7]. The biological difference in women, with an implied imperative to reproduce, opens the depth question of a woman's unconscious use of her body as a means of separation, individuation and psychic growth ( [8], p. 83). Delayed motherhood in a bio-technological age may be yet another form of power and control [9][10][11]. To consider late motherhood in a technological age begins with a review of Jung's [12] early working through of his ideas on the contra-sexual other of anima and animus, drawing from his real-world experience of what a lack of procreativity means for a woman. "...then you get into a special kind of hell...for a woman there is no longer any way out; if she cannot <does not> have children, escape into pregnancy, she falls into hellfire...she discovers that she is not only a woman, she is a man too" ( [12], p. 794). Before the myths and terms of feminine and femininity are unpacked, there is something very important to register about the finding of a favored male sibling in this research.
Across all participants' stories there were deep wounds to do with early gender learning: the superior value placed on the masculine in a brother, whether he was younger or older, while the good things of the feminine in the daughter were difficult for parental caretakers to see. In effect these women had been groomed to feel inferior to the masculine, by being less considered, desired and entitled, resulting in a view that they might be less capable in life than a male. That most of the eight participants enjoyed engagement in the world long past many of their peers, owing to onset of pregnancy around the fourth decade, goes some way to suggesting how their choice of delaying motherhood resonates, at minimum, with having to prove something to themselves and others regarding the very definition of what embodying the feminine is about; normative, predictive generative identity via motherhood was not going to be enough. "The difference in a mother's reaction to the birth of a son or daughter shows that the old factor of lack of a penis has even now not lost its strength. A mother is only brought unlimited satisfaction by her relation to a son; this is altogether the most perfect, the most free from ambivalence of all human relationships. A mother can transfer to her son the ambition which she has been obliged to suppress in herself, and she can expect from him the satisfaction of all that has been left over in her of her masculinity complex." ( [1], pp. 112-113).
The equation of the feminine principle with female inferiority by the founders of both Analytical Psychology and Psychoanalysis appears along a continuum: from Freud's perspective of causation, for example his penis envy/castration theory as grounds for hysteria, based on a phallo-centricity [2], to Jung's invisible realm of the collective unconscious, in which mythopoetics rationalized Logos as the sole propriety of men and Eros as that of women, universal structuring elements of psyche conceptualized as animus and anima, respectively. Jungian Analyst Polly Young-Eisendrath [13] frames these ideas as androcentric in their ignorance of the woman's experience, her social context, and the nature of her female gender identity in the context of traditional sex roles. Without conscious feminine experience, "an anxious middle-aged woman, identified with the idea that she is inferior intellectually, may be called 'animus-ridden' by a Jungian psychotherapist because she speaks in an opinionated and insistent manner about a general or vague idea" ( [13], p. 23). --- Feminine Riddles into Myths Image, emotion, enactments, projection, rituals and fantasies, emerging as beliefs in early Psychoanalytical theories, reify mental phenomena, blurring the lines between illusion and reality. Jung and Freud appear as early social scientists looking to explain the split between matter and mind. Once Freud's descendants opened the gate to allow for the impact of culture on the phenomena observed by the analytical founding fathers, the groundwork was laid for Feminist-inspired Psychoanalysis to evolve into psychosocial research, including embodied subjectivity.
"For example, for Lacan, the Oedipus complex becomes not simply the exclusion of the child from the mother-infant dyad and parental couple which is thought by Freudians to be crucial for developing personality, but more a depiction of the beginning of the acculturated individual-that is, the entry into, and the reproduction of, culture itself repeated in the development of each human being" ( [14], p. 294). Culture reproducing itself also extends to mothering [15]. What follows is the effect these analytical ideas can have on society. "...some psychoanalytic concepts have taken on the quality of myths. I define myths as symbolic representations of cultural ideologies, reflecting unconscious dynamics. As with individuals, sometimes stale and outgrown myths persist, sustained by inherent societal forces even beyond their point of usefulness, resistant to change and often obstructing growth and creativity. Most psychoanalytic concepts originate as explanatory hypotheses. However, once formulated and disseminated, they become rooted both in theory and in society, acquiring an explanatory force, generating self-fulfilling prophecies and remaining unchanged as long as the myth serves a purpose...even when there have been changes in the phenomena upon which the initial observations were made, the original hypothesis, reified and elevated to the proportion of a myth, remains immutable, sustained for the social, economic, political or psychological purpose it now serves." ( [16], p. 8). Though Freud is credited with asking the question, "What do women want?", he never found an answer to the "riddle of femininity" [16], and neither did Jung, except through personal foibles [17]. The favoring of Jungian Psychology I had intended for this research was found to be insufficient for reflecting on an emerging cultural problem with the feminine.
There was danger of falling into Jung's earliest reifications of gender on archetypal and functional levels, underpinned by his interest in alchemical processes of the solar king meeting the lunar queen ( [18], pp. 282-284). Jung's ( [19], para. 4-46) identification of two kinds of thinking along gender lines of masculine and feminine, classified as "direct" and "indirect" (feeling) thinking, is a case in point where early psychological typology function is confused with gender function. Indirect thinking was deemed to be intuitive, irrational, pictorial, diffuse and symbolic. Jung assumed it was the foundation of feminine psychology ( [20], p. 54) under the principle heading of Eros, to include psychic relatedness, love and soul, which also put women under pressure to perform as such in the activities of wife, consort and mother. Direct thinking, logical, goal-oriented, rational, differentiated, and spoken skills, gathered together under the principle of Logos, became the expectation of the masculine principle and ergo of men. Jung assigned words like judgment, discrimination and insight, as well as spirit, to 'maleness' ( [19], para. 87). My sense of Jung is that he read into the reproduction of gender performance and culture as if his identification of its contents were fact, confusing fears and fantasies with real women [13]. Not all post-Jungians read gender the way he did, but of those women clinicians presenting themselves as Jungian Feminists, such as Cowan [17], Douglas [6], Kulkarni [21], and Anthony El Saffar [22], few other than Young-Eisendrath [23] are known and published within the larger context of Psychoanalytically inspired feminism; I believe this is because she draws from social constructivism to assert that the 'feminine archetype' is a product of patriarchy [24]. Yet Kulkarni [21] was among the first to lay down a paradigm for research that "marries Jung's respect for psyche with feminism's insistence on context" ( [21], p.
218), an ethos this research on late motherhood endeavors to achieve. In addition, two academics, Demaris Wehr [25] and Susan Rowland [7,26,27], have made remarkable, breakthrough contributions. Of particular note is Rowland's ( [7], p. 135) view of Jung's connection to feminism through his concept of the subtle body, a union of mind and body in his alchemical writings, which includes "the abject and excluded body to reveal it as the constituting boundary of heterosexuality that must be renegotiated" ( [7], p. 144). In a parallel but different language, de Beauvoir's "One is not born, but becomes, a woman" ( [28], p. 301) was a favoring of lived experience which inspired emerging feminism to make the distinction between sex and gender, an idea meant to "secure internalization of contrasting patterns of behavior... thus to displace the role of biology in determining 'masculinity' and 'femininity'" ( [29], p. 39). Psychoanalytical theorists have gone further than Freud's ideas of the feminine, contributing to and developing Feminist theory aligned with clinical and social psychology theorists. Raphael-Leff's [30] inquiry into femininity, the unconscious, gender and generative identity in a bio-techno age argues that a basis of psychoanalytic theory in place throughout Freud's life was the limitation of femininity and masculinity on original bisexuality. Freud's concept of bisexual fluidity was ultimately eroded by an occluding "reification of body-based dichotomies" ( [30], p. 500), leading to the multilayered views of fantasies/relational configurations/identifications proffered by Harris [31], Dimen [32], Benjamin [33], and Sweetnam [34], which allow Raphael-Leff [30] to frame Freud's notion of bisexuality as a dichotomy of conscious unity twinned with unconscious diversity, attributable to Person [35] and based on Goldner's [36] notion of culture as authorizing agent. Thus Raphael-Leff's ( [30], p.
501) synthesis of 'sex' as an accommodation between chromosomes present at birth and gender as a self-categorizing psychosocial construct produces new categories for 'gender role' and 'sexual orientation': "'Embodiment' (femaleness/maleness), 'Gender Representation' (femininity/masculinity) and 'Desire' (sexuality)." Can Jungian Feminist literature ever be on par with the impact Psychoanalysis has had on mainstream feminism? Jung's dichotomous idealization of the feminine as a man's anima, while denigrating the masculine in a woman (animus) as a character flaw, at first blush creates a problem for the researcher who wishes to use Analytical Psychology as the theoretical basis for emergent feminine Feminist psychosocial dilemmas, until we shortly come to discussing his alchemical works. Jung's mythopoetical views, theories, imaginations, foibles and proclivities regarding the feminine, along with Freud's fluid notions of bisexuality, are both offered as evidence; acceptance of the feminine as different but equal remains a long-standing difficulty for both genders, inspiring perhaps the intra-psychic and inter-subjective cultural phenomenon of a pregnant pause [37] on the way to late motherhood, to re-vision the feminine out of patriarchal paradigms. --- The Feminine and Feminism By emphasizing the feminine within feminism, I am including ways of incorporating agency and nurturing through the holistic union of Jung's two kinds of thinking [19], in addition to Feminist concerns of equality with men, such that procreative identity does not become equated with essentialist gender norms or with performance in male terms. Holding on to the feminine within feminism allows for sexual difference and keeps in mind the ways the feminine has long been suppressed in culture [22], her wound the subject of myths and fairy tales ( [38], pp. 193-194).
Without this view it would be all too easy to see women who fell into delayed motherhood as 'father's daughters' who abandoned the archetypal feminine to pursue career rather than respect the body Marion Woodman [39] likens to the Mother in us. What happens to women who, like Inanna, must go underground with their procreativity is far more complicated than being a 'father's daughter'. Late motherhood does not appear as a sin against the feminine by the woman who has delayed, but as a 'repudiation of the feminine' preceding adult choices, necessitating a late search for the mother within. Hence feminism, and the feminine as Great Mother, are a vital link to re-balancing humankind. While aspects of Analytical Psychology are relevant to this study, Feminist-inspired Psychoanalytic perspectives help to make two halves of analytic history into a whole view of psyche's discontent with patriarchal views of the feminine. Analytical Psychology has a proud history of finding truth in the cosmos through archetype and image "rooted in the unconscious as transcendent of knowledge" ( [7], p. 143), while Swartz reminds us that "Feminism has a proud history of interrogating the truth claims of psychiatric science, and of foregrounding the ways in which the machinery of psychiatric diagnosis and treatment has been used to obscure or amplify the psychological effects of patriarchies" ( [9], p. 41), for which she credits Chesler [5], Smith [40] and Ussher [41]. In particular, in reviewing psychiatric diagnosis from a Feminist perspective, Swartz ( [9], p. 41) gives credit to Jessica Benjamin's [33] work concerning the long history of patriarchal domination, where Feminists have challenged Freudian psychoanalytic diagnostic premises and opened up new ideas on the formation of female identity, such that experience as mother, sister, wife, or daughter can no longer be automatically synonymous with a lack of agency.
My purpose is not a rapprochement between Jungian and Freudian theorists and clinicians, but to observe that the early views of Jung and Freud on the feminine provide grounded evidence that their theories continue to reflect a problem for, and with, women. Given the nature of this study, to explore delayed motherhood and its connection to individual and collective complexes, and the long history of women being diagnosed as "prone to depression" ( [9], p. 23), it is important to clearly differentiate the identification of a complex from a diagnosis. In a diagnosis the root of the disorder is placed within an individual, while social, cultural, political and collective contexts remain in the background or in ignorance [9]. Delayed motherhood in the 21st century begins to appear more as an emerging 'epidemic' with plural longitudinal gender roots between the sexes [37] than as a disorder (though it may have been viewed so by Freud and Jung at one time). Identifying a complex through the study of affective behaviors provides a way to see into emotional rupture as phenomena which do not originate in the individual alone, but arise through a network of associations involved in memories with others. These 'others' do not only contribute to personal complexes, as they may be unknown to the individual, because they occupy a place in the social through the cultural unconscious [42,43]. When these impersonal contexts are included in what happens when a woman is unconscious toward her body, we must consider the feminine in the context of patriarchy, and by extension Feminist ideas. It must also be noted that patriarchy does not always have a penis, nor do Feminists always come with a vagina, and shortly I will elaborate on this further.
--- Defining Problems Both Analytical Psychology and Psychoanalysis have framed woman as subject, object, abject, Mother, other, caregiver, mirror, animus-ridden, anima woman, receptive, castrated, empathic, relationally oriented, envious of a penis, a uroboros for renewal, and imaged as the contra-sexual unconscious. When the female is not referred to as part object and part symbol, we find a purpose for her existence as "another subject whose independent center must be outside her child if she is to grant him the recognition he (she) exists" ( [44], p. 24). The use of and relationship to the 'feminine' in all its variations, including 'femininity', emerged as the 'last straw' turning Freud and Jung from sparring partners on 'universal principles' into 'warring opposites'. Both men were caught in the prejudices of patriarchal culture to do with the rights, roles and conduct of women in relation to men, pleasure and becoming a mother, until the mother-son incest taboo provided grounds for their ultimate parting of ways ( [22], pp. 46-47). The difference between sparring over the existence of an underlying universal principle and the mother-son incest taboo may seem intellectually far apart until we discover how each of these men interpreted their necessity. For Jung mother-son incest functioned as a mythopoetic in intra-psychic life. It was seen as an enactment within his counter-transference dynamics with patients such as Spielrein, while his wife Emma, consort Toni Wolff, and a collection of female colleagues known as the Jungfrauen all allowed him to be convinced "that the father's law against incest is regularly broken on the symbolic level, and that regression to the womb is also part of the hero's journey to rebirth" [22].
Whereas in Freud's [1] thinking a girl's cure for narcissism is founded not only on the discovery that she does not have a penis, but on the move from mother to father to husband, where her triumph and cure is the production of a son: she can "transfer to her son all the ambitions she has been obliged to suppress in herself..." ( [1], p. 133). Freud's thinking is a natural wellspring for feminism, while Jung's psychology continues to entice women into believing they could be a man's muse and inspiratrice, just as Echo helped Narcissus to continue looking at his image, believing it to speak to him in his favor [45]. One of the first Jungian Analysts to question the masculine psychologies of Jung and Freud, James Hillman ( [46], pp. 291-292), finds in Freud ( [47], p. 219) a definition of the conditions under which an analysis may end, based upon the achievement of 'feminine inferiority', finding it to be 'the root of repression and neurosis... bringing about both our psychic disorders and method of analysis aimed at these disorders' [46]. "...one reaches the 'bedrock', the place where analysis could be said to end, when the 'repudiation of femininity' both in a man and a woman has been successfully met. In a woman the repudiation of femininity is manifested in her intractable penis envy; in a man his repudiation does not allow him to submit and be passive to other men" ( [47], p. 219). Thus for Hillman [46], Freud's [47] "repudiation of femininity" is biologically founded and part of the natural psychical world, in contrast with his own view that "the end of analysis coincides with the acceptance of femininity" ( [46], p. 292). Here Hillman takes on misogyny by undermining Freud's basis as "biologically given and thus 'bedrock' to the psychical field" ( [46], p. 292), finding instead a psychological basis of 'Apollonism' as the 'bedrock' of the "first-Adam-then-Eve" perspective.
This Apollonic archetype seeks physical form through "an objective and detached selfhood, a heroic course of... quest and search... above all the ego-Self as its carrier, and analysis as its instrument" ( [46], p. 293). With Freud we must put aside the feeling and relational aspect of the feminine; biology rules. Re-creation of the myth 'first-Adam-then-Eve' appeared in the earliest memories of research participants in the triangulation with parents and male siblings. As young women, they purposely chose to use their minds and make non-uterine choices tending to put them more in the world of men, such that the structure of their lives begins to suggest an extended Apollonic phase. From just this small glimpse into Freud's thinking on the feminine through one of his last writings in Vienna, it may be possible to see the necessity of Feminist thought to salvage Psychoanalysis from Freud's complaint that "psychology cannot solve the riddle of femininity" ( [1], p. 149). For Jung the analytic process reaches its ultimate goal in conscious bisexuality through the alchemical image of the coniunctio/the conjunction [46,48,49]. Rowland [7] redeems Jung for Feminists in analyzing his work as a whole, and in particular on alchemy, where there is "recognition of the limitations of heterosexual opposition... what is cast out, what is structured as an abject body, must be reconfigured within" ( [7], p. 145). This is the maddening aspect of Jung: saddling Analytical Psychology with his biases, appropriating the feminine as a hidden virtue of men with the anima concept, only to find him projecting onto women the worst attributes of the masculine with the concept of animus, opposite and not equal yet destined for bilateral unity. What is required here is a slow, careful reading of Jung as a trickster writer [27], read for multiplicity as an evolving narrative rather than as authority [26].
"Jung's writings are characterized by an entwined dual purpose in which an acknowledgement of the roots of his ideas in his individual experience (personal myths) work with, and against, a drive to universalize and construct a comprehensive psychological scheme" ( [26], p. 25). Nowhere is this more evident than in his move from the oppositional neurotic on gender to alchemy's subtle body, and from external reality to social discourses ( [7], p. 145). Samuels [50] questioned whether Jung's concept of anima and animus/femininity and masculinity, entwined in the syzygy to endure the alchemical processes of differentiation in an effort to re-unite as an androgynous pair of opposites, was a bona fide work on gender. "Jung often spoke as if he were unaware of the distinction between gender and sex, which is, by contrast, biologically determined" ( [20], p. 60). The feminine as an aspect of men and the masculine as an aspect of women became tangled up in Jung's reflections between biological bodies, the embodiment of archetype and the effects of culture and the collective unconscious. This is no different to what happens to anyone when the principles of 'masculine' and 'feminine' are concretized as first Adam then Eve. A false adaptation to compensate for psychic wounds to sexual identity, aroused by conformity to cultural stereotypes, can sublimate the feminine such that men find they want babies and women are afraid to have them [49]. When the feminine in either gender is denigrated, things go wrong, a link to the alchemical subtle body becoming physically and psychically blackened, precipitating a sulfuric decay that must rise so that the problem, as it is felt, can dissolve [49].
In Feminist inspired Psychoanalytical literature, longitudinal consideration has been given to self-images of feminine and masculine internalized through separation-individuation rituals within the family as part of an evolving acquisition of gender-role identity, commencing with "differential permutations of mother/father-boy/girl interactions, with the 'feminine' situated in the historical fact primary caregivers were invariably women" ( [30], p. 503). Raphael-Leff [30] offers the observation of the mother frustrating dependency, thus becoming the confusing feared and desired catalyst for counter denigration of all that is designated female [51]. In Raphael-Leff's view it is the mother who carries reproduction of the patriarchal social order of inferior social position, through unconscious same-sex identification with her daughter [30]. This identification can be seen later in threats to reproductive body integrity [52], preferred female relatedness [53] and an ego with porous boundaries like a mother's [54], compelling a daughter to give in to/resign herself to the patriarchal social order [3,55]. --- Confounding Gender It is essential to return now to amplification of Jung's alchemical opus, as a psychic process which involved extracting the gold and liquefying the dung within primal matter, including elevating the 'opposites' to the regal status of Sol King (conscious) and Luna Queen (unconscious). Appearing in every culture, these motifs were intuitively drawn over millennia to signify psychic renewal, forecasting how dominant factors in the psyche undergo processes of decomposition and clarification by fire, out of which emerges the 'new king' or new consciousness [49]. This alchemical process may also serve as a paradigm for developmental processes within the pregnant pause of midlife [37].
The emergent new consciousness of the desire for a baby becomes the new king after years of licking the wounds inflicted upon the feminine within procreative possibility, due to modern cultural conditioning to favor the masculine over the feminine for economic performance. Thus women's lives take on the appearance of a two-part structure: first Adam then Eve. This is perhaps the basis of Jung's division between the Logos of a monotheistic God whose "essential separation from nature sponsors rationality as dependent upon a division from matter and body" ( [56], paras. 29, 41) and the need of Eros to be connected and related as the Mother Earth [28]. "Jung's early disposition for gendering opposites, with varying degrees of denigration and idealization, though evidence of extraordinary early work on identifying contradictions in nature seeking reconciliation" [49], similarly to Freud, appears to be reinforced by the mythopoetics of misogyny and female inferiority in the collective unconscious ( [46], pp. 215-298). "Jung's entire project, I am suggesting, is, in mythical terms an attempt to re-balance modernity that has been brought to crisis by an over-valuing of Logos at the expense of Eros-relating...by essentializing the creation myths, he is able to stabilize the masculine signifying he wants to retain it, while insisting upon its re-formation to include the feminine, which remains marginal" ( [27], pp. 290-291). --- Queer and the Feminine Hero Queer theory emerges in personal identification and political organization as non-normative performance in a range of experiences of being and doing, inspiration for intra-psychic unions where achieving and nurturing, penetrating and receiving, are un-assigned to gendered bodies but co-exist in any body [49]. Citing Queer theorists Elizabeth Freeman and Judith Halberstam, Emanuela Bianchi [57] presents a movement "From Feminine Time to Queer/Feminist Time" ( [57], p.
41) to notice how temporality in Queer strays from the normative, "unaccountable and dilated time" ( [57], p. 41), arguing that pregnancy and mothering both participate in temporal counter-normativity. When viewed as a formulation of 'women's time' with "women's characteristic capacity to be interrupted, by the demands of family, by pregnancy... we take into account the necessity for protecting against hostile and unwanted interruptions as well as promoting a liberatory trans-valuation of interrupted time... to strange new, queer formations of kinship, gender, and social life" ( [57], p. 43). When gender performance enacts a great leap of faith outside of predictive maternal identity as biological destiny, late motherhood, as I have found in participants' case studies, is the struggle to achieve and nurture, penetrate and receive: a modern developmental task for the feminine hero. Theoretically, "the androgyne, a union of masculine and feminine which cannot be defined as either, resisting normative gender identity, is the essence of Queer. Understood this way, Queer is in effect the conclusion of Jung's alchemical opus, the Philosopher's Stone" [49]. The assumption of heterosexuality and gender certainty is a problematic of the classical Jungian canon. Despite my and other Jungian Analysts' criticisms of 'gender certain' contra-sexual opposites, the archetypes of anima and animus continue to appear in dreams to reveal shadow aspects, those parts of the self that are unknown, unwanted and un-integrated, as principles of both agentic and allowing energies seeking conscious integration in men and women. To disentangle gender performance from procreative identity and sexual desire was a pre-requisite for analyzing the embodied feminine as she coursed her way through intra-psychic association networks and inter-subjective affects aroused by the methodologies used in this study. Recognizing "the effect of the patriarchal animus on generations of women" ( [6], p.
xviii), Jungian Analyst Claire Douglas examined the outmoded aspects of Jung's theories, including the ephemeral, contaminated, and biased, to find what would free women, and the feminine, from patriarchal precepts. She proposes a re-examination of the words and ideas within 'Jung's map' rather than conforming to concretized descriptions as normative. "The feminine ego needs to learn how to connect without being engulfed, and how to differentiate without severing or splitting off" ( [6], p. 299). Where Douglas' thinking can be most readily applied is to the idea that the masculine as animus must reside solely in the internal world of the woman, and for men the feminine anima must stay safely locked inside. While I do not question the psychic reality of these figures, identification of what is anima and animus has an unfortunate link to opposite-sex gender in a straitjacket of inferiority. Anima and animus need each other in dialogue, taking turns as sources of authority. Gray [58] set out to examine, in philosophical terms, Jung's individuation idea next to the subject of the feminine by drawing from Irigaray's work. "Individuation, I claim, is the telos of Luce Irigaray's ideal of a feminine-feminine symbolic/imaginary or system of meanings and significances that arises out of sex/gendered embodiment and collective responses to it...lest this reading of Jung be interpreted as reinscribing masculine notions of the feminine, I take a new look at the idea of essentialism, which has plagued Jung's own theoretical construction of the feminine and 'woman'...and also Irigaray's approach to the woman question" ( [58], p. ix). Jung perhaps explains his gender biases best in describing his view of opposites in male and female terms followed by problems when the opposites are not in their 'right order'. "...woman's conscious is characterized more by the connective quality of Eros than by the discrimination and cognition associated by Logos. In men, Eros...
is usually less developed than Logos. In women on the other hand, Eros is an expression of their true nature, while their Logos is often a regrettable accident" ( [56], para. 29). "...instances to the contrary leap to the eye: men who care nothing for discrimination, judgment and insight, and women who display an almost excessively masculine proficiency in this respect... Wherever this exists we find a forcible intrusion of the unconscious, a corresponding exclusion of the consciousness specific to either sex, a predominance of the shadow and of contra-sexuality" ( [59], para. 225). In her chapter on the 'Feminine Hero' in The Presence of the Feminine in Film, Jane Alexander Stewart [60] analyzes the role of Clarice Starling (played by Jodie Foster) in The Silence of the Lambs [61] as a "new heroic journey of the feminine" ( [60], p. 95). Clarice's story in the film begins with her lifting herself out of a chasm to stand at the top of the hill prepared to go forward. Stewart makes meaning of the scene in that "Clarice begins her story where classic stories of the heroine's journey end; at the return to ordinary life after the descent... from a metaphorical feminine center...a heroine making a return from the deep process of self examination and affirmation" ( [60], p. 96).
Though the context of her meaning making resides in the modern American landscape where unseen killers await, her real message is based not so much on geography as on an endemic fear of psychological and physical denigration of the feminine. "Not only do they fear men's attacks on their bodies but also they face denigrating social systems that reinforce a second-class status and devalue what it means to live through a feminine point of view" ( [60], p. 96). These dangers, horrors and defilements have been described and examined by both Kristeva [62] and Douglas [63] within a frame of prohibitions leading to abjection on a platform of incomprehensible fear for the dangers facing the feminine if it is not pure. With Clarice Starling we get a character who succeeds because she manages to claim and hold fast to her feelings, what Alexander Stewart refers to as "a set of feminine ethics... [to]... create hope for the safety of a feminine presence in our society" ( [60], p. 96). Clarice defies conventional wisdom on what is safe for a woman in a man's world by not behaving like a man who fears for his survival. Instead Clarice chooses to trust what the feminine has to offer, "her inner forces (for example trusting in intuition, in revealing herself and interacting on the level of intimacy)" ( [60], p. 99), traits that invoke fear for her and of her, a greater threat to her survival than Hannibal Lecter himself, as she "searches for meaning from the way his actions make her feel" ( [60], p. 104). Citing Barbara Walker's [64] The Woman's Encyclopedia of Myth and Secrets, Alexander Stewart ( [60], p. 103) offers an image, not only of the filmic style of Demme's Lambs to evince the underground, underwater, under-position of Starling's journey, but an insight into the journey toward motherhood in the fourth decade of life. "Students in mythology find that when the feminine principle is subjected to sustained attack, it often quietly submerges.
Under the water (where organic life began) it swims through the subconscious of the dominant male society, occasionally bobbing to the surface to offer a glimpse of the rejected harmony" ( [64], p. 1066). --- Discussion In my observations, the feminine hero may be different from the heroine. The heroine grows up believing it is safe to be female because her nurturing early environment made it so. Throughout her development she does not cower at real life challenges, even those threatening her with domination and sublimation rituals [44]. The feminine hero, by contrast, has had to learn how to have a relationship to her body, the root of having what Jung called a Self ( [18], p. 282). But as the feminine body can be interrupted through "punctuations" of menstruation, penetrative intercourse, becoming pregnant and breast feeding, rhythms resonating with vulnerability ( [57], pp. 39-40), it can take time to make or find a Self if it has not been installed in early childhood through conducive social interactions [65] altering the lived experience of temporality. An unconscious relationship to her body's difference from the masculine counterpart, including her vagina, womb, breasts and ovaries, may indicate her feelings remain an unknown aspect of self, making her unavailable for relationship or procreative identity until how she appears to others, and how she fears she will be used/not used, no longer betrays her loss of integrity through some kind of violation [66], even one of abjection, but emerges in synthesis toward the primary task of finding integrity within herself. The dichotomous struggle to achieve equality between the sexes in political, social and economic fields, only to abandon the struggle in the sexual realm, confuses the need to uphold sexual difference ( [67], p. 139). In this dichotomous state lie the ingredients for an individuation process: psychic-physical tension with the potential for a union of opposites.
"Creativity springs from the resolution and the reconciliation of opposing psychic forces within an individual" ( [68], p. 83). This creativity is at the heart of the conclusion of the fairy tale Young-Eisendrath ( [13], p. 18) draws from in considering the story of Sir Gawain and the Lady Ragnell ( [13], p. 171 n. 7), regarding what women really want: sovereignty over their own life. Here then lies the ethical methodological junction, where Feminist inspired Psychoanalysis and Feminist leaning Analytical Psychology join up to write an ethos for the use of intra-psychic and inter-subjectivity in research with female participants. The tension we are considering is when the body matters and when it does not. "A complex... results from the blend of an archetypal core... and human experience particularly in the early years of life" ( [69], p. 6). It is both these complex processes of psychic development that this research seeks to bring together: is delayed motherhood a revolt against domination of the biological imperative to reproduce in uncertain relationship to patriarchy? This is an ethical question to do with non-normative sexual behavior, the place where Queer theory began its linguistic life before moving into gay and lesbian caucuses and Feminist politics, upward to academic institutions, in parallel to rising awareness of AIDS [70], before turning on gender itself as an encasement of an "oppressive system of classification-both heterosexuality and homosexuality...as artificial categories" ( [71], p. 29). Queer is evasive. "Just what 'queer' signifies or includes or refers to is by no means easy to say" ( [72], p. 20). "Queer is a relation of resistance to whatever constitutes the normal" ( [70], p. 99), the "open mesh of... excesses of meaning where the constituent elements of anyone's gender, anyone's sexuality aren't made (or can't be made) to signify monolithically" ( [73], p. 8).
Queer as a theoretical and non-predictive-performative condition may be emerging as a new signifier of normative behavior. In this way Queer undermines notions of feminine and masculine, and eclipses both the conflict and the union of opposites [49], something Jagose [70] describes as "holding open a space whose potential can never be known in the present" ( [70], p. 107). Yet, "the conceptual slippage" in Butler's theorizing of subject formation has resulted in "a lack of clarity... [regarding] the capacity for action held by subjects relative to the power that enables their existence in the first place" ( [74], p. 28). The use of Queer Theory and consideration of Judith Butler's later elucidation of a "'third way' between voluntarism and determinism" ( [75], p. 291) is as much about reconceiving agency [76,77] as it is about holding an ethical position against pathologizing women who discover the need for motherhood and partnership later in life. Thus late motherhood is turning upside down Jung's view of individuation in mid-life for women as a time of integrating the repressed masculine, a shift from an identity centered upon dependence and the nurturing of others to one of agentic "embrace of one's own development" ( [13], p. 87). The task of procreative identity at mid-life appears as a new definition of a union of opposites, following the paradigm of first Adam then Eve. Unwittingly, bio-technology has challenged, even re-arranged, Jung's life stages for women, though not the essence of his observation of the mid-life 'calling' to integrate what has been overlooked in the first half of life. --- Concluding Thoughts I did not enter into the research topic of a midlife pregnant pause [37] leading to late motherhood with Feminist intentions. Rather I had a Jungian perspective that cultural and collective complexes with hooks into personal complexes were getting in the way of the developmental aspect of achieving motherhood due to difficulties between the sexes.
Delayed motherhood did not emerge as a Feminist issue until particular themes in regard to men in the form of absent or wayward fathers, overtly privileged brothers and betraying mothers began to surface. I came to see women as having to struggle with 'indigenous' cultural assumptions about their bodies being ordained for motherhood, extending a long period of adolescence while striving for accomplishment in the masculine world. Coming to motherhood became a reparative process as they came closer in age to embodying the stage of life known as 'an older woman' (Crone/Witch archetype). In looking more closely at Psychoanalytically informed Feminist literature mainly written by women, I also discovered in Freud and Jung similar problems with the feminine, at different points in their professional development. These 'problems' mirrored the problems participants were implying with real male others regarding their own relationship with the feminine and integration of the masculine. In Feminist inspired analytic literature I found the body of the woman who had lost time during her most fertile years as context for the messages from the unconscious. In short, I came to see Jung and Freud as reproducing what has been long standing in civilization, a feminine split between denigration and idealization, and have used their words as evidence of patriarchal privilege, the screen through which each man analyzed female patients. It is my belief their work was the beginning of a longer work on the reproduction of misogynistic culture, with late motherhood appearing as a protection against androcentric interruption. Therefore, an ethical position toward mutable and evolving expression and repression of the feminine necessitates in-depth understanding of these ingredients as alchemical products of intra-psychic and inter-subjective primal material, rather than constructing pathologies for non-participation in essentialist notions of feminine performance.
Unconscious processes of the embodied feminine achieving late motherhood in mid-life emerged as a Feminist issue of power, control, defense, separation and repair. From this, a new union of epistemology and ethos has become impossible to ignore, in part because what is emerging in late motherhood is a different kind of mothering, on which rests the future of a different relationship to patriarchy. --- Conflicts of Interest The author declares no conflict of interest.
--- Context Mycobacterium tuberculosis is one of the most ancient bacteria to infect humans, and those without treatment of active infection have a 5-year survival of less than 50%. Tuberculosis (TB) remains one of the primary causes of mortality in developing countries. TB diagnosis and treatment are unfamiliar to most urban Australian health practitioners, this disease having been virtually eradicated from the Australian medical lexicon by the 1980s. Of contemporary cases in Australia, 90% occur in those born overseas (migrants). The remaining 10% of cases occur predominantly in Indigenous populations, at a rate six times that of non-Indigenous Australians. Indigenous people living in Cape York and on the Torres Strait Islands have historically had higher rates of TB than people in other parts of Australia. Proximity to Papua New Guinea, which has among the highest TB prevalence in the world, and the social and biological factors increasing the risk of transmission, continue to put these communities at risk of this communicable disease. Public health guidelines recommend anti-TB treatment commence within 3 days of diagnosis of active disease to mitigate the risk of transmission, with medication delivered as directly observed therapy in daily or three times weekly dosing. The treatment cure rate for Indigenous people in Australia in recent years has been approximately 10% lower than for non-Indigenous groups. One primary contributor to treatment failure is non-adherence to medication, increasing the potential for the evolution of multidrug resistant TB, an increasingly global concern. A study published in 2015 reported that 13% of cultured TB in Australia was resistant to first-line anti-TB agents.
Challenges to medication adherence are multifactorial and include structural factors such as organisation of treatment and care for patients; and individual/patient factors such as patient interpretations of illness and wellness, knowledge, attitudes and beliefs about treatment, personal characteristics and adherence behaviour, influence of side effects on treatment adherence, and family, community and household influences. There are considerable challenges for health professionals who treat patients with TB, as they attempt to balance public health concerns with the maintenance of patient autonomy and quality of life. The remote environment and the cultural complexities of providing health care to Indigenous Australians compound these challenges. Indigenous people demonstrate lower TB cure rates than other populations, with increased vulnerability of remote Indigenous communities to disease outbreaks. There is a considerable body of literature exploring these issues in developing countries but little documentation of these complexities in the Australian remote Indigenous context. This case report will explore the challenges of diagnosis and coordination of treatment of TB in a very remote Indigenous community and the impact this process had on the physical, social and emotional wellbeing of an individual. The patient has provided written consent for the publication of this case report. --- Issue The index case is a middle-aged Aboriginal woman residing in a remote Cape York Indigenous community. She initially presented to the local primary healthcare centre (PHCC) with cough and fever, at which time a local outpatient chest radiograph demonstrated a novel apical lung lesion. Multiple serial radiographs obtained over a few months showed non-resolution of the lesion, and a CT scan was recommended for definitive diagnosis, a service that requires transfer to the closest tertiary hospital, more than 800 km away.
The patient did not attend for this scan and was lost to follow-up. The patient re-presented to the PHCC 3 years later and a chest CT scan, with findings suggestive of TB, was completed. Sputum sample smears confirmed Mycobacterium tuberculosis 3 months post-CT. Although at this stage she was asymptomatic, these results confirmed what appeared to be an autochthonous case of active pulmonary TB from an unknown source. As per guidelines and under the remote supervision of the state-led TB control unit, the patient was admitted to the local remote hospital to commence the first 2 weeks of drug treatment. Attempts at isolating her in hospital for induction treatment proved unsuccessful as the patient self-discharged. The patient continued to engage with the health staff in the community, although her adherence to medical treatment and appointments was inconsistent. To reduce the public health risk, the TB unit advised the patient to cease her welfare/government supported employment at the local childcare centre. The loss of her work role appeared to precipitate an increase in alcohol and cannabis use, resulting in chaotic social interactions and loss of routine. This complicated the medical management of this patient, with the health service eventually using a modified directly observed therapy (DOT) approach to optimise medication adherence. --- Treatment strategies Following the initial failed hospital admission, DOT was trialled in the community. Initially, a clinical nurse consultant, with support from an Indigenous Health Worker (IHW), delivered medication Monday to Friday, with weekend medication left with the patient on a Friday. This resulted in suboptimal adherence as locating the patient was problematic. The period directly after diagnosis and subsequent withdrawal from work was the most chaotic for the patient.
The patient's loss of a consistent daily structure that would ordinarily support treatment adherence significantly affected the delivery of treatment in the first few months. Alternative treatment regimens were trialled, including three times weekly treatment and provision of take-home medications for weekends; however, there remained uncertainty about adherence when self-administering. Finally, in consultation with the patient, a pre-packed medication system (Webster-pak) was employed. Initially, the registered nurse and an IHW observed self-administration of medications from the Webster-pak daily. As the use of the Webster-pak did not require a registered nurse to administer medication, case management of the patient's care could be transferred to the IHW, who was able to build a relationship with the patient to facilitate more reliable treatment adherence. Visits were short (approximately 5 min) but daily. Through building a medication routine acceptable to the patient, the IHW could gradually reduce the level of support provided to the patient to ensure medication adherence. This short-term intensive support to develop a medication routine had positive unintended consequences, such as improving the consistency of the patient's daily routine, reducing alcohol use and improving her nutritional intake. The patient also re-engaged with her employment provider. She ceased medication after 14 months of therapy and, subject to ongoing surveillance, is thus far considered cured of TB. --- Lessons learned This case report describes the diagnosis of TB in an Aboriginal person residing in a remote community in Cape York and the treatment strategies that resulted in successful disease eradication.
Although the desired outcome of cure was achieved, limited access to timely and appropriate health services and the lack of engagement with the patient outside of their immediate medical needs resulted in delayed diagnosis, extended treatment requirements and significant disruption to the patient's work and social roles. Two interrelated categories can be used as a framework for exploring the challenges faced by the patient and health service in treating this case: health service and environmental factors; and individual, personal and lifestyle factors. --- Health service and environmental factors Several health service factors were identified through this case study as impacting on the provision of best practice for the patient. These included staff unfamiliar with diagnosis and treatment of TB, poor access to diagnostic tools such as CT scanning, paper-based records restricting timely communication between local hospital and primary health care staff, and a lack of multidisciplinary involvement to reduce the burden of the disease on the patient's social and occupational roles. Both patient delay in seeking health care and health systems delay contribute to delayed diagnosis and treatment, potentially increasing morbidity and public health risk, as in this case. Previous research has reported Indigenous Australians have been less likely to experience a delay in diagnosis than other groups. When this patient initially presented with cough and fever, and radiographic evidence of an apical lung lesion, the index of suspicion for TB should have been higher, with unfamiliarity and the relative rarity of this illness in Cape York possibly contributing to this delayed diagnosis. Access to CT imaging for all residents in Torres Strait and Cape York involves air travel to Cairns and at least one overnight stay.
This can be a significant burden for patients who are responsible for caring for others (children, elders) or have other family or community responsibilities, and navigating the complexities of tertiary level services and location has previously been demonstrated to present a barrier to accessing services for remote Indigenous residents. Initiation of treatment was also compromised by distance, as medications took time to arrive from Cairns and did not arrive prior to the patient's self-discharge, resulting in further delays to continuation of treatment and an associated increase in public health risk. In Australia, following the diagnosis of active TB, treatment is recommended to commence within 72 hours, with a 2-week hospitalisation recommended at the initiation of treatment. These protocols were developed without reference to this patient's remote community setting and hospital service, and in this case the execution of the TB protocol was not well considered from an organisational perspective. Unfamiliar with TB, health service staff were less comfortable managing this patient. This resulted in staff focusing on the biomedical and risk management aspects of this infectious disease without considering how best to support someone, who felt relatively well, to stay in hospital for 2 weeks. Two attempts at hospitalisation resulted in self-discharge despite the patient's willingness to treat the TB and her agreement to the admissions. In hindsight, this was unsurprising, as there was no discussion with the patient or health staff about how the patient would spend her time in hospital. There was no referral to social work for input, and at the time the health service did not have an occupational therapy service that could support this patient to maintain occupational roles during the treatment process.
The combination of DOT and a case management model better served the needs of this patient and ensured her care was maintained within the complicated context of this remote health service. Strategies used by the case manager involved raising issues regarding patient engagement at the daily primary care staff meetings and weekly multidisciplinary team meetings. The success of case management models in the treatment of TB, especially when used in conjunction with DOT, has been reported within the literature and this model certainly reduced fragmentation of communication in this case (a problem accentuated by separate paper-based records between the hospital and PHCC). A lack of shared electronic medical records and electronic patient recall system meant that there was a heavy reliance on verbal and email communications, which were not always filed in a timely manner or sent to all relevant parties involved in patient care. This not only resulted in disjointed provision of service and missed opportunities for patient review, but also occasionally put staff at risk who were not aware that this patient, who presented sporadically to the emergency department, required airborne precautions for TB. It has been suggested that a common electronic platform of communication can improve management of TB, especially with respect to the complicated drug regimens in multi-drug resistant TB, and this certainly would have been helpful in this case. --- Individual, personal and lifestyle factors Individual and cultural factors affect the adherence to treatment in TB. In this case, the patient was agreeable to treatment, attended medical appointments and was open to discussions about her illness and the treatment regimen. 
Regardless of this cooperation, it quickly became obvious that competing cultural and personal priorities and the intensity of the treatment regimen were going to place the patient at risk of treatment failure and development of multi-drug resistant TB. The health priorities for many patients with chronic conditions are not disease control and eradication but the maintenance of daily social activities. Stigma from the community can limit willingness to engage in treatment, and a 'shame' factor was identified in this case as a significant barrier to daily DOT, with the patient embarrassed by being singled out by the health service in front of her family. As observed in this case with the transition of case management to the Indigenous health worker, the provision of a culturally competent health service in conjunction with medication adherence strategies (Webster-pak, simplified dosing) can improve treatment outcomes in Indigenous patients. Due to the perceived public health risk in this case, the patient was advised to withdraw from her role as a childcare worker, resulting in disrupted interaction with others and a degree of social isolation, with no redress for this offered by the health service. In patients with TB, as with many diseases, unemployment and low socioeconomic status are associated with a lower baseline health-related quality of life, and the psychosocial burden can sometimes be greater than the physical burden. The advice to cease employment was provided without the support of a social worker or occupational therapist. If the public health risk requires patients to restrict work or other social roles, efforts should be made to ensure this is implemented in consultation with relevant stakeholders and to enlist the support of a multidisciplinary team to mitigate the financial and psychosocial losses to the patient.
In some cases, it may be possible to negotiate alternative employment or social roles, which may have been helpful in this case. --- Conclusion This

Context: Tuberculosis (TB) is a serious infectious disease with high rates of morbidity and mortality if left untreated. In Australia, TB has been virtually eradicated in non-Indigenous Australian-born populations, but in remote Aboriginal and/or Torres Strait Islander communities TB presents a rare but significant public health issue. Remote health services are most likely to encounter patients with suspected and confirmed TB diagnoses but may be unprepared for supporting someone with this disease and the complexities of balancing public health risk with patient autonomy.
Introduction Out-of-hours primary care services (OPCSs) are intended for acute, but non-life-threatening, healthcare needs that cannot wait to be attended to in daytime general practice (DGP). 1 Timely access to an OPCS is pivotal for adequate delivery of primary healthcare and prevention of unplanned hospital visits. 2,3 For patients with multiple health problems, however, the acute-health-problem-focused approach of OPCSs could be disadvantageous. Continuity of care is hampered by limited or absent knowledge of the patient's medical history at the OPCS. The general practitioner (GP) on duty generally is not one's regular GP, and patient information exchange between the patient's DGP and the OPCS is challenging. [3][4][5] Information exchange is crucial in both directions, since subsequent to an OPCS contact patients often have follow-up contacts in DGP. 6 Additionally, quality of care is challenged by scarcity of time and high workload in an acute care setting. 7 People with a low socioeconomic position (SEP) were found to use more acute and unplanned healthcare services than high-SEP individuals, 8 whereas they would particularly benefit from continuity of care in a primary care setting. Individuals with low SEP more often experience worse health, with higher prevalence of chronic diseases and multi-morbidity, and at younger age, than more prosperous individuals. 9 Moreover, disadvantageous circumstances, such as unfavourable health behaviours and financial strain, more often accumulate among low-SEP individuals. 10 Consequently, their healthcare need is often more complex. Socioeconomically vulnerable individuals thus would benefit from the continuity of care and familiarity with the patient's background in DGP. 2,11,12 However, there lies a paradox between the needs of low-SEP patients, with their generally more complex health problems, and the generally limited resources available to them to put these needs into adequate action and benefit from healthcare.
The complexity imposed by multimorbidity necessitates skills that low-SEP individuals often lack. 13 Moreover, the 'inverse care law' dictates poorer availability of good quality healthcare for the people who need it the most, particularly where healthcare providers face strong market competition. 10 Although the strong primary healthcare system of the Netherlands is not characterized by strong market competition, and fosters equity in healthcare accessibility, 11 low SEP is related to more fragmented and inappropriate use of health and social services. 2 Suboptimal healthcare use may be reflected in higher rates of OPCS use by low-SEP individuals. In a previous study, we found that OPCS use was higher in each lower level of neighbourhood socioeconomic status. 14,15 It is unknown whether higher OPCS use by socioeconomically vulnerable individuals reflects worse health and resembles equal care for equal need. 16 The aim of this study was to determine whether a patient's SEP was associated with OPCS use, taking their health status into account. In addition, we aimed to determine whether the associations were stronger for patients with a chronic disease. To put the use of an acute care service in the perspective of a regular healthcare provider, we compared OPCS use with DGP use. We used electronic health record (EHR) data from a large number of DGPs linked to OPCS EHR data, 17 including healthcare use and health status of almost a million Dutch residents enlisted in DGPs. --- Methods --- Setting Every citizen in the Netherlands is enlisted in a DGP. The DGP has a gate-keeping role for specialist care and therefore is the first point of contact with the healthcare system. Consequently, the DGP EHRs represent the patient's most comprehensive medical record. 18,19 Acute primary healthcare out of office hours is provided by OPCSs with 50-250 affiliated GPs. Patients generally contact the OPCS by phone, after which a triage nurse assesses the level of urgency and the paired action (e.g.
consultation, home visit). The use of healthcare in DGP and OPCSs is fully covered by the national basic health insurance scheme and does not require any out-of-pocket payments. 1 --- Patient involvement and ethics approval Patients were not directly involved in this study. This study does not fall within the scope of the Medical Research Involving Human Subjects Act and therefore does not require ethical approval. General practices and Primary Care Cooperatives that participate in Nivel Primary Care Database are contractually obliged to: (i) inform their patients about their participation in Nivel Primary Care Database and (ii) inform patients about the option to opt out if they object to inclusion of their data in the database. 42 Dutch law allows the use of EHR data for research purposes under certain conditions. According to Dutch legislation, and under certain conditions, neither obtaining informed consent nor approval by a medical ethics committee is obligatory for this kind of observational study [Dutch Civil Law (BW), Article 7:458; http://www.dutchcivillaw.com/civilcodebook077.htm, Medical Research Involving Human Subject Act (WMO); http://www.ccmo.nl/en/nonwmo-research and General Data Protection Regulation (AVG) Article 24 (GDPR)]. This study has been approved by the applicable governance bodies of Nivel Primary Care Database under no. NZR-00317.017. --- Study population Data concerning DGP and OPCS use in 2017 were derived from routine EHRs from DGPs and OPCSs participating in Nivel Primary Care Database. 20 In total, 251 DGPs were included, with 1 013 687 listed patients, located in the catchment areas of 27 OPCSs. DGPs in strongly urbanized regions were slightly overrepresented. We linked DGP-enlisted patients with OPCS contact records. DGP enlistment was recorded per quarter of the year.
The majority of patients (>90%) were enlisted for the entire year; data of patients enlisted for only part of the year (due to, for instance, birth, death or relocation) were linked to OPCS data for the corresponding part of the year. Newborns in 2017 were excluded from the study population since they were not yet included in the population registry data that was used. The patient sample was linked to population registry data from Statistics Netherlands 21 and included household income, migration background (Western vs. non-Western) and household composition (living alone vs. not living alone). Patients were excluded from the analyses if they could not be linked to the socio-demographic data (n = 25 674, 2.5%) (Supplementary table S1). --- Measures --- Outcome measures OPCS use included claimed OPCS contacts of DGP-enlisted patients in (part of) the year 2017. Outcome measures included the number of contacts and dichotomized measures reflecting whether the patient had a contact or not during the year/part of the year (yes/no), and whether the patient contacted an OPCS twice or more (yes/no). Assessed urgency was included as at least one high-urgency contact (urgency levels U1-U3: U1 = life-threatening, U2 = acute and U3 = urgent; yes/no) and at least one low-urgency contact (urgency levels U4 and U5: U4 = non-urgent and U5 = self-care advice; yes/no). Additionally, contacts for acute health problems and contacts for long-lasting and chronic health problems in OPCS reflected the category of symptoms or diagnoses recorded according to the International Classification of Primary Care-1 (ICPC) code. 18,22 DGP use included the annual number of contacts and a dichotomous measure indicating whether an enlisted patient had at least one DGP contact in the year/part of the year 2017 (yes = 1/no = 0).
--- Independent variables Patient socioeconomic status was measured by net disposable household income, standardized for size and household composition. Patient income was categorized in quintiles ranging from 1 (low income) to 5 (high income), following from standardized percentiles based on the total Dutch population. 23 --- Potential confounders Patient characteristics included age (in age groups, table 1), sex, living alone (yes/no) and non-Western migration background (yes/no). Non-Western migration background included patients with one or two parents born in Morocco, Turkey, Suriname, the Netherlands Antilles or other non-Western countries. Chronic diseases/multimorbidity included the number of chronic irreversible illnesses (none, one, two, three or more) 24 on 1 January 2017, or on the first day in the quarter of the year the patient was enlisted in general practice. The presence of a chronic disease was derived from the EHR data using a method described elsewhere. 18 --- Stratification variables Data on ICPC-coded chronic diseases from the EHRs of general practices were used to define four subgroups of patients: diabetes mellitus (ICPC code T90), chronic obstructive pulmonary disease (COPD) and asthma (R91, R95 and R96), cardiovascular disease (CVD) (K74, K76-77, K86-87 and K90-92) and other chronic disease (any other ICPC code from the list of chronic diseases). 18,20 --- Statistical analyses To assess the probability of OPCS and DGP use according to the patient's income group, we conducted logistic regression analyses. To control for clustering of patients within practices, we applied two-level hierarchical models including patients (first level) nested within DGPs (second level). We adjusted the analyses for patient characteristics, e.g. age, sex and the number of chronic diseases, at the patient level. We additionally conducted stratified analyses for four chronic-disease-determined groups.
We reported age- and sex-standardized probabilities to evaluate the extent of the reported odds ratios in terms of effect size. All confidence intervals were set at 95% and analyses were conducted using the statistical software package Stata version 15.1. 25 Additionally, we calculated population attributable fractions (PAFs) to determine the proportion of OPCS and DGP use in the study population attributable to having a lower household income (groups 1-4) when compared with the most favourable income group (group 5). The PAF was calculated according to the following equation 26 : PAF = P(RR - 1) / [P(RR - 1) + 1], where P is the proportion of the population exposed to a level of income (income levels 1-4 vs. income level 5) and RR is the relative risk of DGP/OPCS use summed for the four income groups with income levels 1-4. --- Results Characteristics of our study population (N = 988 040) are presented in table 1. Regarding age, sex and household income, our population closely resembled the general Dutch population. 23 Individuals with a non-Western immigration background were overrepresented. More people were using healthcare in both OPCS and DGP in each subsequent lower income group. With each lower stratum of income, a higher proportion of individuals suffered from three or more chronic diseases, and from at least one of the specified chronic diseases. The second lowest income group had the highest proportion of patients with multimorbidity and CVD. For other chronic diseases, prevalence rates increased with each higher income group. Healthcare use in OPCS and DGP followed a similar pattern, with higher use rates for each lower income group (table 2). In the second lowest income group, the mean number of yearly DGP contacts was considerably higher compared with the lowest income group.
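The PAF formula given in the Methods above (Levin's formula) can be sketched as a small function. The inputs below are hypothetical round numbers chosen for illustration, not the study's actual exposure proportions or relative risks:

```python
def paf(p_exposed: float, rr: float) -> float:
    """Population attributable fraction (Levin's formula):
    PAF = P(RR - 1) / (P(RR - 1) + 1),
    where P is the proportion of the population exposed and
    RR is the relative risk in the exposed group."""
    excess = p_exposed * (rr - 1.0)
    return excess / (excess + 1.0)

# Illustrative only: if roughly 80% of the population fell in income
# groups 1-4 with a combined relative risk of 1.35 for OPCS use,
# the PAF would be about 0.22, i.e. in the region of the 22%
# reported in the Results for at least one OPCS contact.
fraction = paf(0.8, 1.35)
```

Note that the PAF grows with both the exposed proportion and the relative risk, which is why the largest PAF in the Results (41%, for two or more OPCS contacts) corresponds to the outcome with the largest inequalities.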
Regarding OPCS contacts, inequalities were observed across all types of contacts, particularly contacts for a chronic health problem and low-urgency contacts. (Please refer to Supplementary table S2 for the mean number of OPCS and DGP contacts stratified by chronic disease.) In table 3, we quantify the size of socioeconomic inequalities in the probability of having had at least one OPCS contact. Individuals from the lowest income group had a 48% higher probability of at least one OPCS contact than those in the highest income group. The extent of the inequalities was broadly similar for high- and low-urgency contacts, and for contacts for an acute health problem. Inequalities were largest for the probability of two or more OPCS contacts in a year, and for contacts for a chronic health problem. Inequalities between income groups were substantially smaller for the probability of a DGP contact in 2017. Compared with the highest income group, individuals with the lowest income had a 17% higher probability. The probability of contacting an OPCS at least once a year attributable to not being part of the highest income group was reflected in a PAF of 22%. The largest PAF was observed for having had two or more OPCS contacts, with 41% of OPCS use attributable to being part of a lower income group. In comparison, a marginal PAF of 4% was observed for DGP use. Income inequalities in OPCS use were larger within patient groups with a chronic disease (table 4) compared with the total study population (for instance: lowest income group OR 1.60, CI 1.53-1.67 for CVD patients vs. OR 1.48, CI 1.45-1.51 for the total study population), mainly due to larger inequalities between the lowest and the second lowest income groups. In table 4, we compare OPCS use with DGP use for patient groups with a chronic disease. Income inequalities regarding DGP use were much smaller than for OPCS use for these patient groups.
Compared with the total study population (table 3), income inequalities in DGP use were somewhat larger for patients with COPD/asthma (lowest income group: OR 1.25, CI 1.18-1.33 vs. total study population: OR 1.17, CI 1.15-1.19) and for patients with diabetes. In the group of patients with CVD, inequalities were smaller (OR 1.11, CI 1.04-1.18) compared with the total study population. The probability of an OPCS contact due to not being part of the highest income group was larger for patients with CVD (PAF 25%) and patients with diabetes (PAF 24%) compared with the total study population. For DGP use, the PAF for patients with a chronic disease was somewhat smaller compared with the total study population. --- Discussion --- Key findings We observed inequalities in both OPCS and DGP use, reflected in higher use rates within every lower stratum of household income. Inequalities in OPCS use were considerably larger than in DGP use. These inequalities persisted when taking the patient's health status into account. Among patient groups with COPD/asthma, CVD or diabetes, income inequalities in OPCS use were larger than in the total population. The extent of inequalities in DGP use between income groups was quite similar for patients with a chronic disease and the total study population. --- Study strengths and limitations The use of routinely recorded EHR data enabled us to study a large, nationally representative patient sample. The recorded chronic diseases were either diagnosed by the GP or a specialist and are therefore more reliable indicators than self-reported diseases. 19 Our study results may have been biased due to limitations of the data and the applied methods. First, the use of household (disposable) income as an indicator of socioeconomic status provided us with a robust measure that was routinely registered by tax registries.
Income, however, adequately classifies groups in the productive age bands but may be less adequate for people of younger and older age due to their looser attachment to the labour market. Different measures of SEP each have their advantages and disadvantages. For instance, wealth is a more appropriate measure for older age groups, 27 but less so for younger age groups. 28 The use of household income in this study consequently suboptimally classified both younger and older people. Secondly, health status was measured by the number and nature of chronic diseases as recorded in DGP. Nevertheless, we were unable to quantify the severity of the generally more complex health problems of socioeconomically vulnerable patients. Our operationalization of health status therefore likely underestimated the healthcare need of low-SEP individuals, and the extent to which this could account for the observed inequalities in OPCS and DGP use. --- Interpretation of key findings Our results showed that socioeconomic inequalities in OPCS use could not be explained by differences in health status and that they were larger than inequalities in DGP use. A previous study also indicated that attendance of OPCS was higher in low-SEP patients after adjusting for health status. 29 The larger income inequalities for OPCS use compared with DGP use likely ensue from factors additional to, and interacting with, the patient's health status. For instance, limited health literacy, need for reassurance, perceptions of illness and doctor-patient communication likely contribute to inequalities in use patterns between SEP groups. 30,31 Limited health literacy, for example, may inhibit finding the way through the healthcare system, 32 whereas poorer doctor-patient communication leads to misinterpretation of the patient's care need.
31 Moreover, people with low SEP may experience more difficulty in waiting for an appointment in DGP the next working day, and turn to an OPCS for immediate relief of their worries. 5 The larger income inequalities in OPCS use among patients with a chronic disease suggest a different healthcare need among chronically ill patients with low SEP. Due to the clustering of health and (psycho)social problems, and more severe comorbidity, 12,33 care coordination and continuity of care for these patients in DGP are more challenging. 10,12,13,34 Their care needs likely demand more time than DGPs are able to spend on their patients. 33,34 Additionally, these patients may have difficulty obtaining other healthcare and social services and therefore may experience unmet needs. 2,35,36 The higher OPCS use therefore may be a reflection of the inverse care law as a result of impeded access to DGP for low-SEP individuals. 5,10,13,37 --- Implications for research and practice The results suggest that OPCSs fill a void in healthcare needs for socioeconomically vulnerable patients, particularly among the chronically ill. As such, OPCSs contribute to equity in healthcare access by providing low-threshold care. On the other hand, using OPCS services comes with the downsides of acute healthcare, such as lack of continuity. 4 Ideally, from a continuity of care perspective, DGP would be even more sensitive to the more complex care needs of vulnerable patients, to prevent them from seeking care in OPCSs. 35,38 Additionally, coordination and continuity of care between DGP and OPCS should be improved by better information exchange and close involvement of the patient 36 to more adequately address the patient's needs, resources and skills. The higher OPCS use within lower income groups, as reflected in the PAF, appears to be additional to DGP use. Therefore, overall healthcare use and the workload of GPs increase.
Since OPCSs increasingly experience difficulties in filling vacancies and voids in work schedules, the sustainability of accessible OOH primary care is at stake. 7 How to relieve the high workload of both OPCS and DGP should be the subject of further study, for instance by studying the effect on workload of scaling up OPCS healthcare professional staff by employing nurse practitioners 39 and integrating social support services. 38

We found substantial income-related inequalities in OPCS use, the more so when compared with inequalities in DGP use, particularly among patients with a chronic disease. These inequalities suggest that OPCS meets a healthcare need of vulnerable groups additional to healthcare provided by DGP, particularly among individuals with low SEP and chronic disease. Optimization of care coordination in DGP and between DGP and OPCS should be considered to address the generally more complex care needs of socioeconomically vulnerable patients and preferably reduce OPCS use.

Table notes: SB, standardized probability by direct standardization for age and sex; OR, odds ratios from multilevel logistic regression analyses; CI, confidence intervals; ICC, the intra-class correlation between daytime general practices: the relative contribution due to clustering of patients in DGP to the variation unexplained by characteristics related to the patient level; PAF, population attributable fraction for income groups 1-4 vs. the highest income group. Models adjusted for age groups, sex, living alone, non-Western immigrant background, number of chronic disease episodes and random effect at DGP level.

--- Key points
• SEP is related to worse health, generally following a gradient with less favourable outcomes for each lower level of SEP.
• Low SEP is associated with higher healthcare use rates of OPCSs.
• Income-related inequalities in OPCS use appeared to be only partly related to health status.
• Income-related inequalities in OPCS use were particularly large for patients with a chronic disease, and they were larger than inequalities in DGP use.
• These findings suggest that OPCSs address an additional healthcare need of socioeconomically vulnerable patients, particularly among patients with low income and chronic diseases.

--- Data sharing statement
Results are based on calculations by the researchers of this paper using non-public microdata from Statistics Netherlands. Under certain conditions, these microdata are accessible for statistical and scientific research. For further information: microdata@cbs.nl. The unpublished statistical code and raw data files excluding the microdata of Statistics Netherlands are available upon reasonable request from the authors.

--- Supplementary data
Supplementary data are available at EURPUB online. Conflicts of interest: None declared.

Background: Low socioeconomic position (SEP) is related to higher healthcare use in out-of-hours primary care services (OPCSs). We aimed to determine whether inequalities persist when taking the generally poorer health status of socioeconomically vulnerable individuals into account. To put OPCS use in perspective, this was compared with healthcare use in daytime general practice (DGP). Methods: Electronic health record (EHR) data of 988 040 patients in 2017 (251 DGPs, 27 OPCSs) from Nivel Primary Care Database were linked to sociodemographic data (Statistics Netherlands). We analyzed associations of OPCS and DGP use with SEP (operationalized as patient household income) using multilevel logistic regression. We controlled for demographic characteristics and the presence of chronic diseases. We additionally stratified for chronic disease groups.
Results: An income gradient was observed for OPCS use, with higher probabilities within each lower income group [lowest income, reference highest income group: odds ratio (OR) = 1.48, 95% confidence interval (CI): 1.45-1.51]. Income inequalities in DGP use were considerably smaller (lowest income: OR = 1.17, 95% CI: 1.15-1.19). Inequalities in OPCS use were more substantial among patients with chronic diseases (e.g. cardiovascular disease, lowest income: OR = 1.60, 95% CI: 1.53-1.67). The inequalities in DGP use among patients with chronic diseases were similar to the inequalities in the total population. Conclusions: Higher OPCS use suggests that chronically ill patients with lower income had additional healthcare needs that were not met elsewhere. Our findings fuel the debate on how to facilitate adequate primary healthcare in DGP and prevent vulnerable patients from having to resort to OPCS use. |
Introduction Papadopoulos (2011: 432) contends that 'every epoch has its brain' and that the concept of brain plasticity 'occupies the brain-body imaginary of the contemporary epoch'. 1 The idea that the capacity of the brain is not fixed, that it is an organ with the potential to adapt and change, underpins and finds expression in the current scientific research and wider public interest in pharmacological cognitive enhancement. The possibility of increasing 'brain power' through pharmaceuticals - sometimes referred to colloquially as 'smart drugs' - has gained considerable prominence in popular culture, science magazines, and the wider media, as well as in policy debates. 2 For example, in recent mainstream big-budget films such as Lucy (2014) and Limitless (2011), drugs with potent powers of enhancement enable the central characters to overcome the limitations of the 'normal' human brain and thereby exert their influence on the world around them. In a more modest vein, media reports have discussed 'normal' people taking drugs believed to enhance cognition in the context of employment, including speculation about 'How Smart Drugs and Cybernetics Could Create a Superhuman Workforce' (Louv, 2012; see also 'The Pharmaceutical Path to a Superhuman Workforce', 2012). At the same time, a number of prominent policy-orientated reports have discussed the possible economic and social benefits of cognitive enhancement drugs (see, for example, Academy of Medical Sciences, 2012; British Medical Association, 2007). Notwithstanding the distortions of the scientific concept of brain plasticity within its popular manifestations in the media and popular culture, the idea that drugs have the power to enhance the brain or unlock its potential is consistent with a general turn to pharmaceuticals to solve a whole range of problems and achieve desirable ends for individuals and society - 'a pill for every ill' (Beaconsfield, 1980).
3 This tendency has been described as 'pharmaceuticalization' (Abraham, 2010; Busfield, 2010). It is also expressive of the prevalent view of brain functioning as essentially constituted by neurochemical processes and interactions, which themselves can be adjusted and readjusted through the use of pharmaceuticals. This 'psychopharmacological imaginary' (Rose and Abi-Rached, 2013: 12) appears to hold out the promise of helping people manage not only specific diseases of brain function, but also, importantly, aspects of ordinary everyday life (see, for instance, Fox and Ward, 2008; Williams et al., 2008). But at the same time as greater control seems to be available through pharmacology, Rose (2007) argues that the more we see ourselves in terms of brain chemistry, the more we become subject to neurochemical evaluation and intervention. Discussion of cognitive enhancement falls broadly into two areas: bioethical debate and sociological studies. Ethical discussions centre on two key issues. First, coercion versus free choice: whether individuals might seek to enhance themselves of their own volition, or because they may be required to enhance, or might feel pressured to do so, due to working conditions or to keep up with (enhanced) others in education and the labour market. The pressures for productivity or profitability, the impetus to reduce the costs of labour, and the current move towards more casualized employment conditions are some of the main drivers that could lead to coercion to enhance. Ethical debates tend to problematize coercion, but do not question what is assumed to be its opposite: 'free choice'. Analysing the context of the contemporary labour market and employment relations enables an understanding of the conditions under which this 'choice' comes to be seen by individual employees or students as possible or desirable.
Second, there is the question of fairness/equity in the access to such drugs and their outcomes: whether consumption of enhancement drugs might give an unfair advantage to some people who can afford them over others who cannot (Farah et al., 2004; Greely et al., 2008). Sociologically orientated work on cognitive enhancement has tended to use it to develop explorations of the concepts of medicalization, biomedicalization (Coveney, Gabe, and Williams, 2011) and pharmaceuticalization (Williams et al., 2008). Although this is not the focus of these studies, they have made some points on specific political economic aspects of cognitive enhancement. In particular, this has been discussed in relation to the management of sleep and how the 'customisation' and even potential 'optionalisation' of sleep provides opportunities for greater productivity, especially in the light of shift work (Coveney, 2011; Williams, Coveney, and Gabe, 2013; Wolf-Meyer, 2012). Other research has considered enhancement drugs in relation to the pressures associated with the characteristics of contemporary employment - including increasing demands for flexible labour, precarity, extreme forms of working, long working hours, 24/7 availability, and so on (see, for instance, Bloomfield and Dale, 2015; Smith and Land, 2014). Whilst acknowledging the importance of the issues addressed in such literature, here we seek to situate cognitive enhancement as part of a broader relationship between cultural understandings of the body-brain and the political economy. It is the body of the worker that forms the intersection of this relationship and through which it comes to be enacted and experienced. Through our analysis below, we argue that the use of pharmaceuticals has come to be seen not only as a way to manage our brains, but through this as a means to manage our productive selves, and thereby to better manage the economy.
More specifically, in this article we investigate the imaginaries that both inform and are reproduced by representations of pharmacological cognitive enhancement, drawing on cultural sources such as newspaper articles and films, as well as policy documents and pharmaceutical marketing material, to illustrate our argument. Previous studies have analysed media reporting on drugs such as modafinil (Coveney, Nerlich, and Martin, 2009; Williams et al., 2008), or the portrayal of a range of enhancement technologies in science fiction (Delgado et al., 2012). However, in this article we analyse a range of cultural sources, arguing that despite their differences, they also encapsulate a commonality in their construction of images of minds and brains, and their potential for enhancement. As the basis for our analysis, we contend that prevalent representations of cognitive enhancement are inextricably intertwined with the contemporary social context. As Hogle argues, The work that goes into both identifying and amplifying certain characteristics as being amenable to change and constructing certain traits as desirable does more than essentialize them as preferred human traits. Rather, it forms a circuit of enterprise, biology, medicine, and culture in complex relations to each other. In this sense, the traits being enhanced are not inherently natural but cultural. (Hogle, 2005: 703) We contend that the cognitive traits that are associated with pharmacological enhancement are predominantly concerned with making the body more productive and thus linked to particular characteristics that are seen as having economic worth, and thereby connected to the broader political economic context. Enhancement for the purposes of improving work rates is not new.
Historical examples of pharmacological modification of cognition include the military deployment of amphetamines and methamphetamine to improve attention and wakefulness during the Second World War (Bloomfield and Dale, 2015; Rasmussen, 2008). We can see this as part and parcel of the longstanding relationship between the economy and the body. This is captured in Foucault's remark that 'in fact the two processes - the accumulation of men and the accumulation of capital - cannot be separated' (Foucault, 1977: 221). Whilst the human body is the primary instrument of labour, it is also a significant limitation on it. For some time now, the economy has demanded flexibility on the part of the labouring body (Martin, 1994), and in the current era of brain plasticity this translates into the idea of enhancing cognitive powers. Cognitive enhancement may thus be seen as a form of work on the body - an accumulation strategy (Harvey, 1998; Harvey and Haraway, 1995) - that aims to reshape it to fit the particular demands of the economy. Indeed, it is very noticeable that discussions in the news media of the potential for cognitive enhancement through the use of 'smart drugs' frequently focus on how this might improve various working or studying practices. Modafinil, a stimulant synthesized in the 1970s in the context of brain research and sleep, is medically prescribed for conditions such as narcolepsy. However, it has been deployed by various military forces (Moreno, 2008) and subsequently taken up within wider society as a cognitive enhancer, a 'smart drug'. For example, in 2011 Reuters Health reported on research into whether 'the sleep-fighting medication modafinil may boost the brain power of weary surgeons' (Joelving, 2011). Similarly, other coverage suggests that enhancement drugs might improve driver performance and safety (Diver, 2017; Margo, 2000).
In this guise, modafinil has also proved particularly popular amongst students (see, for example, Dietz, Soyka, and Franke, 2016) and others in high-pressure occupations, even though its efficacy remains a subject of scientific debate (see, for instance, Repantis et al., 2010), along with concerns about its known side effects. Paid work is multifaceted. It is about more than production or making a living. In capitalist economies, paid work is a source of social status, social interaction, and social identity formation. It has been argued that more and more social relations come to be viewed through the lens of economic worth, such that being 'productive' is socially valued and validated whilst there is a concomitant demonizing of those who are 'unproductive' (Fleming, 2015; Smith and Riach, 2016). More recently, conditions of employment have become framed within the language of neoliberalism. Mindful of the ubiquitous but often ill-defined use of the term (Flew, 2014; Lemke, 2002), we limit our reference to neoliberalism specifically to the forms of governmentality articulated by Foucault in his lectures of 1978-9. Here he elucidates the development of ideas and institutions that promulgate the 'generalization of the economic form of the market...throughout the social body' (Foucault, 2008: 243). Foucault analyses neoliberal developments in relation to biopolitics: the ways in which life itself is 'put to work'. In this conception, where each individual might be seen as a 'micro-enterprise', investing in themselves to gain the best return on their own self as 'human capital', it becomes more comprehensible why some people might choose to, or feel they need to, turn to smart drugs and other technological interventions in order to succeed, compete, or even just survive in this 'enterprise society' (ibid.: 226).
With the growth of neoliberal economics, the focus has shifted to the responsibility of the individual to work on their own body and, we would add, their brain. This then becomes part of an individual project of self-construction - an 'ethic of personal self-care and responsibility linked to modifying the body' (Pitts-Taylor, 2010: 639) - whilst remaining tied to the wider political economy. It is in this context that we would note there is a continuity between students taking cognitive enhancement drugs as a study aid and employees taking them to perform better at work. As one prominent headline put it, 'Students Used to Take Drugs to Get High. Now They Take Them to Get Higher Grades' (Cadwalladr, 2015; cf. Williams et al., 2008). Within the context of neoliberal biopolitics articulated by Foucault, there is a subtle shift where education becomes one of the means by which individuals can improve and actualize their 'human capital', postponing earning opportunities in the present in order to invest in their future employability and earnings potential (Foucault, 2008: 228-30). In the light of this, the consumption of pharmaceuticals for cognitive enhancement can be seen as a potential tool for the worker acting as an 'entrepreneur of the self' (ibid.: 226). One example of pharmacological enhancement that perhaps particularly well illustrates its interrelationship with contemporary employment is the renewed interest in lysergic acid diethylamide (LSD) used in very small quantities as a potential spur to creativity, especially in the context of occupations such as software engineering (Karim, 2017; Kuchler, 2017). In conditions of uncertainty and competition over jobs, even for those in the professions - including the much-discussed replacement of human workers with robots and AI and the casualization of workers' contracts and rights in the so-called 'gig economy' - workers seek the means to reduce their precarity and increase their 'competitive edge'.
In contemporary 'knowledge-based economies', it is the 'gold in workers' heads' that is particularly valued, including such skills as creativity and innovation. Thus it is not surprising to find enhancement practices focused on these traits. In summary, cognitive enhancement resonates with the prevailing political economic order, in particular its valuation of productive performance and the associated expectations on individuals to take responsibility for realizing their own potential in order to achieve this. We contend that this ethos produces a commonality that runs through the seemingly diverse representations of cognitive enhancement that we analyse in this paper, ranging from fantasies based upon the acquisition of superhuman skills, through cognitive enhancement, to more 'mundane' pharmaceutical interventions aimed at managing cognitive functions such as alertness and attentiveness, as well as attempts to attain greater focus and improved memory. --- Imaginaries of cognitive enhancement In order to further explore how cognitive enhancement is represented and understood in everyday life, we deploy the analytical term imaginaries as a means of highlighting the connection between ideas, imagery, and context (on this, see for example Bloomfield and Doolin, 2011; Le Doeuff, 1989; Macnaghten, Kearnes, and Wynne, 2005; Taylor, 2002). Imaginaries relate in part to the cultural images and ideas that circulate, as well as to the various ways in which people relate to them by interpretation, incorporation, and rejection, often contradictorily, sometimes explicitly, and sometimes without intention or conscious deliberation. Therefore, we purposefully refer to 'imaginaries' in the plural not the singular. The imaginaries we explore express contestations and struggles surrounding cognitive enhancement.
We do not consider here how these imaginaries of enhancement are received - it is enough to note that we do not see them as deterministic or predictable (O'Connor and Joffe, 2015; Racine and Forlini, 2010). Instead, we aim to elaborate specific aspects of the social, cultural, and economic context from which they emerge, and which they in turn help reproduce. The imaginaries of cognitive enhancement that are explored in this paper are not passively derived from current dominant images in society, nor are they simply abstract fantasies of how society might be different: They stand in the interstices between such images and those future states (Dawney, 2011; Gatens, 1996). Imaginaries have material effects; they are intrinsic to the possibilities of action, because they hold out the prospect of a future path for the individual and thus motivate desire and choices. Dawney's (2011: 538) development of the concept of imaginaries as 'material, embodied and affective' takes this further: Ideas and imaginings do not cause practice: they are practices. In other words, to position the imagination in the realm of ideas alone runs the risk of excluding a consideration of the immediate, sensate and embodied modes through which imaginaries come to be experienced and felt. (ibid.: 539) However, imaginaries have a material effect not only for individuals, but also through how they come to frame future possibilities through cultural and scientific understandings. For example, in 1628 William Harvey described the circulation of the blood through the pumping motion of the heart. Since then, the major imaginary of the heart as a functional organ has treated it as if it is a pump, even though this ignores important aspects of its electro-biochemical characteristics. In the 20th century, doctors worked out how to replace a faulty heart with a device that was indeed very like an electrical pump (Laurance, 1995; Dale, 2001: 94-5; Sawday, 1995: 31).
Our analysis below shows that the contemporary imaginaries of cognitive enhancement present in a diversity of sources are significantly embedded in cultural constructions of what might be described as 'the productive body' (Guéry and Deleule, 2014): that body which is made fit for work and employment. However, beyond the immediate demands of the working body, these imaginaries express aspirations for performance that are characterized by an increased emphasis on achievement, personal development, and realizing one's potential. The article draws upon a variety of English-language sources of material that refer to pharmacological cognitive enhancement deriving from the period 1997 to 2017. The selection of a range of sources was shaped to a certain degree by our awareness of the interplay or cross-referencing between them. Along with the growing media (including Internet) coverage and public discourse surrounding the topic of cognitive enhancement, and with its manifestation in popular culture through films such as Limitless and Lucy, it was interesting to observe that the latter became drawn upon in those media reports as a means of narrating the topic to their audience. Furthermore, informed by other research on drugs and the brain that examined the role of industry advertisements (see, for instance, Singh, 2007; Tone, 2009), we too chose to consider the marketing materials for the drug modafinil. We view our sources as related cultural or social manifestations of the notion of brain plasticity in general, and the imaginaries of cognitive enhancement in particular. In short, we regard our chosen sources as 'public fragments of social consciousness that work (albeit loosely) in concert, encouraging people to reason, know and fashion their worlds in particular ways' (Kroll-Smith, 2003: 627).
Accepting this commonality, it is nonetheless useful to distinguish the particular characteristics of the sources, as they each have different relations with the portrayal of the enhanced or modified brain in everyday life. The first category of sources comprises portrayals of cognitive enhancement in popular culture, specifically the recent films Limitless and Lucy. Although explicitly fictionalized narratives expressing fantasies far from everyday experience and current possibilities, they invoke distorted or exaggerated (pseudo)scientific ideas about the brain and the possibilities of pharmacological enhancement, and in doing so tap into some key desires and anxieties about its implications. Second, we draw upon international English-language media reports of pharmacological cognitive enhancement, particularly newspaper articles. Whilst at one level, some news coverage purports to present factual accounts of scientific developments in brain science, neuroenhancement, or the use of 'smart drugs' amongst particular groups in society, it is also involved in the construction of particular imaginaries about such matters. Indeed, since there are currently no drugs that are licensed to be prescribed or marketed as cognitive enhancers, these reports are inherently involved with the formation of narratives of what these 'smart drugs' are and how they are used (see Kroll-Smith, 2003). Such accounts typically refer to the off-label use of prescription drugs such as modafinil and Ritalin, usually associated with diagnoses such as narcolepsy and attention deficit hyperactivity disorder (ADHD), respectively, but consumed (without a medical prescription) by people who have not been given these diagnoses, with the aim of improving cognitive function. Scientific studies are mixed in their conclusions as to whether brain functioning is improved, and in any case laboratory studies are hardly representative of everyday work life.
Overall, the drugs predominantly promote wakefulness rather than increase cognitive ability. Nevertheless, many media accounts relate the (supposed) improvement in brain ability to the possibilities of, or at least desire for, increased performance - such as in study or at work - and thus the pursuit of self-development promoted by neoliberal discourse. For instance, taking a snapshot of coverage in UK national newspapers in 2016 revealed 20 unique reports on the topic of modafinil or smart drugs and the brain, of which 18 referred to enhancement (positively or negatively) in the context of performance in study or at work. 4 We contend that such examples draw upon particular ideas and social values - imaginaries - surrounding self-realization, employment, and the social valuation of productive effort (characteristics associated with neoliberalism), but at the same time offer imaginaries of cognitive enhancement that in turn reproduce those ideas and values. The third category we draw upon is marketing material for the drug modafinil (and its variants). We chose to concentrate on modafinil, rather than other substances that are discussed as enhancers, for several reasons. First, it is the most common substance referred to in media reports. Second, there has already been a more formal crossover of the drug into work environments: It has been used in the military and discussed in relation to long-distance driving as a potential aid to safety, and experiments with it have taken place in the medical and surgical field (Krueger and Leaman, 2011; Sugden et al., 2012; see also Bloomfield and Dale, 2015). Third, there was a meta-analysis of scientific studies of modafinil in 2015 (Battleday and Brem, 2015), which led to its being labelled 'the world's first safe smart drug' in newspaper reports (Thomson, 2015), thereby increasing its visibility. And fourth, the advertisements for modafinil specifically relate its use to employment.
Official promotional material from the pharmaceutical industry does not represent cognitive enhancement as such, since it is not allowed to market drugs for anything other than their licensed uses. 5 However, we contend that the adverts nonetheless offer imaginaries about the relationship between the brain and potential pharmaceutical interventions in its functioning; for instance, in terms of restoring alertness, attention, or wakefulness in sleep-deprived individuals. The imaginaries typically deployed in promoting a drug refer to both its power to transform an individual's condition and the future self that they hope to become. One of the ways in which industry seeks to convey product information to consumers is through the use of narrative devices and associated imagery centred on clearly recognizable as well as believable characters; individuals that one can identify with. In this regard, the promotional material for pharmacological drugs is often no different (see Frosch et al., 2007; Rasmussen, 2008; Singh, 2007; Tone, 2009). Such material and its associated imaginaries present possible identities that the observer can 'try on' to see if they would fit into the imagined future that is portrayed therein. The envisaged use of the product that is being promoted may thus become thinkable as a first step towards its actually being acquired. Moreover, imagined futures have an emotional component, and this is exploited in a number of advertising campaigns for various pharmaceutical drugs (Frosch et al., 2007). There is also unofficial, unregulated marketing material from online retailers who overtly promote the supposed cognitive enhancement potential of these drugs, with some making an explicit link to the Limitless film, offering the 'real' smart drug.
This material includes narratives that are presented as the experiences of those who have tried the drugs (though obviously this cannot be verified, and the experiences are clearly portrayed in a particular way, since they are made available on sites designed to promote the sales of the drugs). It also includes online discussions between users, and other information that is presented as factual about the drugs. Fourth, we draw upon two policy reports that have been published in the UK on the use of drugs for the purposes of enhancement: the Academy of Medical Sciences 2012 report on Human Enhancement and the Future of Work, and the British Medical Association's 2007 publication Boosting Your Brainpower: Ethical Aspects of Cognitive Enhancement. These are relevant because they bring in discussions that cut across the scientific and policy communities and seek to construct future-orientated activity, especially with regard to the economy. As we have already noted, despite the seeming diversity of these materials, it is the strands of commonality between them that enable us to better understand the ways in which the associated representations of the brain relate to the cultural context out of which imaginaries of enhancement emerge as immanent potentialities. This can be shown by briefly illustrating the interrelationships between the different sources. For example, newspaper reports pick up on scientific and policy discussions that they then re-present in a popular, digestible form. Similarly, references to 'the real life "Limitless" drug' can be found in newspaper headlines and online pharmacies for modafinil; and reviews of the film claim that it was based upon modafinil. 6 Our main argument in what follows is structured according to three analytical themes followed by a concluding discussion. In the first of these, Mind over matter?, we explore the commonplace imagery of the brain that is deployed in popular coverage of enhancement. 
The second theme, Valuing productivity and performance, examines the connection between imaginaries of enhancement and the social valuing of productivity and performance, especially in paid employment. The third theme, Enhancing the economy, considers how management of the (neoliberal) self is but a microcosm of broader managerial efforts to organize the world. Noting Wolf-Meyer's (2009: 13) point about the 'need to understand the economy as an always embodied practice', we illustrate this theme by reference to efforts to exercise pharmacological control over alertness, wakefulness, and sleep. --- Mind over matter? In the film Lucy, we see portrayed a fantasy of total control in which the mind-brain has power not just over the individual's body, but the external world too. For example, Lucy is instantly able to accomplish complex tasks involving adept physical coordination by the sheer power of thought/knowledge. At the beginning of the film she is unable to drive, but once the drug has enhanced her brain she transforms into someone who can skilfully weave a car through fast oncoming traffic. Similarly, through the power of the drug Lucy can immediately understand languages that she could not previously speak, and is able to wield a gun like a professional. The embodied nature of human skill acquisition and practice is ignored: Her new abilities are derived internally, as it were, directly from her brain power -she just knows what to do and is able to do it. This portrayal of the possibilities of cognitive enhancement sidesteps any understanding of how learning comes from embodied interaction with the world. Moreover, as Lucy's powers develop she becomes able to exercise telekinetic and other powers over matter itself. Everything becomes subject to her will, which can be perfectly enacted because of the realization of the full potential of her brain. 
Although easily dismissed as science fiction fantasy, Lucy may be better understood as the extension of current lines of thinking pushed to their limits. This imaginary of cognitive enhancement shows both connection with and contrast to the current dominant 'materialist' view of the mind-brain, where mind is understood as a property of the brain, which is an organ of the body (Rose and Abi-Rached, 2013: 1). Here, on the one hand, we see the brain as an organ of the body whose biological capacity can be extended through pharmaceutical substances. On the other hand, it is seen as capable of being enhanced as if it were autonomous and almost separate from the body, in a way that is suggestive of a power of mind that goes beyond its biological nature. This evokes images of an enhanced brain, which is now able to 'pull along' a body that can be perceived as a constraining factor due to its biological limits. Lucy's body before it is enhanced is weak and deficient - she is not able to resist those who forcibly turn her into a drugs mule. Once she is at the full extent of her extraordinary pharmaceutical enhancement, however, her brain becomes all-powerful - indeed, so much so that the biological body is no longer able to contain it, and she becomes a supercomputer before dissolving and leaving her superior knowledge behind on a flash drive. In this strange sequence, we see the dream of overcoming and even entirely transcending the physical body, a fantasy that resonates with a long history of denial and degradation of the body (Turner, 1984). We suggest that there is a residual Cartesianism in this imaginary, which coexists and is in tension with the prevalent materialist view. Within more mundane everyday examples of imaginaries around cognitive enhancement we can see similar themes. Newspaper reports about the use of 'smart drugs' emphasize how they provide a means to transcend biological limits.
Headlines include 'In the City That Never Sleeps...Traders Stay Up on "Smart Drugs"' (Dean, 2013), 'Public Servants Used Drug, Modafinil, to Stay Awake to Complete the Federal Budget on Time' (Farr, 2014), 'Drug-Taking: Think What We'd Achieve If We Never Slept a Wink' (Clay, 2012), and 'Smart Drug Helps You to Sleep Less and Think More' (Lay, 2015). All of these examples rely on an imaginary where the limits of the tired body can be overcome by an enhanced brain. The visual imagery that is often deployed next to articles such as these is also telling: for example, a brain with coloured lines radiating out of it, and a brain with sections brightly lit up (Petrow, 2013; Lay, 2015). Again, a notable common theme with these pictures is that the brain is shown as a single self-contained organ abstracted from the rest of the body; it appears to be able to exist and function alone and independently, as a disembodied agent. This somewhat mechanistic view of the brain runs through much of the discussion and visual representations of cognitive enhancement. The problematizing of the body is something that can also be discerned in the official adverts for Provigil (an early tradename for modafinil) in the USA. These directly promote the use of the drug for patients reporting symptoms of excessive sleepiness (ES). For example, a dramatic transformation from sleepiness to alertness is captured explicitly in an advert for Provigil that appeared in professional journals such as Psychiatric News. 7 Under the caption 'Cut through the fog of ES with PROVIGIL', we see an image of a female clinician in the foreground who is striding ahead, bright and alert. In the background lurk several other figures, all bearing the trappings of their employment, but shrouded in fog and looking tired and deadbeat from their work.
A series of related images in other adverts similarly portray the 'before' and 'after' message, with the former depicting tired, aching bodies and downcast eyes, and the latter showing figures who - after taking modafinil - are energetic, refreshed, and committed to their work. The suggestion that is carried through this promotional material is one that poses the chemically enhanced brain as a 'solution' to the failing body. As Elliott (2003: 13) notes, 'Enhancement technologies are usually marketed and sold by taking advantage of a person's perception that she is deficient in some way'. The tensions that we see in the imaginaries of cognitive enhancement illustrate the complex and continuing power of Cartesianism, with its mind-body dualism, and hierarchical assumptions that the mind constitutes the active subject whilst the body is mere matter. The 'brain' in some ways occupies an ambiguous and ambivalent position: sometimes it is equated with the power of mind (over that of matter), and sometimes it is equated with the limitations of that biological matter (as an organ of the body). Furthermore, despite the prevalent idea that the mind is now embodied in the brain, which is another organ of the body, there is an assumption that enhancing the mind-brain can be secured without risk or detriment to the passive body, with 'side effects' considerably downplayed. The transformations of the body-brain depicted in films such as Limitless and Lucy, as well as in the adverts discussed above and in cultural representations of enhancement more generally, can be further analysed in terms of Sobchack's (2000) discussion of 'morphing'. This is a technique deployed in cinematic representations of radical bodily transformation. These metamorphoses depict bodily changes without any of the 'natural' biological processes involved; they are about 'making visible (and seemingly effortless) incredible alterations of an unprecedented plastic and elastic human body' (ibid.: 45).
In doing this, they ignore, obscure, or even write out the time and pain involved in the experience of bodily change: 'transformation without time, without effort, without cost' (ibid.: 50). For example, in Limitless radical cognitive enhancement is achieved (and sustained) through taking a fictional drug, 'NZT-48'. In this case, biological processes and time are doubly removed. First, because once a pill is consumed it moves from the outside to the inside of the body, and the changes it instigates themselves become invisible and no longer consciously thought of (Martin, 2006). Second, because the (imagined) changes of enhancement are refracted through the remnants of the Cartesian body. Thus, the dominant imaginary runs, these drugs produce specifically cognitive enhancement. Aside from the resultant powers that are bestowed by the pill (an ability to play the piano or speak other languages inevitably requires the body to behave differently, even if this is not acknowledged), the only sustained bodily changes that we see are relayed by a pronounced brightening of the irises of Eddie Morra, the central character. They seem to radiate, indexing cognitive prowess. Dramatic negative effects are visited on the body, but only as a consequence of the drug wearing off, the effects being quickly reversed once the drug is consumed again. There is thus an asymmetry as regards bodily processes: a seamless, almost disembodied transition to enhanced powers through the presence of the drug, and then a body rendered lumpen and dazed by its absence. The plasticity that is assumed and represented in morphing techniques can be related to the dominant image of the plasticity of the brain. But as Sobchack (2000: 45) notes, this plasticity comes with connotations and consequences for embodiment and social relations, 'rendering human affective states with unprecedented superficiality and literalism'.
At the same time, 'the plasticity of the image (and our imagination) has overwhelmed the reality of the flesh and its limits' (ibid.: 50). The bodily nature of the brain, with its biological time and processes, its emotions and interconnections, is effectively written out. Through this, the brain becomes rendered open to commodification and instrumentalism, both for the individual and within the broader political economy. Featherstone (1991) notes that the body in contemporary consumerist culture has become seen and experienced as plastic, and hence a lifestyle accessory, a thing to be sculpted, shaped, and 'stylized'. Similarly, Emily Martin develops this theme in Flexible Bodies, here placing it in an economic context, and arguing that 'flexibility is an object of desire for nearly everyone's personality, body and organisation' (Martin, 1994: xvii). In the next section, we further develop our analysis by discussing how the impetus to overcome the limitations of the body-brain through cognitive enhancement is socially and morally legitimated in relation to the imperative to be an economically active and productive subject.
--- Valuing productivity and performance The potential to enhance the brain is linked to the high valuation that capitalist societies put on productivity, and thus on being a productive person. An article in the New Yorker entitled 'Brain Gain: The Underground World of "Neuroenhancing" Drugs' (Talbot, 2009) tells of a Harvard graduate who as a student regularly took Adderall as a study aid. 8 His summing up of this: 'Productivity is a good thing'. Another account in this article tells of someone who works with a colleague who takes modafinil and, in contrast to them, is seen by their boss as a problem 'for not being as productive'. A third story is of an older person experimenting with modafinil, who believes he is 'performing a little better'.
The moral of this particular smart drug tale is that productivity is good, and hence achieving it through pharmaceutical means is also good. The rhetoric of productivity, and the expectation that the individual will work on their ability to be productive, legitimates the use of smart drugs. Imaginaries of cognitive enhancement are closely entwined with cultural norms and values of productivity and performance. At the level of the individual, there is an expectation that in order to legitimately participate within society, one has to be a productive person. For example, this underpins the argument of the Academy of Medical Sciences report Human Enhancement and the Future of Work: Enhancement could enable more people to work at their full biological capacity and to meet necessary entry requirements for an occupation, which could result in a rise in standards or potentially greater opportunity and diversity at work. Individuals with lower cognitive abilities tend to have less choice of occupations, but enhancement may enable them to compete and thus have greater choice. (Academy of Medical Sciences, 2012: 44) This articulation indicates a key shift in the contemporary relationship between the individual employee and their place in the labour market. The language is that of choice and competition, but in the sense that the individual, in order to have greater opportunities in their working life, has a 'choice' of enhancing themselves so they can better compete with others. The significance of this can be further seen in imaginaries of cognitive enhancement that find expression in popular culture. One of the straplines for the film Limitless is, 'What if a pill could make you rich and powerful?'. Similarly, newspaper articles and blogs are headed: 'Nootropics [substances that improve mental function]: Can These Smart Drugs Super-Charge Your Career?' 
('Nootropics', 2013); 9 '"Smart" Drugs Are Coming to the Office - to Make You Sharper, Stronger...Better' (Metro, 1st June 2016); 10 and 'How (and Why) to Use Nootropics to Boost Productivity and Performance' ('How (and Why) to Use Nootropics', 2016). It is noticeable that these headlines address the individual worker directly. In this framing, then, there is a sense in which the worker is expected to want to make themselves more employable, in order to achieve their own full potential and self-actualization. From this perspective, it is the worker who has to ensure that they are fit for work - in other words, to 'choose' to enhance themselves and make themselves into a productive body and brain. In early industrial work, the worker was fitted to the job, in the sense that it was recognized that different workers had different levels of skills and abilities. Techniques were devised to measure these and thereby 'sort' workers into their appropriate places within the labour market (Hollway, 1991). However, in the contemporary world, for many there has been a shift of responsibility towards the individual to make themselves 'employable', to take responsibility for their own wellbeing such that they are 'fit for work' - effectively, to make of themselves a marketable asset that can be sold to the highest bidder in the employment market (Dale, 2012; Dale and Burrell, 2014). This elision between productivity as being something that we do 'at work', and productivity being a characteristic that we have or are, can be discerned in entries in blog discussions amongst those who take modafinil/Provigil. We can link these commentaries on the benefits of the drug in terms of productivity back to the earlier discussion on the need for the flexibility of workers' bodies. The imaginary of the plasticity of the brain, harnessed through the use of cognitive enhancers, becomes yoked to this impetus to continually provide potential for ever-greater performance and productivity.
Thus the speculative nature of contemporary capitalism (Cooper, 2011) is worked out through the possibilities of enhancement: 'The Real-Life Limitless Pill? Drug Helps Adults Learn as Fast as Children by Making the Brain More "Elastic"' (Woollaston, 2014). Here the plasticity of the brain has its counterpart in the idea of human 'resourcefulness' - the idea that human qualities can be extended and enhanced, that they are not finite or fixed as are other assets: The working subject is always capable of 'more', of 'becoming better', of learning, creativity, knowledge and 'talent' beyond that which is currently performed. (Costea, Crump, and Amiridis, 2007: 250) Thus, the sorts of traits that are explicitly valued here in the employee are also those that are targeted through pharmaceutical enhancement. Enhancement drugs therefore do not solely increase the productivity of an individual in quantitative terms, but also enable employees to demonstrate that they are constantly '"switched on", present, alert, creative and enthused' (Fleming, 2015: 67). In other words, employees have to not only be productive, but look productive. Enhancement drugs aid in this because of their potential to increase focus and attention, even where an employee would otherwise be demotivated or uninterested. For example, in one online article we are informed that 'Lucas Baker, a Switzerland-based software engineer with a large tech company, takes nootropics every day. He says it helps him maintain focus, especially on projects he might otherwise put off. "When I find an unpleasant task, I can just power through it," he says' (Roose, 2015). In analysing the connections between imaginaries of cognitive enhancement and the valuation of production and performance, we can discern the interplay between the enhancement of the worker's body and the wider political economy.
Furthermore, following Foucault (2008: 242), we can suggest that these imaginaries denote 'an extension of the economy to the entire social field'. In the next section, we move from considering the individual social relations of enhancement to a broader reflection on what this means for a political economy of enhanced brains. --- Enhancing the economy As we have seen above, imaginaries of cognitive enhancement are closely entwined with the cultural norms and values of productivity and performance, and at the individual level there is an expectation that in order to legitimately participate within society, one has to be a productive person. In this section, we move from considering the individual (enhanced) brain to look at wider aspects of enhancement in relation to Hogle's (2005: 703) point about the 'circuit of enterprise, biology, medicine and culture'. At a collective level, this means that bodies-brains are themselves seen as the source of productivity and performance for society. For example, a policy discussion paper on cognitive enhancement produced by the British Medical Association asserts the connection between the economy and cognition: An overall increase in cognitive ability in society could also lead to competitive advantages in the cut and thrust world of international trade and commerce. Fukuyama, who vehemently opposes the use of enhancements nevertheless acknowledges that 'a society with higher average intelligence may be wealthier, insofar as productivity correlates with intelligence'. (British Medical Association, 2007: 18-19) This fits with a commonly reiterated view that so-called 'advanced' or post-industrial capitalist economies are more dependent on knowledge, and its associated qualities of creativity and innovation, sometimes described as the 'knowledge economy' or labelled 'cognitive capitalism' (Vercellone, 2005). 
This impetus can also be seen in the film Limitless, where cognitive enhancement is presented as enabling and extending a number of intellectual capacities - from playing music to writing books - but its particular emphasis is on becoming so smart/cognitively enhanced as to be able to work in and command the world of corporate takeovers and financial markets. Moreover, in Limitless (and also in Lucy), cognitive enhancement enables the mind to 'read' everything that is going on around it, making the world legible to the human brain. As Eddie, the central character in Limitless, explains, 'Everything I had ever read, heard, seen, was now organized and available'. To be able to 'read' the world is to be able to understand it and thereby have control over it, to predict and change it. Ultimately, this is a form of cognitive knowledge and control that is to be put to work in relation to the economy. Turning from the Hollywood fantasy of enhancement to its more mundane applications in everyday life, a commonality can be discerned in terms of the orientation towards the economy - especially in enabling the organization and management of employment. Although sleep may be regarded as a precondition for health and hence the ability to work, it can also be seen as the absence of productive effort, a lack on the part of the body (Crary, 2013; Fleming, 2015; Wolf-Meyer, 2012). Accordingly, scientific efforts to understand sleep/wakefulness are not just aimed at offering treatments for individuals whose lives are plagued by an inability to regulate their patterns of waking/sleeping, but are also of increasing relevance to organizations and the demands of an economic system that is geared towards 24/7 operation. It is in this context that Williams, Coveney, and Gabe (2013) discuss the desire for the customization of sleep, with the potential ultimately to make it optional.
As mentioned earlier, the drug modafinil has become most touted as a cognitive enhancer, especially since a meta-analysis of it (Battleday and Brem, 2015) led to its being headlined as 'the world's first safe "smart drug"' (Thomson, 2015). 12 However, between its strictly medical use and the accounts of its enhancement properties (it has also been labelled the 'real-life "Limitless" pill' by online pharmacies and news media accounts), its prevalent use is as an everyday regulator of wakefulness. For example, under the headline 'Sleepless in the City', The Times of India carried the subtitle: 'Modafinil, the latest lifestyle drug in Delhi, makes owls out of human nightbirds...the flip side of Working Delhi's graveyard shift' (Sharma, 2004). It is developments such as this - accounts of how modafinil is used to 'enhance' the lives of those who take it, enabling them to cope with the demands of work - that best illustrate the political economy of pharmaceutical enhancement. In 2004, the pharmaceutical company Cephalon received official clearance from the US Food and Drug Administration (FDA) to promote Provigil (its branded version of the drug) as a means of alleviating excessive sleepiness and promoting wakefulness in connection with shift work disorder (SWD), a condition associated with a significant section of the workforce engaged in long working hours or nightshift work (Kroll-Smith, 2003). 13 In 2011, Cephalon ran a new promotional campaign for Nuvigil (another variant of modafinil) targeting prescribing clinicians as well as potential consumers suffering from excessive sleepiness associated with shift work. Deploying the caption 'SUPPORTING THOSE OF YOU WHO STAY AWAKE FOR THE REST OF US', one image on its website (also reproduced in related promotional material) presented a picture of four individuals (three male and one female): a firefighter, two paramedics/clinicians, and an emergency services worker.
These familiar, respected, and important figures in the community were presented as professionals who make sacrifices on our behalf. Forfeiting what the rest of us enjoy (at least in theory) - that is, a 'normal' night's sleep - they stay awake, responding to whatever emergency situation arises. The narrative here is that the drug enables these sorts of professionals to attain a better state of alertness or wakefulness whilst on duty, remaining vigilant, attentive, and thereby effective despite working at night or for long hours. Importantly, then, these individuals embody significant social values, such as possessing authority to deal with emergency situations, expertise, courage, duty to others, and caregiving. By supporting them in their night-shift work, the drug is also portrayed as upholding those values. Moreover, to the extent that drugs such as modafinil are portrayed as supporting key social values in the context of work, such social values are, conversely, 'borrowed' by the adverts to legitimate their usage. The official Nuvigil webpages and related adverts in professional journals (such as Medical Marketing & Media or Pharmacy Today) offer a series of 'user' narratives, including that of Jenn, a 32-year-old emergency room nurse dealing with both shift work and the need to take responsibility for her family: 'I'm so tired on my shift that it's hard to do my job'. Moreover, as well as the general occupational information that her presence on the page presents (she appears dressed in a cap and gown and has a stethoscope around her neck), we are provided with some further individual background information. Jenn, we are told, 'sleeps approximately 6.5 hours during the day, waking to run errands and make dinner for the family'.
So, in addition to the social values one might typically associate with an ER nurse, Jenn not only makes a sacrifice by working at night (whilst the 'rest of us' sleep), but she also has responsibilities for others, foreshortening her daytime rest to perform domestic duties. The account suggests that to achieve this, she needs to manage the limitations of her own body, and appeals to modafinil as a solution to her problems. The drug offers a means of controlling her alertness/wakefulness, which in turn would allow her to do her job and manage the rest of her life and responsibilities. Once again, we can see an emphasis on the need for individual autonomy and responsibility - in this case over her performance in work and at home - coupled with a contribution to the collective good. 14 Of course, we all expect round-the-clock (24/7) availability when it comes to the emergency services. However, notably, the Nuvigil advertising campaign was later to include other rather different categories of workers - bartenders, waitresses, DJs, and warehouse staff - indicating a much broader scope of occupations deemed appropriate for the management of wakefulness. Thus, far from being reserved for people working in the essential or emergency services, the drug was available to support people working within all the services that modern consumer society takes for granted. On the one hand, then, the campaign reinforced the meaning and value of work in contemporary society, as we explored earlier. But on the other, it also indicated a managerial effort to realize a world in which employees' wake/sleep cycles are organized according to the demands of production and the provision of services: the desire to create an always-on body. In terms of imaginaries, what is notable about the official promotional material for modafinil discussed here is how it contrasts with the commonplace media imagery of an enhanced brain, as noted earlier.
Instead of a brain or head in effect abstracted from the rest of the body, we have depictions of real-life embodied subjects who have recourse to the drug in order to stay alert and focused on their work. The material does not promise to boost intelligence, but it does offer the prospect of enhancing the brain's ability to cope with working extended hours or at night. This is not therefore an imaginary about becoming mentally exceptional through pharmaceuticals, but rather one that envisages normal working under conditions that would otherwise lead to a deterioration of cognitive abilities. 15 Finally, another example of the negative impact on work due to the limitations of the body occurs when individuals cross different time zones - namely, the problems of jet lag. A number of remedies have been proposed to deal with jet lag, including the prescription and marketing of melatonin (the hormone that regulates sleep and wakefulness). 16 In 2010, Cephalon applied to the FDA for the approval of Nuvigil as a possible treatment for jet lag, with business travellers being a particular target of interest (Pollack, 2010). From the perspective of the individual business person, the drug might seem attractive if it allowed them to stay alert and thus work effectively whilst crossing time zones, but from a managerial/organizational perspective, this application would help maximize use of human resources. In the event, the approval of the FDA was not forthcoming, but the fact of the application remains significant when considered in terms of the broader political economy of cognitive enhancement examined here.
17 In summary, whether it is a matter of enhancing the potential to cope with extended working hours, shift work, or business travel across different time zones, modafinil can be viewed as a putative contribution to the flexibility of labour, offering the prospect of suppressing the disorder to the system that the limited body-brain might otherwise precipitate, and thereby supporting the sort of 24/7 temporalities that are scrutinized by scholars such as Crary (2013), Fleming (2015), Williams, Coveney, and Gabe (2013), and Wolf-Meyer (2009). --- Conclusion Addressing the confluence of scientific ideas about the brain, pharmacological interventions in cognition, popular culture, and everyday life, we have considered imaginaries of cognitive enhancement in relation to three analytical themes. First, we considered the cultural representations of the brain in connection with the idea of plasticity - captured most graphically in images of morphing - and the representation of enhancement as a desirable, inevitable, and almost painless process in which the mind-brain realizes its full potential and asserts its will over matter. Following this, we explored the social value accorded to productive employment and the contemporary (biopolitical) ethos of working on or managing oneself, particularly in respect of improving one's productive performance through cognitive enhancement. In developing this, we looked at the moulding of the worker's productive body-brain in relation to the demands of the economic system. Aiming to build upon previous sociological studies that have researched individuals' motivations for and views about the decision to take cognitive enhancing drugs (Coveney, 2011; Smith and Land, 2014; Vrecko, 2013, 2015), we have sought to connect the individual worker and their labouring body-brain with the contemporary neoliberal biopolitical context.
Here, we briefly consider the consequences of these arguments: first in relation to studies in the history of the human sciences, then in relation to the use of the concept of imaginaries and especially how this relates to the remnants of Cartesianism, which we have analysed in imaginaries of cognitive enhancement; and finally, we reflect on some of the implications in relation to the neoliberal working subject. The sociological approach adopted here has wider implications in terms of the history of the human sciences. Viewing the shaping of the human body in the context of work as an accumulation strategy calls for an examination of the configuration or problematic that interlinks research into the body (including the brain) and associated developments in pharmacology; the pharmaceutical industry seeking to develop markets for its products; and the nexus of social, political, and economic factors that play a role in the constitution of problems or goals for which pharmacological or other technological solutions are sought. Seen in broader historical terms, there has of course been a long line of research endeavouring to understand the human body in order to better harness it to productive effort. Examples such as the 19th-century studies of fatigue (Rabinbach, 1992), scientific management (the human body envisaged and managed as an appendage to productive machinery), and the emergence of the field of work psychology (Hollway, 1991) help set current discussions of work and human enhancement in a broader perspective. In this article, we concentrate on the relations between the notions of brain plasticity incorporated in cognitive enhancement, and how this relates to the specific conditions of contemporary employment. 
To this end, we contend that imaginaries are of key importance - in terms of the imagined goals of scientific research (such as the search for an "on/off switch" in brain research); the ways in which these ideas are represented and interpreted more broadly in society, promoted, and marketed; and the part they play in the deliberations, sensemaking, and justifications whereby individuals orientate themselves in terms of their future possibilities for action. Accordingly, in this article we have considered how the imaginaries of pharmacological cognitive enhancement reflect and reproduce a number of key aspects of the contemporary cultural, socio-economic, and biopolitical landscape. Prevalent in a diverse array of sources, imaginaries are not mere abstract fantasies, but rather a key part of how individuals orientate themselves to their future possibilities: In order to act upon the world, individuals need to be able to imagine this action and its outcomes. Imaginaries are therefore performative. There is thus an anticipatory and promissory aspect to contemporary imaginaries about the brain that seems to suggest that not only will we be able to better understand ourselves as humans by understanding the brain, but also this knowledge and its associated techniques will enable us to govern ourselves and human affairs in general, in a better way (Rose and Abi-Rached, 2013). We would argue that through the use of the concept of imaginaries, there is a possibility to move away from dualistic conceptions, such as mind/body, as well as to better understand the implications of such conceptualizations. We have analysed the Cartesian strand that runs through the imaginary of cognitive enhancement and splits cognition or intellect from the (rest of the) body. One consequence is that the focus on the enhanced brain serves to distance not only the individual's body but also the collective, the social body.
Erasing the social in this way not only reinforces the individualist subject position of neoliberalism, but also diverts attention away from substantive consideration of the coercive pressures stemming from the policies and conditions of employment that might drive individuals towards cognitive enhancement. The Cartesian strand that we perceive in imaginaries of cognitive enhancement sees technology as a means of surpassing the biological limitations of the body in order to achieve control through the exercise of the mind, along with its associated knowledge and rationality. Both Limitless and Lucy trade on the idea of unlocking the brain's potential through technology in the shape of a pharmacological substance, in the process acquiring power to act upon and hence manage the world, rendering it subject to organization and thereby control. Thus we might argue that the other side of the plasticity of the brain is the increased malleability of the world. In this connection, it is useful to note the emphasis on progress through technology that is a key trope of modern culture. Underlying this is a predominantly humanistic approach to technology as the product of human knowledge to harness the resources of the earth to human desires and designs. Moreover, the liberal view of enhancing the brain as individual 'free' choice is allied to the longstanding North Atlantic/Occidental notion that the human species inevitably seeks to better itself, to improve on the present by embracing the future - what might be seen as the 'ascent of man [sic]' - as depicted in Limitless, or even a move to a posthuman or transhuman future, as represented by Lucy. This suggests that it is somehow 'natural' for individuals to better themselves, to realize their potential, and thereby draws upon and reproduces the neoliberal view of the autonomous subject who possesses free will and, in taking responsibility for their own fate, acts as their own 'unit of enterprise' (Foucault, 2008: 226).
However, complementing this focus on the neoliberal subject, we have also considered enhancement from the perspective of the broader economic system and the accumulation of 'human capital', to argue that the 'productive body' is an economic entity, shaped in relation to other bodies and technologies and the demands that the system generates. Indeed, organizations have long demanded flexibility on the part of the workforce, to adopt patterns of working according to the demands of the systems of production. In this light, cognitive enhancement might appear as yet another means of proving or realizing one's flexibility to fit with the system, as a way to create the always-on body. 18 It is no mere accident or contingent matter of biochemistry that the drug modafinil, which is officially prescribed to keep people awake when they are meant to be focused and alert, also appears attractive to those seeking or promoting cognitive enhancement (see, for example, Coveney, 2011;Williams et al., 2008). Preventing inadvertent sleep during the day (narcolepsy), avoiding sleepiness whilst working at night (shift work disorder), avoiding or controlling excessive daytime sleepiness, or seeking to increase focus and alertness on demand (enhancement) all represent efforts to manage the functioning of the brain towards productive or performative ends. Biochemistry may be part of the equation, but so too is the contemporary biopolitical ethos in which the accumulation of capital is increasingly dependent on the accumulation of flexible brains. --- Funding The authors received no financial support for the research, authorship, and/or publication of this article. 
--- ORCID iD Karen Dale https://orcid.org/0000-0001-8881-5375 --- Notes We would like to thank the participants at the symposium on 'Minds and Brains in Everyday Life' in Edinburgh for helpful discussions; Tineke and Susanne for their support and detailed comments on the article; and the anonymous reviewers for their insightful points. 1. This view of plasticity goes well beyond recognizing that the juvenile human brain goes through a significant process of development, to emphasize the changes that take place in the brain throughout an individual's whole life course. The concept encompasses the idea of an openness to influences from both within and outside the body, both environmental and -importantly -self-determined, including the effects of brain training, various therapies, harnessing neurofeedback loops, and so on (see, for instance, Brenninkmeijer, 2010), and includes in its effects both individual and potential epigenetic changes. This, then, provides a view of opportunities and threats for the future that focus on the brain (Papadopoulos, 2011; Rose and Abi-Rached, 2013). 2. The term 'smart drugs' is a potentially misleading phrase for a number of reasons. First, in this use 'smart' refers to the assumed increase in cognitive capability, but promoting wakefulness or alertness is not the same as an increase of intellect. Second, 'smart drugs' has also been used to describe the search to develop drugs that target and treat only certain symptoms -in other words, the drugs themselves rather than the outcomes are 'smart'. Third, there are no drugs licensed to be prescribed or marketed as 'smart drugs'. Where drugs are taken for their assumed cognitive effects, these are drugs that are prescribed for other conditions (such as narcolepsy or ADHD) and taken off-prescription. 3. And indeed 'an ill for every pill', with the argument that illness and diseases come to be shaped such that they 'fit' pharmacological substances that have been developed. 4.
The articles were identified by utilizing the Nexis database of news publications; our search terms included 'modafinil' or 'smart drugs' coupled with 'brain'. The sample included the following UK national publications: The Guardian and The Observer, The Times and The Sunday Times, The Independent, The Daily Telegraph and The Sunday Telegraph, the Daily Mail and Mail on Sunday, The Sun, The Mirror and The Sunday Mirror, the Daily Express and Sunday Express, the i, and the Daily Star. 5. It is a matter of public record that Cephalon was reprimanded by the FDA for providing product information, reinforced by the behaviour of sales staff, that promoted the drug for general symptoms of sleepiness and fatigue well beyond its official authorization (US Department of Justice, 2008). In other words, the pharmacological manipulation of the brain by individuals who felt tired or insufficiently alert when they wished to be alert and attentive was becoming endorsed as a matter of individual choice. 6. Although Alan Glynn, the author of the original book that inspired the film (The Dark Fields), is clear that it was an entirely fictional drug. Similarly, the drug from Lucy, CPH4, whilst itself a fictional drug, is claimed by the film's director Luc Besson to be based upon a real chemical that is produced in pregnancy to promote the growth and development of the foetus. Web discussions show that individuals have searched for this substance online; sometimes unscrupulous retailers will claim to sell it, whilst others claim that modafinil is the real source of the fictional drug. 7. Psychiatric News (2007) 42(7): 15, available at: https://psychnews.psychiatryonline.org/doi/pdf/10.1176/pn.2007.42.issue-7. 8. Adderall is a mixture of amphetamine salts, primarily used to treat ADHD. 9. Nootropics are drugs, supplements, and nutritional products that are claimed to improve aspects of mental function (such as memory, motivation, and attention).
The term was coined by Corneliu Giurgea from Greek words meaning 'mind' and 'to bend or turn'. 10. https://metro.co.uk/2016/06/01/smart-drugs-are-coming-to-the-office-to-make-yousharperstronger-better-5917892/ (accessed 20th November 2019). 11. https://web.archive.org/web/20150215220209/http://modafinilorder.com/modalert-reviews/ 12. For a critical response to this study, see Repantis, Maier, and Heuser (2016). 13. Cephalon was acquired by Teva in 2011. 'Shift work disorder' is a medicalization of the disruptions to their sleep and wake cycles that shift workers commonly experience, including the inability to sleep during the day or 'excessive sleepiness' whilst on night shift. This has not been a prescribed use of the drug in Europe since 2011, when the European Medicines Agency decided that the potential side effects of modafinil were such that shift work disorder should not be included. Direct marketing of pharmaceuticals to consumers is only legally permitted in the USA and New Zealand. The official website for Nuvigil (www.nuvigil.com), the branded version of modafinil that superseded its forerunner Provigil, offered information concerning a variety of conditions associated with excessive sleepiness and for which the drug might be prescribed. It suggested that some 15 million Americans work outside the 9 to 5 regimen of other employees, and that up to 25% of them might have SWD. 14. 'Checking Beds, Ready for Her Own' (2012, 1 July) Medical Marketing & Media, available at: https://www.mmm-online.com/home/channel/features/100-agencies-draftfcb-healthcare/; see also, Pharmacy Today (2013, February: 19). 15. Of course, in accordance with our earlier discussion of morphing and representations of enhancement, this is not to suggest that modafinil might counter the other deleterious impacts that shift work has on the body -for which there is a growing amount of evidence.
Rather, in a sense the imaginary on offer presents the bodily consequences of shift work as merely one of excessive sleepiness, thereby potentially diverting attention away from its more serious health effects. 16. An example of the discussion of this can be seen in Fleming (2017). 17. That said, there is continued online discussion of the merits of the drug for this purpose. 18. One possible further socio-economic factor in the future potential demand and take-up of cognitive enhancers stems from the increasing automation of work and the substitution of human jobs by technology. --- Declaration of conflicting interests The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. --- Author biographies Brian P. Bloomfield

This article seeks to situate pharmacological cognitive enhancement as part of a broader relationship between cultural understandings of the body-brain and the political economy. It is the body of the worker that forms the intersection of this relationship and through which it comes to be enacted and experienced. In this article, we investigate the imaginaries that both inform and are reproduced by representations of pharmacological cognitive enhancement, drawing on cultural sources such as newspaper articles and films, policy documents, and pharmaceutical marketing material to illustrate our argument. Through analysis of these diverse cultural sources, we argue that the use of pharmaceuticals has come to be seen not only as a way to manage our brains, but through this as a means to manage our productive selves, and thereby to better manage the economy. We develop three analytical themes.
First, we consider the cultural representations of the brain in connection with the idea of plasticity -captured most graphically in images of morphing -and the representation of enhancement as a desirable, inevitable, and almost painless process in which the mind-brain realizes its full potential and asserts its will over matter. Following this, we explore the social value accorded to productive employment and the contemporary (biopolitical) ethos of working on or managing oneself, particularly in respect of improving one's productive performance through cognitive enhancement. Developing this, we elaborate a third theme by looking at the moulding of the worker's productive body-brain in relation to the demands of the economic system. |
INTRODUCTION Kuntowijoyo made a striking observation about Madura while researching the island north of Java: "Madura is unique" (Kuntowijoyo, 2002). The word unique denotes a distinct meaning, form, and type (Nasional, 2008). That uniqueness lies in the values, culture, beliefs, and social structures essential to how the Madurese see the world. One expression of it is the practice of inheriting Roma Toah, a custom of the Madurese people. This inheritance reflects the family system that underpins the philosophy of the Madurese. Their philosophy of life, "rampak naong, banyan korong", which evokes the shade and shelter of a banyan tree (Zubairi, 2013), forms the basis of the behaviour and attitudes that guide Madurese decisions. It expresses an ethic of nurturing and caring for family ties above all, particularly towards female family members. Disturbing behaviour by outsiders towards a family member is considered a disturbance to the whole family (Takdir, 2018). This outlook is reflected in various aspects of Madurese community life. For example, the intergenerational transfer of wealth (inheritance) in the Madurese community follows rampak naong more closely than the rules of their religion, Islam. Whereas Surah An-Nisa verse 11 allots a son twice the share of a daughter, in Madura a woman receives a larger share than a man. In the Roma Toah inheritance, moreover, Madurese women are given the privilege of occupying the Roma Toah building. Judging from the form of the tanean lanjheng house, the Madurese can be described as a religious society. The musalla in every house is used not only as a centre for religious ritual but also as a place to resolve the various problems of daily life, led by a kyai or cleric (Hipni & Nahidloh, 2015; Rochana, 2012). The spirit of religion underlies the solution to every problem in the Madurese community.
However, the Madurese give precedence to their traditional model of transferring wealth between generations. The Roma Toah form of inheritance is the product of a combination of culture and Islamic law, a combination that yields a collective inheritance model rather than one based on the individual principles of Islamic inheritance. The interplay between what ought to be (das Sollen) and how Islamic law is actually practised in society (das Sein) is a never-ending study, because life's problems constantly develop while the revelation of Islamic law's sources has been completed. Ngatawi Al-Zastrouw (2017) sees this combination as a natural model for the development of Indonesian Islamic law. Muhammad Mutawalli even argues that the combination of adat and Islamic law has long been practised in the legal life of Indonesian society (Mutawali, 2022; Syaikhu et al., 2023). Moreover, the relationship between Islamic law and human rights has been studied in a broader scope (Mukharrom & Abdi, 2023). This research is interesting because it seeks to reveal the legal facts behind the romanticism of Islamic law and living law. By understanding the legal facts behind the practice of Roma Toah inheritance, the ratio legis, or model of law formation, of the Madurese people can be understood. We therefore focus on two main issues. First, how is the Roma Toah inheritance divided in the traditions of the Madurese people? Second, how are the values it contains socially constructed? --- METHODS This is qualitative research with a socio-legal approach. The socio-legal approach departs from the awareness that law resides in an ever-evolving culture, so the social sciences become necessary to this research. The socio-legal model is multidisciplinary: legal research does not look at the law from a normative perspective alone, but asks what lies behind it that has never been shown in legal formulations (Banakar, 2019).
This study uses social construction analysis to understand the elements that build the reality of Roma Toah inheritance within the social life of the Madurese people (Berger & Luckmann, 1990). Social construction theory developed out of phenomenology, which was born as a counter to the established theories of the social paradigm, especially the theory initiated by Emile Durkheim. Phenomenology was originally a philosophical theory, central to the thought of Hegel and Husserl, later continued by Schutz and refined by Weber, who made it a reliable tool for analysing social phenomena in society (Syam, 2005). Social construction theory also draws on structural functionalism, which sees social reality primarily as the function of structure in every action: individual actions result from the structure that surrounds people in their social life, following its rhythm like a product of it (Berger & Luckmann, 1990). Social construction theory serves as the analytical tool for the Roma Toah heritage because it encompasses social facts, and it identifies the cultural and social elements that shape Roma Toah inheritance among the people of Bangkalan Regency. The approach posits three processes through which a social reality becomes real: externalization, objectification, and internalization. These three phases undergo a dialectic that runs simultaneously in forming the legacy of Roma Toah. Data on the practice of Roma Toah inheritance and the meanings the Madurese people attach to it constitute the primary data of this study, while information on Islamic inheritance drawn from books and other sources constitutes the secondary data.
--- RESULTS AND DISCUSSION --- Madura Culture The culture of the Madurese people is inseparable from religion as the value underlying their outlook on life. For the Madurese, religion is a fundamental way of life and an identity; obedience to religion has become the identity of the Madurese people. Traditional Madurese dress reflects this: the samper (a long cloth, usually worn by women as a lower covering), kebaya, and burqo' (veil) for women, and the sarong and songko' (cap) for men, have become symbols of Islam in the countryside (Rifai, 2006). Every Madurese house, moreover, contains a prayer room that functions not only as a place of worship but also as a place to resolve various problems of life. Besides religion as the identity of Madurese life, malo is a condition the Madurese strenuously avoid. Malo is roughly equivalent to shame in Indonesian; however, shame in the ordinary sense is rendered in Madurese by the word todus. The difference between malo and todus lies in the cause of the shame. Malo is a fear of being reproached or of being found to have a disgrace (Al-Muqaddam, 2015), caused by other people denying or refusing to recognize one's capacity, so that one's self-esteem is humiliated and one feels tada' ajhina (Wiyata, 2013). Causing a Madurese to feel malo can provoke counter-actions demanding the restoration of his ajhina. The Madurese strongly avoid malo in relation to both the individual and the family. Malo in the Madurese community usually involves violations of the honour of wives and of children, especially daughters, and inheritance issues. Violations of these three things have met with very harsh retribution, for the Madurese consider them despicable acts that cost face, dignity, honour, rights, and self-esteem (Rifai, 2006).
A violation that makes someone feel malo is considered otang rassah, a debt of feeling, which must be repaid in full with nyerra rassa. Demanding repayment of such a debt incurred against an individual or a Madurese family becomes a "duty" shared by all family members. The strong kinship ties of the Madurese community generate this sense of togetherness: when one family member experiences malo, all other family members feel the same malo. The strength of family ties in the Madurese community can be observed in the tanean lanjheng settlement model; the housing arrangement in the tanean lanjheng concept describes a strong and harmonious family (Sari et al., 2022). These strong ties allow mutual love and care between family members. Every family member's behaviour is directed towards maintaining adherence and respect for those who must be respected. Those held in high esteem by the Madurese are named in the adage bu pa guru rato: the Madurese must respect their mothers, fathers (parents), teachers, and rulers (Hefni, 2007). Actions that fail to show this respect are considered breaches of adhet, of manners. Of the three, however, the teacher is ranked first among those a Madurese must respect. The teacher, or kyai, is highly obeyed and respected by the community because he is a symbol of religious authority in Madura; following and obeying the kyai is considered obedience to religion. Because of the kyai's position, the Madurese family always maintains good relations with the kyai, and this message is preserved and passed on to children and grandchildren. The kyai-santri bond further strengthens the obedience of the Madurese community to the kyai. Ulama' or kyai are the religious authorities of Madurese society, and the Madurese bring their various problems to the kyai (Hannan & Abdillah, 2019). On religious, social, cultural, and political issues, and even the naming of a baby, the Madurese often ask a kyai for blessings.
That is, the clergy hold a vital position in the life of the Madurese people. --- Distribution of Roma Toah Inheritance These socio-cultural conditions of the Madurese people influence their model of inheritance distribution. The transfer of wealth between generations in Madura reflects the patterns and outlook of Madurese life: a way of life that upholds a strong family. The adage rampak naong bringin korong underlies the transfer of property. In Madurese tradition there are three forms of distributing assets: first, the distribution of sangkolan inheritance; second, the division of Roma Toah inheritance; and third, distribution according to Islamic law, which is carried out only if the first two, sangkolan and Roma Toah, were not carried out while the parents were still alive (Hipni & Karim, 2019). The division of Roma Toah inheritance, the focus of this research, is the transfer of wealth from the previous generation (parents) to the next (children and grandchildren). Roma Toah inheritance does not position any one heir as the owner of the inherited house and land; the house and land are allocated to all heirs, meaning that all descendants of the deceased have the right to use the inherited house. Mukminah expressed this model as "olle ngennengin ben mabeccek keng lok olle ajuel" (you are welcome to occupy and repair it, but not to sell it or transfer it) to people who are not heirs. Although the Roma Toah is not assigned to any one heir, the parents appoint one heir as pamolean, the one who occupies the Roma Toah. The heirs who occupy the Roma Toah are usually daughters. There are two reasons why girls become pamolean. First, girls are considered more capable of caring for their parents.
Second, in Madurese culture it is women who manage household life. By appointing a daughter as pamolean, other family members do not feel ashamed to mole, to return home, to visit the pamolean's house. As Ansori expressed it: "sopajeh lok todus mon taretan lekek mole, jalanh ngakan ka depor" (so that the brothers do not feel embarrassed when they come home and go straight to the kitchen to eat). It is different when a son becomes pamolean: the household then automatically passes to his wife or in-laws, which makes the other heirs embarrassed to return home often. In this respect women hold a special position, as a mark of the respect Madurese society accords womenfolk in social life (Bukido et al., 2022). --- The Social Construction of the Roma Toah Legacy The Madurese generally have strong religious ties. The practice of Roma Toah inheritance, which some people see as contrary to Islam, must be viewed through the people's own understanding of the practice, because failing to understand something can lead to fatal mistakes in responding to a fact. A comprehensive understanding of the Bangkalan people's perspective on the practice of Roma Toah is therefore required. As a unique entity, the Madurese community is dynamic in all its social actions. A sociological-anthropological reading is needed to analyse the elements that build the formation of the Roma Toah inheritance model. An epistemological analysis of the model is needed as a foothold to capture its meaning, which can then be analysed from the point of view of Islamic law; the istinbat process is thereby expected to produce the right legal product. Analysing the construction of Roma Toah in the realm of social action requires the concepts of social science to understand it.
Sociological-anthropological analysis of Roma Toah is used to parse and understand social action so as to produce a causal explanation of social action in society and its consequences (Syam, 2010). Roma Toah inheritance, as a social action, is a legal phenomenon living in the life of the Madurese people. To capture its public meaning, the writer must "keep his distance" so as not to fall into individual tendencies and produce less meaningful results. To understand the social construction of Roma Toah inheritance in Madurese society, its elements must be analysed through the three processes by which social reality is formed. In the concept of social construction, social reality is formed through externalization, objectification, and internalization; these three phases undergo a dialectic that runs simultaneously in shaping Roma Toah inheritance among the people of Bangkalan, Madura. Externalization is the initial phase of the social construction of social reality in a community. In this phase, individuals adapt to their socio-cultural world (Berger & Luckmann, 1990; Sulaiman, 2016), using language and action as the media of adaptation. Sometimes, however, people fail to adapt to their socio-cultural situation; an individual's acceptance of Roma Toah depends on whether he can adapt to his socio-cultural environment. --- Jurnal Ilmiah The use of language in the everyday life of Madurese society is essential as a symbol of one's attitude towards one's interlocutor. The complex character of the Madurese makes them very sensitive to the choice of words used in communication; errors of diction can cause disputes between them. In this moment of externalization, a pamolean of the Roma Toah is required to use language that is gracious and acceptable to the other family members.
The ability to communicate well, following the context of politeness, can shape language so that extended family members accept it and the Roma Toah does not become mateh obhur (an extinguished torch), a state in which the Roma Toah is no longer welcoming and the other family members no longer wish to return to occupy it. It can even lead to disputes among heirs demanding ownership rights over the Roma Toah. Such conditions threaten the Roma Toah, as a mandate from the parents, with disappearance, and they sever the rope of silaturrahim between family members. Good verbal communication, on the other hand, is not imposed on the pamolean alone; the other members must also use polite language so that the pamolean feels comfortable as caretaker. Disputes often arise when one branch of the family fails to maintain its pattern of communication with the pamolean, whereupon the other families defend the pamolean as the one responsible for looking after the Roma Toah. If this condition persists, it becomes a seed of division in the externalization of Roma Toah. In conclusion, at this moment all family members adjust to the socio-cultural inheritance of Roma Toah as a reality initiated by their ancestors or parents, though some family members may fail in the process. Besides verbal language, which is the key to successful socio-cultural adjustment during the externalization of Roma Toah inheritance, the language of action of each family member also needs to be shown: all actions and behaviour of family members are required to represent an attitude of high appreciation and respect among all the heirs. To maintain such a culture, all heirs must keep communication flowing so that the Roma Toah remains in harmony, as Hatija said: "satetanan koduh akor jek atokaran" (siblings must get along well, not fight).
There are no privileges for the male lineage or the eldest child. The male line retains the privilege of "managing" in Roma Toah culture, but it cannot "arrange", that is, act to win for itself. In the Roma Toah tradition, the male lineage becomes parembugen, the party whose opinion is asked, the place of consultation in all matters related to Roma Toah culture. Even so, language and action must remain balanced in expressing appreciation and respect as bonds of one line of descent. When part of the family returns to the Roma Toah, the outside family (those who do not live in the Roma Toah) must always defer to the family serving as pamolean; all household matters are rights handed over to the pamolean. Overbearing behaviour towards the sibling appointed as pamolean can disturb the adaptation process, because the family appointed as pamolean must maintain a neutral position and an open heart in dealing with the various characters of the other family members who return to the Roma Toah. An attitude of pa mappa wedding is essential for the pamolean to adapt to the socio-cultural situation it faces. Failure at this moment can cause the pamolean to abandon the mandate to maintain and care for the Roma Toah as a joint family inheritance; or, conversely, the outside families may no longer wish to return to the Roma Toah, so that its existence and spirit automatically disappear (mateh obhur). Such conditions can become disputes, since each family still holds rights over the Roma Toah. The explanation above describes the means of socio-cultural adaptation used by the various parties involved in Roma Toah culture. The moment of adaptation in this phase includes two essential processes: adaptation to the holy scriptures, and adaptation to the old values that have become the culture of the Bangkalan people.
As an entity that cannot be separated from the surrounding culture, religion and old cultural values are essential to community life; adapting to these two things is therefore critical to understanding how Roma Toah inheritance is constructed in Bangkalan society. First, adaptation to the holy book, the religious sources of the people of Bangkalan (the Qur'an and Hadith). For a community known for its strong religion, the holy book should guide all action, for religion measures the extent to which an action follows the scriptures of a given community: the holy book is the barometer used to legitimate what is "right" and "wrong" in the acts of a religious community. At the level of the Bangkalan people, however, this barometer and legitimacy do not refer directly to the Qur'an and Hadith as their holy books, but to the opinion of the kyai, who are considered able to translate the meaning and content of the sacred religious texts into guidance. The phrase bupak babu ghuru rato cakna kaeh is a Madurese expression and teaching that names those whom Madurese culture respects: mothers, fathers, teachers, and the government. It also carries another teaching, obedience to a kiai, conveyed in the words cakna kaeh (the word of the kiai). The kyai is even treated as a reference book in various matters, as Sayuti expressed: "Tang ketab jiah keaeh" (my reference book is the kyai). In general, Madurese society prefers the dawuh, the pronouncement, of a kyai, and this opinion becomes the basis of legitimacy for its actions. This holds for the general public, who lack sufficient religious knowledge to draw directly on the primary sacred texts; even those with the ability sometimes still choose to consult their teacher or a kyai with a higher level of knowledge.
Likewise, obedience to teachers has a basis of legitimacy in the scientific tradition of religion (for example, the books that serve as references for the etiquette of interaction between teachers and students, which form the curriculum in Islamic boarding schools). Recitation, imtihanan, routines, and madrasas in Bangkalan are the media through which these teachings are socialized, so the legitimacy of a kyai as the reference in religious matters becomes ever stronger for the people of Bangkalan. The stance of the Bangkalan ulama' towards Roma Toah inheritance reflects this: in the opinion of some clerics, the existence of Roma Toah, as a transfer of wealth between generations in Bangkalan society, does not contradict Islam, the religion adhered to by the people of Bangkalan. The opinion of Madurese scholars who do not question the existence of Roma Toah is strong legitimation for the survival of Roma Toah inheritance in Bangkalan. Beyond not verbally denying the practice, the Madurese clerics themselves practise Roma Toah inheritance in their own lives by appointing a gegenten (substitute), a successor to the Roma Toah and the place to which all heirs and successors of the da'wah return, taking care of the pesantren and other religious matters. Such behaviour legitimates the Bangkalan people in maintaining the Roma Toah as a cultural field for their extended families. These conditions persist, so the reality of Roma Toah becomes natural to the community through frequent hearing and witnessing in their social interactions: when visiting their sons at Islamic boarding schools, paying respects to the kyai, or reading genealogies at haul events, Madurese people routinely encounter the reality of the Roma Toah.
The saying of the Bangkalan people, "sapah se deddi gegenten, sapa se deddih pamolean" (who will be the replacement, who will be the pamolean), is usually asked when the community hears that a kyai has died. Besides the kyai serving as a reference for the community in both speech and behavior during this externalization phase, the scriptural legitimacy for the practice of Roma Toah is found in several social values embedded in Roma Toah inheritance, namely the injunction to maintain the bonds of silaturrahim (kinship ties). As stated in Surah An-Nisa', verse 1: "O people, fear your Lord, who created you from one soul. Allah created his partner from himself, and He multiplied men and women from both of them. Fear Allah, in whose name we ask one another, and maintain family relations. Verily, Allah is always guarding and watching over us." In the gathering of common descendants within the Roma Toah culture, kinship ties are mutually strengthened through the transfer of cultural values conveyed verbally by the older members to the younger ones. Such a process gives birth to a strong family union. Beyond being characteristic of the Madurese community with its strong family ties, the sacred texts of Islam also strongly advocate the unity of the Muslim Ummah and oblige its adherents to protect it from division. In other words, the behavior of the Bangkalan people in the Roma Toah culture has a basis of legitimacy in their religion. The externalization phase described above is a process of adjustment to the social values contained in Roma Toah inheritance. In addition to these values, the determination of Roma Toah inheritance while the testator is still alive has a legal rationale in Islam, found in the reference books the Madurese clerics use in issuing fatwas and deciding the religious issues of the Madurese people.
This is discussed in the next chapter, in the analysis of the opinions of Madurese scholars about Roma Toah. In the realm of Madurese culture, the distribution of Roma Toah inheritance, carried out while the parents or testators are still alive, is considered an economic "strategy" as well as a safeguard in case their offspring fall on hard times. For example, when a male heir divorces, having followed Madurese tradition by living at his wife's house, the Roma Toah becomes his destination for settling down. Or when some descendants have no luck economically and have no place to live, the Roma Toah is a shelter for those who cannot be economically independent. In the context of externalization, parents' attention to the economic welfare of their heirs, by making their house a joint inheritance (Roma Toah), finds a strong basis in Islamic religious teachings (Moesa, 2007), because Islam is deeply concerned that its adherents leave their offspring capable in all areas of life and not a burden on other people. Thus the legitimacy of the holy book, as the process of adjusting the individual to the socio-cultural world through the legitimacy of the religious scriptures adopted by the people of Bangkalan, namely Islam, has strong relevance: the practice of Roma Toah inheritance is a cultural process of the Bangkalan people carried out consciously, without compulsion. Second is adjustment to old traditional values. This adjustment takes two forms: acceptance and rejection of the old living values. Acceptance can take the form of playing an active role in the process governing the distribution of Roma Toah; accepting family members may actively participate in carrying out, and even socializing, the old traditions that support the existence of Roma Toah in the culture of the Bangkalan people.
One form of the heirs' acceptance of Roma Toah inheritance in the life of Bangkalan, Madura, is the behavior of individual adjustment to old traditions, regarded directly as an effort to preserve the values embodied in Roma Toah inheritance. One such behavior is the habit of mole (returning home) to the Roma Toah, carried out on Thursday afternoons and commonly called amalem jumaten. This mole habit is a way of looking in on one's parents while they are still alive; after a parent has died, it becomes tilik, mole ka bengkoh toah (going home to the Roma Toah). The tradition of going home on Friday night is carried out by married families bringing modest gifts, the wife walking in front with a ter ater on her head while her husband follows behind. If the mole habit is neglected, the sibling who serves as pamolean will usually ask about the omission. The Friday-night mole is carried out by those whose homes are close to the Roma Toah; for those who live far away, for example those who have migrated outside the area, the homecoming tradition is the form their acceptance of Roma Toah takes, as a symbol of strong family ties. For the Bangkalan people, Roma Toah is seen not only as a symbol of family ties but also of the continued presence of deceased parents, so the tradition of going home on holidays is highly anticipated by migrants from Bangkalan, who consider returning home to visit their parents a form of devotion. As the number of families grows larger, mole and homecoming activities are usually filled with efforts to preserve family ties through lir bilir (telling the genealogy). The genealogy is preserved verbally at the moment of mole or mudik; in some families, the Madurese family tree is written in a book that records the distribution of family members across several areas.
This is done to preserve the family tree so that it is not forgotten and, at the same time, to maintain friendly relations between family members. In families with greater economic capacity, or families with glorified lineages (for example those of a bhuju', a kyai, or other prominent figures), one of the moments that unites the family in the Roma Toah is the haul event for the parents who first formed the Roma Toah. Here the family's acceptance of Roma Toah takes the form of active participation as shahibul hajah at the haul event. In addition to accepting the old tradition that is considered good, in this phase some family members do not accept the old traditional values embedded in the Roma Toah culture. The heirs' non-acceptance takes the form of actions that promote disharmony in family ties, in different forms depending on the degree of rejection. An extreme form of refusal is questioning and suing for one's rights in the Roma Toah, an action that causes the Roma Toah to cease being a family unifier. Besides suing for ownership rights, rejection can take the form of "fear" of occupying the Roma Toah: worried that other family members will contest their rights, people leave the Roma empty and unoccupied. Such cases usually occur with an ancestral Roma Toah that has not become a pamolean, or where those caring for the Roma Toah have no children or move house. One who refuses will decline appointment as pamolean by the extended family meeting. --- Objectification of Roma Toah's Inheritance Value Objectivation is the second phase in constructing the reality of Roma Toah inheritance: the individual's interaction with the socio-cultural world that surrounds him.
This objectivation presupposes two realities: the individual as a social being with his own subjectivity, and another reality outside the individual. This other reality becomes objective for the individual's self-world and appears different, so that an intersubjective relationship forms between the two realities, subjective and objective. The relationship between the two occurs through the process of institutionalization. The process of objectivation of the reality of Roma Toah inheritance can be explained as follows. First, the Roma Toah, a building and land serving as a dwelling for people and ancestors (bhuju'), is considered to have a meaning beyond property and residence. The people of Bangkalan see the Roma Toah as a building with a magical meaning, as a legacy of their parents and bhuju'; because it is such a legacy, a particular model of interaction is required, different from how one treats a house one bought oneself. Treating the Roma Toah well is believed to bring sabeb (barokah, blessing) from the parents or bhuju' who left the house. Conversely, mistreating the Roma Toah, causing disputes with relatives, and committing disobedience are believed to result in tola or belet (karma). Belet can mean failure in a person's life: a low economic level, a disharmonious family, unsuccessful offspring, or even an incurable disease. In short, those struck by belet experience a bad life. This meaning is acquired through trust in parents and in the land. Parents are regarded by the Madurese as prince katon (a visible god). As explained above, obedience to parents in the religious context has a strong basis of legitimacy in the two sources of Islam, the Al-Quran and Hadith. The intersubjective process in Roma Toah inheritance thus has a solid logical basis in the culture of the Bangkalan people, who are known to be firm in upholding their religious principles.
When parents decide that their house will become a Roma Toah, all family members hold to it firmly and see it as a reality that must be maintained. Likewise, when parents have appointed one family member as pamolean, to take care of the Roma Toah, the others respect and obey the parents' decision as a sacred order to be followed; disobedience to the order marks one as a child disobedient to his parents.
The same model of interaction is also shown toward the legacy of the ancestors (bhuju') in the form of Roma Toah. Indeed, a Roma Toah left by a bhuju' receives even more sacred treatment than a legacy of one's parents, because a bhuju' is assumed to possess sacredness and is considered a wellih (guardian). Inappropriate treatment of the Roma Toah or other remains of a bhuju' is believed to bring about belet, which is believed to come more quickly, sometimes immediately. Therefore the Roma Toah left by a bhuju' is treated with special care. Apart from the belief that the bhuju' is a holy person, many family members are already bound to such a Roma Toah, so it is treated more specially; for example, the haul, as the symbolic occasion of the mole, is carried out on a larger scale, inseparable from the ever-growing number of families who mole. In addition to parents and bhuju' as objective realities with a magical religious meaning, occupying a unique position in the subjective world of the Bangkalan people, land in Madurese culture is also considered to hold a unique position. Land is not merely property with material and economic value; more than that, it is considered an entity with magical value, requiring special treatment in one's dealings with it. The prohibition on selling sangkolan land rests on the Bangkalan people's belief that doing so will bring economic hardship in the future. In addition, the Bangkalan people consider land a marker of self-identity: concrete evidence of the origin of the "self" can be found in, and attached to, a person's birthplace, and the Roma Toah is the symbol that represents this. Second, in the context of constructing a reality, all of the above is known as the process of institutionalization, the process of turning awareness into action.
In the context of Roma Toah inheritance, the process of turning awareness into action takes place in interpreting the meaning of the Roma Toah in the Bangkalan community as a legacy of their parents and bhuju'. As mentioned above, the meaning enters the realm of consciousness and then manifests in action, and these actions of the Bangkalan people have a legal basis in their religious sources. Third, after awareness of Roma Toah inheritance is embedded in people's cognition and then becomes active through institutionalization, all the values attached to the Roma Toah become the guide for their behavior: what they are aware of is what they do. Thus their actions regarding the Roma Toah have a logical basis; they are not reckless acts or mere conformity. In the context of ancestral Roma Toah inheritance, accepting their parents' decision and preserving the Roma Toah inheritance with their family is a logical action with specific goals. However, the shift in knowledge and contact with the modern world influences the logic of Bangkalan society, and the Roma Toah tradition is experiencing a shift in urban areas. Fourth, after the reality of Roma Toah becomes objective and people's behavior toward it has undergone this conceptual process, over time these actions, in the form of obedience to the decisions of parents (a Roma Toah appointed by the parents) and ancestors (a Roma Toah inherited from the ancestors), together with all actions supporting the continued existence of the Roma Toah, automatically undergo habitualization. That is, all these actions have become part of daily life and are institutionalized into habits among the people of Bangkalan; the actions become mechanical and are carried out without further conceptual deliberation.
The whole process depends on the agents' role in carrying out their function of raising awareness of, institutionalizing, and habitualizing Roma Toah inheritance. The more often this process is carried out, the stronger the existence of the Roma Toah becomes: the value and spirit of the Roma Toah, as a symbol of interaction between families, is enacted more frequently, so that the instilling of awareness, institutionalization, and habitualization automatically grow more robust in the life of the people of Bangkalan, Madura. The agents in this process are religious authority figures in Bangkalan. In the narrower context, the agent is a kyai whose words the family takes as a reference, and who usually has teacher-student ties with the extended Roma Toah family. Moments of public recitation, recitation at Islamic boarding schools, madrasah, musalla, manaqiban, imtihanan, yasinan, and isra' mi'rajan are opportunities commonly used by clerics to discuss silaturrahim, obedience to parents, respect for the parental legacy, blessings, kualat, and so on, all of which are values contained in the spirit of Roma Toah. In addition to the kyai, another agent in this objectification phase is an oreng seppo, an elder prominent in the Roma Toah environment whose words serve as a reference for the Roma Toah family, usually the oldest male family member. He holds this role because, in the Roma Toah tradition, a man manages the household even when he lives outside the Roma Toah. The moment of socialization carried out by this agent is the mudik at Eid al-Fitr or Eid al-Adha: when the whole family gathers for the homecoming, this agent conveys the importance of silaturrahim and abilir katoronan, telling stories about the lives of the parents and the bhuju'. On that occasion, this agent usually moles to the Roma Toah if he does not live there.
Meanwhile, if the agent is himself appointed pamolean, his role in preserving the Roma Toah by transmitting the cultural values underlying its formation becomes easier: the objectification process can occur whenever a family returns to the Roma Toah. --- Internalization of Roma Toah's Inheritance Value After objectivation makes Roma Toah an objective reality and gives rise to a process of intersubjective interaction, the following process is the moment of internalization. Here the individual carries out a process of self-identification within his socio-cultural world, withdrawing the social reality that has undergone objectification into his own subjective world; the objective social reality of Roma Toah enters the world of subjective reality, and the human self is identified within the socio-cultural world. The process of self-identification in the socio-cultural world is the withdrawal of the objective meaning of the Roma Toah into the person of the family member. This automatically creates a demand to position oneself as part of the Roma Toah family, giving rise to a "real" attitude when together with fellow members of the Roma Toah family, while a different attitude may be shown toward people who are not part of the Roma Toah. Such is human nature, tending to unite and group with individuals who share something in common. In the context of Roma Toah, family ties give members a special bond in social interaction and in all aspects of life, with the family group in the Roma Toah as the point of reference. The adage rampak naong banyan korong describes a strong, mutually protective bond between families in one tie, while a different attitude governs interaction with outside families unrelated by a Roma Toah bond. That is, Madurese people have standards of behavior in their social interactions with "other" people.
The attitude of ajegeh tengka, being careful in behavior, is always put forward to protect the good name of the extended family, and keeping the family from malo (shame) is prioritized in every interaction with distant family and outsiders. Physical harm, or harm to the family's good name, inflicted by an outsider (someone without family ties) leaves a wound in the extended family once the extended family knows of it. A limiting, careful attitude toward people outside the family sphere is therefore necessary to prevent anything that might damage or harm the person and the family. The internalization process also occurs among individual family members within a Roma Toah bond. Self-identification arises in an attitude that presupposes being a good family member, embodying one's role in one's status (position) in the Roma Toah. For example, a person appointed by the parents or the extended family as pamolean will hold up a behavioral "mirror" to control his interactions with other families. A pamolean must be able to embrace all members of the Roma Toah family, because an attitude that does not befit a pamolean can endanger both his standing as pamolean and the Roma Toah itself. The expected attitudes are being easy to communicate with for the family, lemes beuh (not arrogant), andhep asor (polite), and having the attitude pa mappa pappanah geddeng, meaning that a pamolean is "forced" to be like a banana leaf: neither limp, quick to change opinion and stance, nor rigid in attitude. This attitude then proves helpful in various matters relating to the Roma Toah; for example, in dealing with the various characters of family members, a pamolean will not be easily controlled by any one of them. Conversely, a pamolean whose attitude does not befit a pamolean can cause problems later for the continuity of the Roma Toah: other family members may feel uncomfortable, so the coveted family atmosphere of the time when both parents were alive cannot be recovered.
Families who return to such a Roma Toah feel no different from visiting someone else's home. If this condition persists, it affects the existence of the Roma Toah itself: the Roma Toah rarely receives visits from family members, which can lead to disputes over rights to it. Such a pamolean is said to be lok kaop dedih pamolean (not worthy of being pamolean). In addition to the internalization process of the pamolean, the Roma Toah culture also requires other family members to adopt attitudes befitting the visiting family. The attitudes of andhep asor, taoh ajhinah dhibi', and politeness are prioritized in interactions among family members, above all in interacting with the pamolean, the "host" at the Roma Toah. Breaches of the family's values of decency receive a direct reprimand from other family members, especially from the oldest male, who holds the right to rule in the Roma Toah family. From all these descriptions of the internalization of the reality of Roma Toah in the life of the Bangkalan people, the value of harmonious family life within one family bond is the starting point of the existence of Roma Toah in the extended families of the Bangkalan people. Against the harsh reality of Madurese life, where a tradition of violence (carok) among families arises from inheritance disputes, parents (the makers or first owners of a Roma Toah) make their residence a Roma Toah in order to protect the extended family from anything that would bring the family malo. The results of the data mining support this view: the first thing Madurese parents enjoin upon their children is a message about harmony, caring for, and not conflicting with fellow family members.
--- CONCLUSION From the explanation above, it can be concluded that the legacy of Roma Toah survives on the basis of the Madurese community's local wisdom, resting on two interrelated aspects: maintaining family ties and the family economy. Using social construction analysis in three phases, externalization, objectification, and internalization, the Roma Toah inheritance is built on a harmonious blend of culture, society, and religion through the legitimacy of traditionalist Madurese Ulama, so that it is considered a system that does not conflict with Islamic values in maintaining family and economic integrity.
INTRODUCTION According to a view widely held in the media and in public discourse more generally, online hating is a social problem on a global scale. It has been claimed to be at least partly responsible for the assassination of a well-known politician [Paweł Adamowicz, the mayor of one of Poland's biggest cities (Nyczka, 2019)]; for various professionals having to flee their homes in fear for their safety (McDonald, 2014; Parkin, 2014), as well as for suicides among private individuals (Marcus, 2018). The negative influence of online hating seems to extend even beyond those who are its direct target: available data suggest that merely witnessing online hating is sufficient for one's subjective well-being to significantly decrease (Keipi et al., 2017). Finally, online hating seems to be an integral element of the general phenomenon of post-truth and fake news, much discussed in both scholarly literature and media outlets. Consisting in expressing unargued-for negative assessments of others, online hating thrives in an environment where "public opinion is more influenced by fascinating emotions and subjective beliefs than by objective facts" (Scardigno and Mininni, 2020). However, thus far there has been little scientific literature on the subject (Lange, 2007), and, to our best knowledge, there is not even an established scholarly definition of online hating and online haters in the first place. The purpose of this manuscript is to address this gap by proposing an operational definition of both online hating and online haters. --- ONLINE HATING AND RELATED PHENOMENA One reason why there has been so little research dedicated specifically to online hating and online haters seems to be that online hating is often seen in academia as a mere variant of other online phenomena, most importantly trolling (March and Marrington, 2019), cyberstalking (Fearn, 2017), and online hate speech (Ortiz, 2019).
This would explain the fact that while the public seems to see hating as a problem as serious as those three phenomena, the latter have received scholarly attention far more often than the former. In other words, to many scholars it is not yet clear that online hating is a separate phenomenon, so it is not yet clear whether a distinct definition is needed or even possible. However, the evidence we have obtained suggests that the terms "hating" and "online hating" are often used to denote a phenomenon that is distinct from what is usually called "trolling," "hate speech," or "cyberstalking" (and, in fact, from any other online phenomenon identified in scholarly literature), and which is at the same time of considerable social significance. This phenomenon is therefore worth scholarly attention in its own right. Our evidence comes from three sources: scholarly articles, media accounts, and ethnographic interviews. We conducted systematic searches of both scholarly articles and media reports through the EBSCO Host database, looking for scholarly articles/media reports published within the period 2005-2021 that feature the terms "online" + "haters" (33/154), "online" + "hating" (126/50), "online" + "hate" + "speech" (640/528), "online" + "trolling" (357/368), "online" + "trolls" (359/934), "cyberstalking" (406/134), and "cyberstalker" (15/14), with any duplicates removed automatically by the system. We then reviewed this material, taking into consideration additional articles and media reports on the above topics known to us from our previous work. The ethnographic interviews were conducted specifically for the purpose of this study. The respondents (N = 67) were graduate and undergraduate students at the University of Wrocław attending lectures given by one of the authors of the study.
The interviews were conducted during classes, with participants giving their responses anonymously on unsigned sheets of paper. For the sake of both time and anonymity, we abstained from asking demographic questions; however, the pool from which our sample was drawn allows us to estimate that most of our participants were women between 19 and 23 years old. Overall, our interviewees clearly recognized hating as a distinct phenomenon, were able to give concrete examples of it, appeared to have witnessed it first-hand, and in some cases reported having been its objects. Some participants admitted to having engaged in online hating in the past, expressing attitudes toward these actions ranging from regret to satisfaction. In what follows, we will use data from these interviews, as well as the data obtained from our surveys of the existing scholarly literature and relevant media accounts, to spell out the differences between "online hating," "trolling," "hate speech," and "cyberstalking" by comparing online hating with each of these three phenomena. The purpose of this comparison is not only to distinguish hating from those other phenomena but also to help reach a definition. The comparison will be organized along three axes: the purpose, the means, and the attitude. For convenience, from now on we will use the terms "online hating," "hating," "online hater," and "hater" interchangeably. The purpose of online hating, first and foremost, is to publicly express a negative attitude toward a given person or object. As such, an act of hating is considered successful even if it provokes no reaction in others whatsoever. This clearly distinguishes hating from trolling, hate speech, and cyberstalking alike, all of which do aim at provoking certain reactions in other people.
The purpose of trolling is to provoke a verbal reaction from the users of a certain platform by engaging them in a debate (Golf-Papez and Veer, 2017; March and Marrington, 2019). Hate speech aims to induce negative attitudes toward a given social group (a race, a gender, a nation, and so on) by expressing a disparaging opinion about that group (Nockleby, 2000; Ortiz, 2019). The purpose of cyberstalking is to harass, that is, to cause discomfort to and hurt the interests of a given individual, community, or legal entity (Bocij and McFarlane, 2002; Fearn, 2017). Granted, hating may result in reactions of the kind that hate speech, trolling, and cyberstalking aim at, and some haters may even relish those. But this does not change the fact that such outcomes are neither the primary intention of haters nor the primary purpose of hating. As our respondents put it: "Hating does not require causing any reaction, or discussion"; "Hating does not aim to initiate an argument, it does not require causing any reactions... although [haters] sometimes do not shy away from arguments"; "Hating is just an intense expression of one's feelings and thoughts." The primary purpose of hating is achieved through the means of communicating verbal messages that carry a negative attitude. Among the characteristic examples of hating our subjects gave are: "Shitty song. You should never sing"; "Go and kill yourself." Similar examples were given in an earlier study conducted in the United States: "This sucks. Go die" (Lange, 2007, 7). Most likely, it is this feature that lies behind the custom of calling the phenomenon "hating" in the first place, referring to the common understanding of "hate" as "extreme dislike" (Merriam-Webster Dictionary, definition of HATE, 2021). Although trolling, hate speech, and cyberstalking may all involve communicating such messages, doing so is not necessary for engaging in these behaviors.
This is clear from the fact that the goals these behaviors aim at may be, and often are, achieved by messages that express a positive attitude or do not express any attitude at all. These may be, for instance, statements concerning a given group that are ostensibly positive or neutral (Cohen-Almagor, 2017) but false in a way that hurts that person's or group's interests (hate speech and cyberstalking) or provokes a heated debate (trolling), or both. In addition, cyberstalking does not need to involve any verbal messages at all and is often carried out through such actions as cybervandalism or identity theft (Lange, 2007; Sheridan and Grant, 2007). Finally, one feature that distinguishes hating from hate speech specifically is that, unlike hate speech (Nockleby, 2000; Ortiz, 2019), hating does not necessarily consist in expressing a disparaging opinion about a social group, and neither is it necessarily related to any political ideology. It may be disparaging without in any way referring to any ideology or to the social identity of a given person or object, and without aiming to diminish the social position of a group. As our subjects put it, hating may be purely "egoistical," for instance, by embodying an attitude of "it is bad because I do not like it." --- DEFINING HATING AND HATERS Given the above, as well as other evidence we obtained through literature surveys and our ethnographic interviews, we might define online hating as the activity of posting online an explicitly negative assessment of a person or an object, primarily for the purpose of expressing one's negative attitude toward that person or object, independently of whether this causes actual harm to a concrete person, provokes others to respond, or diminishes the value of a given social group.
This purpose distinguishes hating not only from hate speech, trolling, and cyberstalking, but also from those forms of expressing negative attitudes, such as critical reviews, that aim to provide an informed opinion about a given person or object. Hating does not aim to provide an informed opinion but merely to express a negative attitude. This is why a typical manifestation of online hating is an explicitly negative assessment that is not argued for and is therefore perceived as unconstructive. This defining feature of hating was stressed by participants in an earlier study on YouTube hating (Lange, 2007) and by our participants as well. A hater is a person who routinely engages in hating behavior, and it is reasonable to assume that such persons typically possess a common set of psychological features. It is also reasonable to assume that the characteristics of haters differ from those common among people who engage in the other kinds of online behavior described above. While haters are likely to share some features with trolls, for instance, they are unlikely to share all of them. For instance, as both hating and trolling may result in upsetting people, these behaviors are unlikely to be engaged in by people with high or typical affective empathy. But at the same time, while a troll will likely score high on cognitive empathy (without this, he or she would not be able to accurately predict what will provoke people; Golf-Papez and Veer, 2017; March and Marrington, 2019; Moor and Anderson, 2019), this is not necessary for a hater. Similarly unnecessary for a hater are Machiavellianism, i.e., "a tendency to strategically manipulate others," and narcissism, which are in turn typical of cyberbullies (Goodboy and Martin, 2015) and of those engaging in hate speech (Withers et al., 2017), respectively.
Unfortunately, there is almost no research on the psychological features of haters, and the existing literature tells us only that haters are characterized by a low sense of self-identity, self-awareness, and self-control (Chao and Tao, 2012), a lack of confidence (Bishop, 2013), psychopathy (Sorokowski et al., 2020), and high psychoticism mediated by the cognitive distortion of blaming others (Pace et al., 2021). The present research on hating (and its resulting definition of hating behavior) may anchor and provoke further studies, which could be based on the proposed systematization. --- DISCUSSION In this manuscript we have argued that there exists a distinct phenomenon of online hating and online haters that thus far has not been carefully discerned, and therefore studied, in the scholarly literature. We would like to add that while studying that phenomenon could yield results of scholarly significance, it is also difficult in methodological terms. The main difficulty here is related precisely to what, according to our interviews and literature review, distinguishes hating from the other forms of online harm that the scholarly literature focuses on, that is, its intention. This is because that intention may often be difficult or impossible to deduce from a given utterance and the context that is accessible to the researcher. Some utterances, on their surface, may equally well qualify as hating, trolling, hate speech, cyberbullying, or some other form of discourse. But such cases should not discourage one from studying online hating. Firstly, such cases exist for any form of discourse that is defined in terms of intentions, including trolling and hate speech, yet many such forms, including trolling and hate speech, are studied despite that. Secondly, one may give operational criteria that allow for qualifying an utterance as online hating based solely on its content, form, and the context that is easily available to researchers.
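Such operational criteria lend themselves to a simple rule-based sketch. The following is a minimal illustration, not part of the study: the predicate inputs (`gives_reasons`, `is_controversial_here`, `has_ideological_content`) are hypothetical flags assumed to come from upstream annotation, and the lexical negativity check is a placeholder built from examples quoted earlier in the text.

```python
def looks_like_hating(post, gives_reasons, is_controversial_here, has_ideological_content):
    """Heuristic sketch: an explicitly negative assessment that is not backed by
    reasons, not controversial in its environment, and carries no explicit
    ideological content is, most likely, an instance of online hating."""
    # Placeholder negativity check, seeded with examples quoted by respondents.
    negative = any(marker in post.lower()
                   for marker in ("sucks", "shitty", "go die", "kill yourself"))
    return (negative and not gives_reasons
            and not is_controversial_here and not has_ideological_content)

print(looks_like_hating("Shitty song. You should never sing", False, False, False))  # True
print(looks_like_hating("The mix is muddy because the vocals drown the guitar", True, False, False))  # False
```

A real study would, of course, replace the lexical placeholder with human annotation or a trained sentiment classifier; the sketch only shows how the three criteria combine.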
If an utterance gives a negative assessment of a given person or object that (a) is not backed by any reasons, (b) does not appear controversial in a given environment, and (c) does not have any explicit ideological content, then it is, most likely, an instance of hating. In closing, we would like to argue that despite all the methodological difficulties, online hating definitely deserves to be studied. This is not only for scholarly but also for practical reasons. After all, one might reasonably assume that online hating causes severe social harm, and that preventing that harm will not be possible without understanding online hating as such and implementing measures that are designed specifically with that phenomenon in mind. --- DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. --- ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Research Ethics Committee at the University of Wrocław's Institute of Psychology. The participants provided their written informed consent to participate in this study. --- AUTHOR CONTRIBUTIONS WM, PS, MK, and MD: conceptualization. WM: investigation and methodology. WM and MK: data --- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. Copyright © 2021 Malecki, Kowal, Dobrowolska and Sorokowski.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Background Improving maternal and child health is a critical priority in advancing the agenda of quality healthcare for some of the most vulnerable groups [1][2][3][4]. Despite substantial progress and the different strategies that have been implemented by different countries, the decline in maternal and child mortality remains inadequate [5][6][7]. Maternal and child mortality is largely preventable with current technology, and it is unjustly and inequitably borne by low- and middle-income countries with poorly resourced health systems [8]. Findings from the Kenya Demographic Health Survey (2014) confirm that, despite the progress that has been made, more effort is still needed to reduce child mortality and improve maternal health [9]. The quality of healthcare services plays an important role in enhancing healthcare service delivery in low-income countries [10]. Poor quality of healthcare may lead to under-utilization of services, and evidence shows that pregnant women are more likely to deliver in health facilities if they are content with the care that they receive at the service delivery points [11,12]. A study conducted in rural Zimbabwe found that poor quality of services and negative attitudes of health care workers hinder pregnant women from utilizing these services [13]. Where poor women have access to what they perceive as high-quality health care services, they increasingly seek reproductive health care services and delivery in health facilities [14]. --- Overview of the Output Based Approach reproductive health program Evidence from various studies has shown that there are significant direct and indirect cost barriers to seeking reproductive and maternal health services, including treatment of complications [8]. Furthermore, high expenditures arising from birth-related complications hinder many poor mothers from accessing health care and may push households further into poverty [15].
Two governments, Kenya and Germany, came together in 2005 to jointly support reproductive health through the Output Based Approach (OBA) Program. The purpose of the program is to expand utilization of selected reproductive health services among women of reproductive age (15-49 years). The program targets mothers who are economically disadvantaged and living in the counties of Kisumu, Kitui, Kiambu, and Kilifi, in addition to those in Korogocho and Viwandani, which are informal settlements in Nairobi County. The reproductive services offered include safe motherhood (SMH), which comprises antenatal care (ANC) attendance, caesarean section and vaginal delivery, treatment of birth-related complications, and post-natal care up to 6 weeks after delivery. Additionally, the program supports long-term family planning (LTFP) methods such as the intra-uterine contraceptive device (IUCD), implants, and tubal ligation. Equally, the program offers counselling, medical examination, and treatment to vulnerable mothers who encounter sexual and gender-based violence, as has been shown by other authors [16,17]. OBA aims to support the impoverished population through subsidized health services [17]. The program pays service providers on the basis of agreed outputs with pre-defined results, e.g. facility-based deliveries and antenatal care visits attended, rather than financing the inputs [15]. Under the OBA model, vouchers for safe motherhood (SMH) and long-term family planning (LTFP) services are sold at highly subsidized prices to prospective women (100 Kenya Shillings for both family planning and safe motherhood in Kilifi County, and 200 Kenya Shillings for safe motherhood and 100 Kenya Shillings for family planning in the other counties; 1 USD is approximately 100 Kenya Shillings).
For each voucher presented to accredited health facilities (including private providers, government facilities, non-governmental organizations - NGOs, and faith-based organizations - FBOs), services are provided and facilities are reimbursed at a fixed rate [8,15,16,18,19]. Facilities are expected to use the reimbursed funds to improve infrastructure, purchase medical and non-medical supplies, and provide incentives to facility staff, among other things. The program directly supports the beneficiaries with highly subsidized vouchers, and funding is reimbursed directly to accredited health facilities. Donabedian's theory evaluates three categories of quality of care: structure, which includes inputs such as equipment and personnel; process, which focuses on the activities carried out by the personnel; and outcomes, which focus on improved patient health such as good recovery, survival, and client satisfaction [20][21][22]. While the program has been in existence since 2005, little research has been done on patient perception of the quality of reproductive healthcare. For instance, one study on the quality of safe motherhood voucher schemes showed enhanced quality of post-birth care and a likelihood of superior quality of care for clients who opted to participate in the voucher scheme for longer [23]. That study evaluated only the postnatal aspect of care and did not address quality in its totality. Hence, there is a paucity of data on the quality of reproductive care, satisfaction with OBA services, and the impact of such programs. Therefore, this study evaluated perceived quality of and satisfaction with the services under the OBA voucher program in Kenya from a woman's perspective. Additionally, we evaluated predictors of the factors related to perceived quality of reproductive care in OBA facilities.
--- Methods --- Study area The study was conducted in Kitui, Kilifi, Kiambu, and Kisumu counties as well as in the Korogocho and Viwandani slums in Nairobi, which are the OBA program sites. The services in OBA sites are provided by public, NGO, FBO, and private service providers. All participating sites were offering SMH services (ANC, delivery, treatment of delivery complications, and post-natal care up to 2 weeks), LTFP methods, and a small number were providing SGBV services. --- Study design and tool This was a cross-sectional study conducted in OBA sites using a semi-structured interview guide administered through face-to-face in-depth exit interviews. Participants receiving OBA services were asked to describe their perceptions of the quality of services and reasons for satisfaction with the quality of services they had received in their current and previous visits. Perception was measured using a questionnaire (Additional file 1) that was developed on the basis of a literature review and suited to a healthcare setting [10,24]. The questionnaire consisted of a large number of items that were found to be imperative in measuring quality of and satisfaction with care. Women were specifically asked how they perceived the care they received during SMH visits, LTFP visits, and SGBV visits. They were also asked about the information they received, the conduct of the healthcare professionals, and the adequacy of resources and services. The items were re-grouped into 23 items measuring perception. There were two additional questions: one on whether the women were completely satisfied with the services, and one on the reasons for satisfaction or dissatisfaction. Perceived quality of services was rated on a five-point Likert scale, with 1 being "Completely Disagree", 2 "Disagree", 3 "Agree", 4 "Completely Agree", and 5 "Do Not Know". --- Sampling design In selecting participants, a multistage sampling technique was used to select the facilities offering OBA services.
First, all OBA facilities were classified according to type of ownership (public and private) and grouped according to county. The classification has been described elsewhere [16]. Within each county, a representative sample of public, NGO, FBO, and private facilities was randomly selected. In the second stage, a conservative sample size was calculated to be 313 respondents. To determine the sample size, the formula developed by Cochran [25] for large populations was used: n = z²pq/d², where n is the required number of clients/respondents, z is the critical value of the standard normal distribution for the 95% confidence interval around the true population proportion (1.96), p is the estimated proportion utilising OBA services (based on the proportion of women of reproductive age living below the poverty line in Kitui, Kiambu, Nairobi, Kisumu and Kilifi, estimated at 28.56% [26]), q = 1 − p, and d is the degree of accuracy (5%). The clients were divided equally amongst the chosen facilities (5 clients per facility). A simple random technique was used to select the OBA clients who sought SMH, LTFP, and SGBV care at the time of the study. To randomly select the participants at each facility, the researchers used the Stat Trek random number generator, which has been applied in other cross-sectional studies [27]. The tool uses a statistical algorithm to produce random numbers and provides instructions on how to use it (http://stattrek.com/statistics/random-number-generator.aspx). The researchers hit the calculate button and the number generator gave a random number table with five numbers between 1 and 20. Subsequently, the interviewers interviewed the participants represented by these numbers, one at a time, until the sample size was obtained. After data collection, the questionnaires were returned to the central OBA program management offices in Nairobi, where they were checked for completeness before inclusion in the database.
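Plugging the reported values into Cochran's formula reproduces the stated sample size; a quick sketch, using only the figures given in the text:

```python
import math

def cochran_n(p, d=0.05, z=1.96):
    """Cochran's large-population sample size: n = z^2 * p * q / d^2."""
    q = 1.0 - p
    return (z ** 2) * p * q / (d ** 2)

# p = 28.56% (women of reproductive age below the poverty line),
# 95% confidence (z = 1.96), 5% degree of accuracy (d = 0.05).
n = cochran_n(0.2856)
print(round(n, 2))    # 313.52
print(math.floor(n))  # 313, the conservative figure reported in the study
```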
Only fully completed questionnaires with all essential details were included in the analysis, and the "do not know" response in the questionnaire was treated as a neutral term for ease of interpretation. --- Data analysis The data were analysed using the Statistical Package for the Social Sciences (SPSS) version 18. Descriptive statistical analysis was carried out to describe the respondents' socio-demographic characteristics and the time taken to reach the facility either by bus or on foot. Additionally, descriptive statistical analysis was conducted on the women's perceptions of OBA services. The data were then subjected to exploratory factor analysis (EFA) of the 23 items to break them down into homogeneous sub-scales coherent with the quality dimensions proposed by Donabedian [20]. Principal component analysis with orthogonal varimax rotation was conducted. In addition, the Kaiser-Meyer-Olkin measure (KMO) was computed to evaluate the sufficiency of the data for EFA, and Bartlett's test of sphericity to evaluate the degree of patterned relationship between the items. Additionally, reliability analysis was performed to test the reliability of the scale and the internal consistency of the extracted factors, whereby Cronbach's alpha coefficients were calculated. A multivariate response model was used to study whether level of education, antenatal clinic visits, marital status, age, and county of residence were predictors of the factors related to perceived quality of reproductive care (Table 1). The questions on overall satisfaction and the reasons for satisfaction were analysed using Microsoft Excel 2010, and a Pareto chart [28] was obtained for the level of satisfaction. --- Ethical approval The authorization to carry out the study was obtained from the Ministry of Health, Kenya as part of routine monitoring of the process (Development of the Health Sector, Health Financing Support and Output Based Approach, Phase III, BMZ-No. KENYA 2010 65853) of the OBA services.
The proposal was approved by the health research unit of the Ministry of Health Kenya (MOH/HRD/1/(32)). Additionally, permission was obtained from the county headquarters and hospital administrators to proceed with the study. Verbal informed consent for the study was obtained from every woman who agreed to participate. The interviewers explained the purpose of the study to the mothers in their local dialect (language) and asked them whether they were willing to participate. For those who agreed, the interviewer indicated a unique patient identifier and the date of the interview on the front page of the questionnaire before proceeding with the interview, and the data were used only for the study. --- Results The study was conducted in 65 OBA accredited facilities (18 FBOs, 2 NGOs, 18 private, and 27 public) in Kiambu, Nairobi, Kilifi, Kisumu, and Kitui (Table 2). --- Socio-demographic data of the respondents Out of a sample of 313 respondents, 254 were included in the analysis, making the response rate 81.2%. Fifty-nine questionnaires that lacked imperative details on the independent variables (level of education, attendance at ANC clinic, marital status, and age), or where more than two attributes of quality were missing, were excluded from the analysis. These details were considered important to avoid bias in the multivariate response model and exploratory factor analysis, as was shown in other studies [10,29]. There were 198 women with safe motherhood (SMH) contacts, 55 with long-term family planning (LTFP) contacts, and one with a sexual and gender-based violence (SGBV) contact. All respondents were female, most of them married (83.1%) with primary-level education (57.9%). The majority of the respondents were aged 24 and below (53.9%), followed by those aged 25-34 years (38.6%) (Table 3).
The mean age of the respondents was 24.67 years (SD 6.127), and the mean time taken to reach the facilities on foot and by bus was 93.95 min (SD 304.877) and 36.83 min (SD 43.993), respectively. Additionally, the majority of the women had attended ANC clinics "three times or more" (76%). --- Women's perception of services provided The overall mean score for women's perception of quality of services was 3.43 (SD 0.629) (Table 4), implying that the majority perceived the quality of OBA services to be high. Specifically, women were happy with the way healthcare providers handled birth-related complications. Furthermore, women highly rated staff as "compassionate", "respectful", "able to prescribe drugs that are needed", and "able to examine post-partum women well." However, the adequacy of the number of facility staff was rated fairly low, implying that some facilities did not have enough staff. --- Factor analysis results Principal component analysis with orthogonal varimax rotation was conducted; the Kaiser-Meyer-Olkin measure (KMO) was 0.893, well above the 0.5 suggested by Kaiser (1974) [30], as shown in Table 5, indicating that the data were sufficient for exploratory factor analysis (EFA). Bartlett's test of sphericity, X² (276) = 2866.439, P < 0.001 (Table 5), showed that there was some degree of patterned relationship between the items. Items that had eigenvalues equal to or greater than 1 and factor loadings above 0.4, and factors that had three or more items, were retained and used for the EFA [29]. The EFA yielded five factors, which accounted for 61.5% of the variance explained by the data after extraction. These were used in defining five sub-scales (Table 5). All five factors were included in the analysis because each had more than three variables, as suggested by Hair et al. [29].
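The Bartlett statistic reported above can be reproduced from a correlation matrix. The sketch below uses synthetic illustrative data, not the study's; note that the reported df of 276 corresponds to 24 items entering the test, since df = p(p − 1)/2 and 24 × 23 / 2 = 276.

```python
import numpy as np

def bartlett_sphericity(X):
    """Bartlett's test statistic for sphericity on a data matrix X (n samples x p items).
    Tests whether the correlation matrix R differs from the identity:
    chi2 = -((n - 1) - (2p + 5) / 6) * ln(det(R)), with df = p(p - 1) / 2."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

# Synthetic data: 254 respondents (as in the study), 24 items sharing a common factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(254, 1))
X = latent + 0.8 * rng.normal(size=(254, 24))
chi2, df = bartlett_sphericity(X)
print(df)  # 276, matching the reported degrees of freedom
```

With correlated items, det(R) < 1, so the statistic is positive and large, which is what licenses proceeding with EFA.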
The five factors were labeled as follows: F1, "Staff conduct and practice", which had five variables (staff are compassionate, staff are respectful, staff are devoted to clients, staff are open, staff are honest) and explained most of the variance; F2, "Healthcare delivery", which had seven variables (staff very capable of diagnosing patient's illness, complications handled satisfactorily, staff examined post-partum women well, client received adequate information for the services to help make informed decisions, equipment is well suited for detecting medical problems, staff prescribed drugs that are needed, and staff have adequate knowledge in dealing with family planning issues, vaginal deliveries, caesarean deliveries, and sexual and gender-based violence cases); F3, "Physical facilities", which had five variables (clean water is adequate, there is enough privacy while handling cases, toilet facilities are adequate, hand washing facilities are adequate, environment of the facility is clean); F4, "Adequacy of resources", which had three variables (information provided on danger signs is adequate, bathing facilities for clients are adequate, number of staff in the facility is adequate); and F5, "Accessibility of care", which had three variables (patient can easily obtain drugs from the facility, there is adequate supply of drugs in the facility, and waiting rooms, examination rooms, and other rooms are adequate). Most of the factor loadings were greater than 0.4, and the communalities ranged from 0.499 to 0.815, showing that the factor solution had identified the variance associated with each factor. --- Reliability analysis results The reliability (internal consistency) of the sub-scales, as exhibited by Cronbach's alpha, ranged from 0.525 for F5 (showing low internal consistency) to 0.904 for the total score (indicating high internal consistency) (Additional file 2: Table S1 shows this in more detail).
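The Cronbach's alpha used for these sub-scale reliabilities is straightforward to compute: alpha = k/(k − 1) × (1 − Σ item variances / variance of the total score). A sketch with hypothetical scores on a five-item sub-scale (illustrative data, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from five women on a five-item agreement sub-scale.
scores = np.array([
    [3, 4, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [4, 4, 4, 3, 4],
    [3, 3, 2, 3, 3],
    [1, 2, 1, 2, 1],
])
print(round(cronbach_alpha(scores), 2))  # 0.94: highly consistent illustrative items
```

Higher inter-item covariance inflates the total-score variance relative to the sum of item variances, which is why the short, heterogeneous F4 and F5 sub-scales come out with lower alphas than the total score.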
The slightly lower scores for F4 and F5 can be explained by the small number of items in these groups, as has been noted by authors such as Haddad et al. [24]. The means of all five factors were fairly above three and roughly equal to the median scores, showing that the distribution of the women's perceptions was not skewed. --- Socio-demographic predictors of quality of reproductive health services Regression analyses were performed with the different sub-scales and the total score for perceived quality of OBA services as outcome variables. The B values (beta) were interpreted directly, as shown in Additional file 3: Table S2 and Additional file 4: Table S3. The results of the regression analyses indicate that marital status and the number of antenatal clinic (ANC) visits play insignificant roles in determining the perception of quality of OBA services within the different factors, except for the overall perceived quality of reproductive health care (Additional file 3: Table S2 and Additional file 4: Table S3). However, county (area of residence) is a significant determinant of the level of perceived quality. For instance, four factors (staff conduct and practice, physical facilities, adequacy of resources, accessibility of care) and the total score are perceived more poorly by women in Nairobi, Kitui, Kilifi, and Kisumu than in Kiambu County (the reference category). The results showed that staff conduct and practice is perceived more poorly by those aged 15-24 years than by those aged 25-34, and more poorly by those with primary education than by those with secondary education. Healthcare delivery is judged more poorly by those with tertiary education than by women with primary education, and more poorly by those aged 15-24 than by those aged 25-34 years. Additionally, physical facilities are perceived more positively by those without education or with secondary education than by those with primary education.
Those without education perceive adequacy of resources more favorably than those with primary education. Accessibility of care is judged negatively by individuals aged 15-24 and 35-44 years compared to individuals aged 25-34 years. Overall, the quality of OBA services was judged higher by both those without education and those with secondary education compared to those with primary education, and by those who had attended two or fewer ANC visits compared to those who attended three times or more. The variance explained by the various factors (R²) is higher than 10% for staff conduct and practice, healthcare delivery, physical facilities, and adequacy of resources. In general, only for perceived staff conduct and practice and for perceived adequacy of resources is a substantial part of the variance explained by socio-demographic factors. --- Overall level of satisfaction All clients were asked whether they were completely satisfied with the services provided at the OBA sites. Notably, 88.9% of the clients revealed they were satisfied despite the challenges with the issues addressed above (Additional file 5: Figure S1). Satisfaction was presented using the Pareto chart shown in Additional file 5: Figure S1, where the reasons cited for satisfaction included courteousness of the staff and little waiting time to be seen by medical staff. Other reasons included welcoming and friendly staff (10%), free service (8.5%), and quality service (5.5%). On the other hand, two clients were dissatisfied with the service because of lack of transport to the facility, while one client was dissatisfied because of the long waiting time before being attended to by the staff (Figs. 1 and 2). --- Discussion Our results show that F1, "staff conduct and practice", was judged relatively high.
This shows that the components of staff conduct and practice, namely the honesty, compassion, respect, openness, and devotion to work of healthcare workers, had a significant influence on the perceived quality of reproductive health services. Our findings are congruent with results from a study in Malawi, which showed that women were overall satisfied with the level of maternal care at the facilities because they were respected, welcomed, and listened to [31]. Our results also support the findings of a cross-sectional study in Ghana of mothers who delivered vaginally in two public hospitals, which revealed that they were treated with respect [32]. Additionally, the study is consistent with a study in Nicaragua where user satisfaction with vouchers was highly correlated with satisfaction with clinic reception and the clarity of doctors' explanations [33]. From these findings, we infer that women tend to associate the attitude of healthcare workers with the quality of care. The quality of F2, "Healthcare delivery", was rated as relatively good. For instance, the respondents were happy with the competence of staff in the facilities, who were capable of handling complications and giving enough information. This is analogous to a study in Malawi with respect to handling complications [34]. The findings differed from a study in Mulago, Uganda, where only 38% of the mothers revealed that they had received adequate information on the symptoms and expected health problems [35]. However, in Serbia, mothers were content with the information given by the midwives regarding their rights during and after delivery, which partly supports our findings [34]. Additionally, women perceived that staff had adequate knowledge in dealing with SMH, LTFP, and SGBV issues. These findings suggest that a strong focus on the quality of care has contributed to increased service delivery in OBA sites.
Women judged F3, "physical facilities", F4, "adequacy of resources", and F5, "accessibility of care", as relatively moderate. Most women perceived that clean drinking water, the availability of bathing facilities especially after delivery, and privacy when being examined were essential components of a good healthcare facility. In essence, toilet and hand washing facilities enhanced the level of perceived quality of care. Moreover, within OBA sites, perceived quality of care was linked to an adequate number of staff and the supply of drugs. The findings were comparable to a study in India, which indicated that women were happy with the availability of essential drugs, particularly during complications, and the availability of health workers [36]. Drugs are important determinants of quality of care, and the absence of drugs could lead to impaired perception of the quality of services [10]. Our findings also reveal that women are content with the majority of quality aspects despite the low number of healthcare workers. This can probably be explained by the few health workers going well beyond their normal workload to ensure that the mothers receive the services they need. Women seem to be aware of the shortage of workers, but appreciate the services they provide. An important finding from this study was that the majority of respondents were young people aged 24 years and below who made at least three ANC visits, which is comparable with the Kenya Demographic Health Survey (KDHS) 2014 results [9]. However, women needed relatively long hours to reach OBA facilities, which was comparable to other studies [15,19,36] and greatly influenced women's perception of the quality of care. The study has revealed that area of residence played a key role in determining the level of perceived quality of OBA services compared to other socio-demographic characteristics.
However, the study identified some impact of the number of ANC visits, level of education, and age on the perception of quality, which is congruent with results from other sub-Saharan African studies [32,37]. --- Study limitations In studies involving perception of quality and satisfaction with the level of care, there is a propensity to provide favorable answers to the questions [24]. Thus, much as the study is relevant, it should be used with caution. Besides, generalizing it to other countries is not warranted. Secondly, the sampling design provided enough users of OBA services to examine the research question; however, in some remotely located facilities, we did not find the designated number of women because they experienced difficulty in accessing the facilities. Thirdly, women were interviewed within the vicinity of the clinic or hospital, and this may have influenced the way they answered the questions. --- Recommendations a. Health care managers can use our findings as a guide to evaluate different areas of healthcare delivery, thereby improving resources and physical facilities that are crucial in elevating women's level of satisfaction with the quality of care. Moreover, healthcare workers can use the study as a guide to enhance accessibility of care so that improved levels of satisfaction can be obtained. b. It is imperative for future programs to incorporate transport vouchers to reduce the time taken to get to the facilities, as this is a potential determinant of perception of quality. c. For the program management unit (PMU), the index for perceived quality and women's satisfaction should be incorporated into practice using the results from this study. While different facilities reacted differently to reimbursements and incentives, some facilities improved their structures and were able to attract more women, who were more satisfied.
Therefore, it is imperative to introduce mechanisms in the voucher strategies that can capture perceived quality and satisfaction routinely. The 23 questionnaire items, which resolve into five factors, show the key areas that the PMU needs to improve. --- Conclusion The conduct and practice of healthcare workers is an important determinant of women's perception of quality. Women take a keen interest in evaluating staff attitudes. Healthcare workers within different areas of residence need to implement different strategies, unique to each area, that will raise levels of satisfaction and improve the perception of the quality of healthcare. Women were overall satisfied with the way they were being handled at the OBA facilities. A future study could also assess whether healthcare providers' perception of care differs from users' perception. Policy makers should respect women's quality perceptions within OBA services and work towards improving quality of care and enhancing utilization. --- Availability of data and materials Data for this report are under the primary jurisdiction of the Ministry of Health in Kenya. Enquiries about using the data can be made to the head of the Program Management Unit for the OBA study. --- Additional files Additional file 1: Data collection tool for RH-OBA clients. (PDF 455 kb) Additional file 2: Table S1. Reliability analysis of Factors and total score. (DOCX 13 kb) Additional file 3: Table S2. Factors related to perceived quality: Multivariate response model for F1, F2, and F3. (DOCX 14 kb) Additional file 4: Table S3. --- Authors' contributions BO and UK were involved in the conception and design, data analysis and interpretation, drafted the manuscript, and are accountable for all aspects of the work. SK, CO, and SOM were responsible for data curation, formal analysis, and methodology. SMK, ST, NM, SG, BB, MR, JK and CN participated in the formulation of the methodology, investigation, and revision of the manuscript.
All the authors read and approved the final manuscript. --- Ethics approval and consent to participate The study was approved by the health research unit of the Ministry of Health Kenya (MOH/HRD/1/ (32)). Additionally, permission was obtained from the county headquarters and hospital administrators to proceed with the study. Verbal informed consent for the study was obtained from every woman who agreed to participate, as approved by the ethics committee. The interviewers explained the purpose of the study to the mothers in their local dialect (language) and asked them whether they were willing to participate. For those who agreed, the interviewer indicated a unique patient identifier and the date of the interview on the front page of the questionnaire before proceeding with the interview, and the data were only used for the study. --- Consent for publication Not applicable. --- Competing interests The authors declare that they had no competing interests when conducting the research. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. --- Abstract Background: This is a facility-based study designed to assess the perceived quality of care and satisfaction with reproductive health services under the output-based approach (OBA) in Kenya from the clients' perspective. Method: An exit interview was conducted with 254 clients in public health facilities, non-governmental organizations, faith-based organizations, and private facilities in Kitui, Kilifi, Kiambu, and Kisumu counties, as well as in the Korogocho and Viwandani slums in Nairobi, Kenya, using a 23-item scale questionnaire on the quality of reproductive health services. Descriptive analysis, exploratory factor analysis, reliability testing, and subgroup analysis using linear regression were performed.
Results: Clients generally had a positive view of staff conduct and healthcare delivery but were neutral on hospital physical facilities, resources, and access to healthcare services. There was a high overall level of satisfaction among the clients, with quick service, good handling of complications, and a clean hospital stated as some of the reasons that enhanced satisfaction. The county of residence was shown to greatly impact the perception of quality, with other socio-demographic characteristics showing low impact. Conclusion: The majority of the women perceived the quality of OBA services to be high and were happy with the way healthcare providers were handling birth-related complications. The conduct and practice of healthcare workers is an important determinant of clients' perception of the quality of reproductive and maternal health services. The findings can be used by health care managers as a guide to evaluate different areas of healthcare delivery and to improve resources and physical facilities that are crucial in elevating clients' level of satisfaction.
Introduction Health-seeking behavior is one of the major determinants of health outcomes in a community. It determines how health services are used, which in turn influences population health outcomes [1]. Health or care-seeking behavior is defined as any action undertaken by individuals who perceive themselves to have a health problem, or to be ill, in order to find an appropriate remedy [2]. Individual attributes, the nature of the community in which a person resides, and the relationship between individual and environmental factors are all linked to the health-seeking behavior of an individual [3]. There has been a growing interest in research related to health-seeking behaviors over the years, both locally and internationally [4,5]. Studies conducted locally among the urban population found that 63.5% of participants used self-medication for minor ailments [6], 85% consumed over-the-counter (OTC) medications [7], while 67% chose to consult a physician when they experienced any health problems [8]. A national study in Brazil indicated that the prevalence of use of medicine via self-medication was 18.3% [9]. Another study among the rural and urban population of Karachi, Pakistan reported that 93% of the respondents had practiced self-medication [10]. Several theoretical models explain health behaviors: the Evans and Stoddart Model [11], the Health Belief Model [12], the Grossman Model of Health Demand [13], and Andersen's Behavioral Model [14]. Andersen's Behavioral Model of Health Care Utilization is one of the most widely used for predicting health-seeking behaviors due to its convenience of application and popularity in modeling studies involving healthcare accessibility and utilization [14]. Geographic location has a significant influence on the accessibility of healthcare services [15,16], and access to healthcare is reported as one of the many pivotal factors contributing to the gap in health equity between urban and rural populations [17].
The Malaysian health system is based on a geographically widespread healthcare delivery system designed to provide the entire population with access to public health services, in both rural and urban localities [18]. An urban area in Malaysia is classified as a gazetted area with a combined population of 10,000 or more, whereas a rural area is defined as a gazetted area with a combined population of less than 10,000 [19]. The equitable healthcare financing and structured public healthcare system in Malaysia [20] do not inherently translate to equitable access, because geographical barriers exist [21], among other factors. In the health sector, access and utilization are interrelated concepts, with access playing a critical role in the utilization of healthcare services. Access to health care was defined as "actual use of personal health services and everything that facilitates or impedes their use" [22]. According to Levesque's Conceptual Framework of Access to Health, the five dimensions of accessibility are approachability, acceptability, availability/accommodation, affordability, and appropriateness [23]. Internationally, research has documented differences in access to and utilization of health care services between urban and rural populations, which consequently affected their health outcomes. Rural patients experienced more barriers to accessing health care (e.g., distance, travel time, transportation, infrastructure, medical resources, staff distribution, and clinic distribution) than their urban counterparts [18,[24][25][26][27][28][29], which resulted in having to restart the care-seeking process, inappropriate use of emergency departments, unmet need for care, or exacerbation of health problems [25,30]. The introduction of the New Economic Policy in the 1970s increased the urbanization rate from 26.8% in 1970 to 71.0% in 2010, and this was expected to rise to 76.6% in 2020 and 88.0% in 2050 [31].
As of 2020, the Malaysian public healthcare system has a distribution of 3171 clinics and 154 hospitals throughout the country, which also provide mobile clinic services to remote areas. There were 7988 registered clinics and a total of 250 licensed hospitals, maternity homes, nursing homes, and hospices among the private healthcare facilities in Malaysia, which are mostly concentrated in urban areas [18,32,33]. The allocation of healthcare services and resources within the public sector was uneven, heavily favoring urban clinics [24]. Compared to rural areas, urban areas have a greater density of primary care clinics and health workers per capita (2.2 clinics and 15.1 healthcare practitioners per 10,000 population in urban areas versus 1.1 clinics and 11.7 healthcare practitioners per 10,000 population in rural areas) [24]. Malaysia has a dual healthcare system, in which the main providers of healthcare are the public and private sectors [34]. To ensure efficiency through decentralization, the hierarchical organization structure of the Ministry of Health (MOH) Malaysia is stratified into Federal, State, and District levels [35]. Funded through general revenue, the public sector aims to provide universal access with a focus on low-cost but high-benefit health care programs. To keep up with population growth, especially in urban areas, the dual healthcare system has developed with the private sector serving mostly urban regions and better-off patients with fee-for-service primary and secondary care, while the public sector maintains its social equity mission, including primary care services for poor and rural populations [34]. As an expansion of healthcare services in Malaysia, pharmacy practice has also evolved beyond traditional dispensing, from a product-oriented to a patient-oriented service in which in-house pharmacists provide counseling on drug safety, poison information, and medication understanding.
Some community pharmacies offered other services such as blood pressure monitoring, chronic disease screening [36], and weight management [37,38]. Although the expansion of community pharmacies in Malaysia means people may have more access to over-the-counter medicine, the MOH has implemented rules on prescription-only medicines, such as antibiotics. Other studies found that the causes of misuse and overuse leading to antibiotic resistance are varied [39], and largely due to antibiotics dispensed without a prescription [40]. National guidelines on antibiotics have also been made accessible to the public and healthcare practitioners [41,42]. The duality of Malaysia's healthcare system is further magnified by the practice of both conventional Western medicine (also referred to as modern medicine) and traditional and complementary medicine (T&CM) as part of its healthcare services [18,43,44]. Under the enforcement of the T&CM Act, T&CM such as herbal therapy, acupuncture, and traditional massage was also incorporated in some public and private hospitals as supplementary treatment modalities [43,45]. This was in line with the World Health Organization's efforts to maximize the potential of safe and quality T&CM services as a complement to modern medicine among its member states, to achieve holistic healthcare as part of the Universal Health Coverage (UHC) initiative [43,46,47]. Understanding health-seeking behavior and its associated factors would enable health systems to review strategies to accommodate healthcare expectations in the community [48]. Although this knowledge is vital in the proper design of healthcare policies, very few studies have been conducted at the national level to explore the factors which influence health-seeking behavior among the adult population in the urban and rural areas of Malaysia.
In this study, we aim to (1) determine the characteristics of respondents based on locality (urban-rural), (2) determine the prevalence of sick Malaysian adults based on locality, and (3) determine the factors associated with the health-seeking behavior of Malaysian adults who reported sickness, according to locality. --- Materials and Methods --- Study Design and Participants The data for this study were obtained from the National Health and Morbidity Survey (NHMS) 2019, a cross-sectional household survey with a two-stage stratified sampling method to ensure national representativeness. It was conducted among the population in Malaysia who were non-institutionalized and residing in the selected households for at least 2 weeks before the data collection. States and federal territories constituted the primary stratum, and urban and rural areas within the states were considered the secondary stratum. The sampling frame for this survey was provided by the Department of Statistics Malaysia using the National Population and Housing Census 2010. All 13 states and 3 federal territories were included in this survey. Within each state, the required number of Enumeration Blocks (EBs) from urban and rural areas were randomly chosen. First-stage sampling involved a random selection of 463 EBs (350 urban and 113 rural) from the total EBs in Malaysia (over 75,000 EBs) via a probability proportional to size sampling technique. Subsequently, in each selected EB, 14 Living Quarters (LQs) were selected during the second-stage sampling. All households within the selected LQs and all members of the households were invited to participate in this survey. A total of 5365 LQs were successfully visited, giving an LQ response rate of 92.6%, and a total of 16,688 respondents were successfully interviewed, giving an individual response rate of 90.0%. The overall response rate for this community-based survey was therefore 83.4%.
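The two-stage selection described above can be sketched in code. The sketch below is a minimal illustration using a synthetic frame with invented EB sizes, not the survey's actual data; it draws 463 EBs with probability proportional to size and then targets 14 LQs per selected EB.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sampling frame: ~75,000 enumeration blocks (EBs) with varying
# household counts (the real frame comes from the 2010 Population and Housing Census).
n_frame = 75_000
eb_sizes = rng.integers(80, 120, size=n_frame)  # households per EB (synthetic)

# Probability proportional to size: larger EBs are more likely to be drawn.
selection_prob = eb_sizes / eb_sizes.sum()
selected_ebs = rng.choice(n_frame, size=463, replace=False, p=selection_prob)

# Second stage: 14 living quarters (LQs) targeted in each selected EB.
n_lqs = len(selected_ebs) * 14

print(len(selected_ebs), n_lqs)  # 463 EBs, 6482 targeted LQs
```

The design thus targets up to 463 × 14 = 6,482 LQs; the 5,365 LQs actually visited reflect the 92.6% LQ response rate among eligible LQs.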
A detailed methodology and sampling design of the survey are described in the NHMS 2019 official report [44]. A total of 10,933 Malaysian adults aged 18 years and over participated in the survey. Only data from respondents with complete responses on the potential predictors (sociodemographic characteristics, enabling, and health need factors), experienced acute health problems, and health-seeking behaviors (seeking treatment from a healthcare practitioner and self-medication) were included in this study. In this study, the proportion of missing data was 4.11% (n = 449), and a missing data proportion of less than 5% is acceptable for complete case analysis [49]. When a preliminary analysis of all respondents was conducted, including those with missing data, no differences in results were observed. --- Data Collection In NHMS 2019, data were collected from July to October 2019 by trained research assistants, via face-to-face interviews using a validated questionnaire [50,51]. The questionnaire was programmed into an application and uploaded onto digital tablets as mobile data collection tools. The tablets were used to collect data, store and back up data on SD cards, and upload data to the central system. To ensure the minimum required sample size was achieved, houses found vacant or closed during the first visit were revisited at least three times. The tenets of the Declaration of Helsinki were followed during the study. Written informed consent was obtained from all participants before the interviews. The Medical Research and Ethics Committee (MREC), MOH Malaysia granted permission to carry out the National Health and Morbidity Survey 2019 (NMRR-18-3085-44207). --- Study Variables 2.3.1. Andersen's Behavioral Model of Health Care Utilization Andersen's Behavioral Model of Health Care Utilization was adapted for this study for its convenience of application and popularity in modeling studies involving healthcare accessibility and utilization.
The model suggests that the health-seeking behavior of individuals is influenced by three groups of factors: sociodemographic characteristics, enabling factors, and health needs. Sociodemographic characteristics describe the tendency to use the services (i.e., sex, ethnicity, age, education level, and marital status), enabling factors describe the resources available to use the health services and facilities (i.e., wealth status, social support, and access to health resources), and health need factors represent the perceived need for healthcare services [14]. --- Dependent Variables In this study, two dependent variables were included: (1) seeking treatment from healthcare practitioners among those who reported sickness in the last 2 weeks before the interview, and (2) self-medication among those who reported sickness in the last 2 weeks before the interview. Those who reported sickness in the last 2 weeks before the interview were respondents who answered "yes" to the question "In the last 2 weeks, did you experience any of the following health problems such as fever, sore throat, difficulty in swallowing, running nose or blocked nose, cough, and others?" Those who answered "yes" were then asked about their health-seeking behavior (yes or no) based on the questions "In the last 2 weeks, did you seek treatment/medication or advice from healthcare practitioners?" and "In the last 2 weeks, did you take medicine without advice from healthcare practitioners?" In this study, the term "seeks treatment" was used to refer to "seek treatment/medication or advice from healthcare practitioners" in short. Healthcare practitioners refer to modern healthcare practitioners, including community pharmacists, as well as traditional and complementary medicine practitioners (e.g., spiritual healer, Chinese herbalist, Ayurvedic practitioner, and Islamic medicine practitioner).
Self-medication was used to refer to "take medicine without advice from healthcare practitioners". --- Independent Variables Sociodemographic Characteristics In this study, the sociodemographic variables included were: sex (male or female); ethnicity (Malay or non-Malay); age (a continuous variable, grouped into 18-34, 35-59, or 60+ years); education level (no formal education, primary, secondary, or tertiary education); and marital status (single, married, or widow(er)/divorced/separated). The age of respondents in years was grouped into "18-34", "35-59", and "60+ years" based on the age distribution pattern. Education levels were categorized into four groups: no formal education, primary, secondary, and tertiary education. Respondents who had never been to school to get any form of education or did not complete primary school were categorized as 'no formal education', while those who completed Standard Six were categorized as 'primary' education level. 'Secondary' education level represented those with at least five years of schooling at secondary school, whereas 'tertiary' education level represented those who completed Form Six or received certificates, diplomas, or academic degrees. --- Enabling Factors The enabling factors included were: employment status (government employee, private employee, self-employed, or unemployed); income (quintile 1 (Q1), quintile 2 (Q2), quintile 3 (Q3), quintile 4 (Q4), or quintile 5 (Q5)), calculated from total monthly household income and then grouped into quintiles; and healthcare coverage (yes or no). Q1 represents the poorest 20% of the population and Q5 the richest 20%. Healthcare coverage was defined as having supplementary financial coverage for health care such as government employees' health benefits, pensioner cards, a government-specific health fund, personal health insurance, employer-sponsored insurance, and panel clinics.
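The quintile grouping of household income described above amounts to a standard quantile cut. A minimal sketch using pandas on synthetic incomes follows; the variable names and income distribution are illustrative, not the survey's.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic monthly household incomes (right-skewed, as incomes typically are).
income = pd.Series(rng.lognormal(mean=8.0, sigma=0.6, size=1_000))

# Split into five equal-sized groups: Q1 = poorest 20%, Q5 = richest 20%.
quintile = pd.qcut(income, q=5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"])

print(quintile.value_counts().sort_index())  # 200 households per quintile
```

`pd.qcut` places the cut points at the empirical 20th, 40th, 60th, and 80th percentiles, so every household in Q1 earns less than every household in Q5 by construction.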
--- Health Need Factors The proxy measures for health needs included were: self-rated health (good to excellent, fair, or poor to very poor); and presence of at least one long-term condition (yes or no), assessed from the questions "Have you ever been told by a doctor or assistant medical officer that you have diabetes?", "Have you ever been told by a doctor or assistant medical officer that you have high blood pressure?" and "Have you ever been told by a doctor or assistant medical officer that you have high cholesterol?" For the analysis, respondents who answered "yes" to at least one of the conditions were coded as "yes" for "presence of at least one long-term condition". --- Statistical Analysis Secondary data analysis was conducted using STATA version 14 (Stata Corp, College Station, TX, USA). Complex sample descriptive statistics were used to illustrate the sociodemographic, enabling, and health need characteristics of the respondents, according to their locality (urban-rural). Sample weights and study design were taken into consideration using a complex sampling design in all data analyses. The weight used for estimation was calculated as the product of the inverse of the probability of sampling, a non-response adjustment factor, and a post-stratification adjustment by age, gender, and ethnicity. Comparison of characteristics between urban and rural populations was performed using the chi-square test. Univariate analysis with the chi-square test and multivariable logistic regression analysis, presented as crude odds ratios (COR) and adjusted odds ratios (AOR) with 95% confidence intervals (CI), were used to predict the characteristics of those who sought treatment from healthcare practitioners and those who self-medicated, stratified by urban-rural locality. All variables with a p-value < 0.25 in the univariate analysis were considered as predictive variables and entered into the multivariable regression analysis [52].
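The estimation weight just described is a product of three components. A minimal numeric sketch follows; the adjustment values are hypothetical, chosen only to illustrate the arithmetic rather than taken from the survey.

```python
# Hypothetical values for one respondent:
p_selection = 0.0042    # overall probability of selection (EB stage x LQ stage)
nonresponse_adj = 1.11  # inflates weights to account for sampled units that did not respond
poststrat_adj = 0.97    # aligns weighted totals with known age/gender/ethnicity counts

design_weight = 1.0 / p_selection                         # base inverse-probability weight
final_weight = design_weight * nonresponse_adj * poststrat_adj

print(round(design_weight, 1), round(final_weight, 1))
```

Each respondent then "represents" roughly `final_weight` people in the population, which is how 10,484 respondents can represent 18.9 million adults in the weighted estimates.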
The multivariable analysis was performed for urban and rural localities separately to examine the predictive factors for seeking treatment and self-medication using four models, while adjusting for all other potential covariates such as sociodemographic characteristics, enabling, and health need factors. The AOR with a 95% confidence interval was determined, where a p-value < 0.05 was considered statistically significant. The goodness of fit of the models was tested using Hosmer-Lemeshow statistics, and a p-value > 0.05 was considered a good fit. --- Results A total of 10,484 respondents representing 18.9 million population were included in the analysis. The respondents comprised an urban population (76.1%) and a rural population (23.9%). Table 1 shows the sociodemographic characteristics, enabling, and health need factors of the respondents, stratified by locality. The urban and rural populations had significant differences in all factors, except marital status and the presence of at least one long-term condition. Table 2 presents the prevalence of Malaysian adults who reported sickness. The overall prevalence of Malaysian adults who reported sickness was 16.1%. Of these, more than half (57.3%) sought treatment from healthcare practitioners, and about a quarter (23.3%) self-medicated. The prevalence of Malaysian adults in the rural areas who reported sickness (17.6%) was higher than among the urban adults (15.6%). There were significant differences in the prevalence of those who reported sickness by different sociodemographic characteristics. Among the urban population, a higher prevalence of self-reported sickness was seen among females. Among the rural population, a higher prevalence of self-reported sickness was seen among non-Malays, those aged 60 and over, those without formal education, as well as the widow(er)/divorced/separated.
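The adjusted odds ratios used throughout this analysis are simply exponentiated coefficients from multivariable logistic regression. As a self-contained illustration of that relationship, the sketch below fits a one-covariate logistic model to synthetic data with a plain Newton-Raphson loop; the covariate, effect size, and sample are invented, and numpy alone is used rather than the survey-aware estimation in STATA.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Synthetic predictor: 1 = poor/very poor self-rated health, 0 = otherwise.
poor_health = rng.integers(0, 2, size=n)

# True model: log-odds of seeking treatment = -0.5 + 1.0 * poor_health,
# i.e. a true odds ratio of exp(1.0) ~ 2.72.
logit = -0.5 + 1.0 * poor_health
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Newton-Raphson fit of a logistic regression with an intercept.
X = np.column_stack([np.ones(n), poor_health])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                       # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratio = np.exp(beta[1])
print(round(odds_ratio, 2))  # close to the true OR of ~2.72
```

Reading an AOR of 2.94 for poor self-rated health therefore means the odds of seeking treatment are estimated to be 2.94 times those of the reference group, after adjusting for the other covariates in the model.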
Prevalence of self-reported sickness among those who self-rated their health as poor to very poor and those with at least one long-term condition was higher among both the urban and rural populations. Among those who reported sickness, more than half (57.3%) sought treatment from healthcare practitioners, and about a quarter (23.3%) self-medicated. Table 3 displays the results of the logistic regression analysis of health-seeking behaviors, with COR and AOR, and their CIs and p-values. Models I and II assessed the factors associated with seeking treatment among self-reported sick adults in urban and rural localities, respectively. The multivariable logistic regression revealed that employment status and self-rated health were significantly positively associated with seeking treatment among the urban population. Among urban dwellers, government employees were about 2 times (AOR = 1.82, 95% CI: 1.01-3.27) more likely to seek treatment than those who were self-employed. Urban dwellers who rated their health as poor to very poor were about 3 times (AOR = 2.94, 95% CI: 1.47-5.88) more likely to seek treatment than those who rated it good to excellent. Among the rural population, on the other hand, self-rated health and the presence of any long-term conditions were significantly positively associated with seeking treatment. Rural dwellers who rated their health as poor to very poor were about 4 times (AOR = 3.68, 95% CI: 1.36-9.97) more likely to seek treatment than those who rated it good to excellent, whereas those with at least one long-term condition were about 2 times (AOR = 2.06, 95% CI: 1.23-3.45) more likely to seek treatment than those with none. Models III and IV assessed the factors associated with self-medication among self-reported sick adults in urban and rural localities, respectively.
The regression revealed that education level was significantly positively associated with self-medication among urban dwellers, where being without formal education increased the likelihood of self-medicating by about 4.3 times (AOR = 4.29, 95% CI: 1.81-10.17). Self-rated health, by contrast, was significantly negatively associated with self-medication among the urban population: urban dwellers who rated their health as poor to very poor were less likely (AOR = 0.40, 95% CI: 0.16-0.98) to self-medicate than those who rated it good to excellent. However, for self-medication among those who reported sickness in the rural locality, no significant association was found. The Hosmer-Lemeshow test showed the goodness of fit of the models (p > 0.05). Thus, these models were considered a good fit. --- Discussion This study aimed to determine the characteristics of respondents and the prevalence of Malaysian adults who reported sickness based on their urban-rural locality, as well as the factors associated with their health-seeking behaviors. All variables, excluding marital status and the presence of at least one non-communicable disease, differed significantly between the urban and rural populations. The overall prevalence of Malaysian adults who reported sickness was 16.1%, and it was higher among the rural population than the urban population. A higher prevalence of self-reported sickness among those who self-rated their health as poor to very poor and those with at least one long-term condition was seen among both the urban and rural populations. More than half of those who reported sickness sought treatment from healthcare practitioners, while only about a quarter self-medicated. Self-rated health was one of the factors associated with health-seeking behavior among Malaysian adults who reported sickness in both the urban and rural areas.
Overall, less than a fifth of Malaysian adults reported sickness, with the rural population exhibiting a higher prevalence than those from the urban areas. Similarly, other published studies found that illnesses were more prevalent among the rural population [27,28,53]. As an upper-middle-income country, Malaysia's population has benefited from a well-developed health care system, together with improved access to clean water, sanitation, and better child nutrition, which was reinforced through programmes targeted at reducing poverty, increasing literacy, and providing modern infrastructure [54], and these developments may have an effect on overall population health. Compared to Denmark (about 9 out of 10 respondents reported having experienced at least one symptom) [55] and Hong Kong (46.5% of the respondents aged between 16 and 54 years reported having any symptoms) [56], Malaysia's population had a better health status in terms of the overall prevalence of reported recent illnesses. However, owing to variations in methodology and variables evaluated, these results are not directly comparable. More than half of those who reported sickness (58.9% of urban and 52.6% of rural) sought treatment from healthcare practitioners in the current study, and the prevalence was lower among the rural population. Seeking treatment from healthcare practitioners was the first choice of health-seeking behavior reported by previously published studies [29,57]. However, given that our results suggest that only slightly more than half of the population sought medical attention, this raises concerns about the proportion of people who did not seek appropriate treatment or care. A study conducted locally reported that 4.9% and 5.4% of urban and rural participants, respectively, did not seek treatment when they were sick [29].
Low perception of illness as a major health problem [44,58], low perceived need to seek care [59], work commitments [44], financial constraints [3,44,59,60], and geographical locale [61] were barriers reported in previous studies. As health needs and challenges have changed over the past decade, policymakers must consider the factors that influence people's health-seeking behavior. For the sustainable and equitable provision of health care to disadvantaged and underserved groups, removing barriers and integrating public and private health services are crucial [62]. Malaysia is among the countries that have achieved UHC, with the vast majority of the population receiving comprehensive public healthcare services [63]. Malaysia, like most other countries, has a two-tiered healthcare system, with a highly subsidized public sector and a fee-for-service private sector [64]. However, this study's findings showed that sick rural adults were less likely than their urban counterparts to seek healthcare from a healthcare practitioner. While most studies from other countries have identified sociocultural norms as determinants, distance and proximity to a healthcare facility were also identified as significant factors for this behavior [65][66][67][68]. Within the public sector, the distribution of healthcare facilities and resources heavily favored urban areas [20,[25][26][27][28][29]31]. Furthermore, the current study found that a larger percentage of people in rural areas fall into the lower income quintiles. Inadequate access to health care and a lack of income are two reported factors that contribute to the rural population's poor health [65][66][67][68]. As the majority of Malaysians with low socioeconomic status come from rural areas [69], this calls for more efforts to promote healthcare utilization and enhance accessibility in remote and rural areas.
This study found that less than a fifth of the population who reported sickness practiced self-medication, which was lower than in previous population-based studies [27,28] as well as other local studies [70][71][72], but higher than in a study conducted in Sri Lanka (urban: 12.2%, rural: 7.9%) [73]. This could be because self-medication in Malaysia is more costly than seeking treatment from healthcare practitioners, as patients are charged only a minimal fee of one Malaysian Ringgit (MYR 1; about USD 0.24) for visits to public health clinics [74]. Although self-medication helps reduce the burden on medical care, it is linked to many possible risks [70,[75][76][77]. This issue highlights the importance of healthcare practitioners in promoting the rational use of medicines, including information on potential side effects, to ensure informed and responsible self-medication [77]. Moreover, public health awareness programmes can be organized as part of larger public health efforts to help people understand disease processes and positive health behaviors. According to the World Health Organization, education is one of the key social determinants of health, and addressing it appropriately is essential to promote health and reduce long-standing health inequities [78]. Among the urban population in our study, those with no formal education were more likely to self-medicate than those with higher education levels. The influence of education level on self-medication practice is consistent with a study in Saudi Arabia [79]. In Malaysia, community pharmacies that are strategically and conveniently located in shopping malls and supermarkets [33] provide better access to OTC medications, especially among the urban population, where amenities and infrastructure are more readily available. While OTC medications have been shown to be safe and appropriate for use without the supervision of a healthcare provider, unwanted effects may result if they are used irresponsibly [80].
Because people with a lower degree of education usually have lower health literacy [81], inadequate health literacy among the less educated, coupled with easy access to medication, may result in serious consequences. This prompts the need to improve health literacy, particularly awareness of the negative consequences of self-medication for one's wellbeing: the combined effect of easy access to medications and the higher likelihood of self-medication among those with the lowest educational attainment in the urban population is a worrying situation. Campaigns such as 'Know Your Medicines', in line with Malaysia's national health agenda, 'Agenda Nasional Malaysia Sihat' (ANMS), advocate the importance of knowing one's medications to improve public awareness and empower personal health [82]. Results from our study indicated that self-rated health is one of the important factors associated with health-seeking behavior. Those who self-rated their health as poor to very poor were more likely to seek care than those who self-rated their health as good to excellent, regardless of locality. Conversely, among the urban population, those who self-rated their health as poor to very poor were also less likely to self-medicate than those who self-rated it good to excellent. The published literature highlights associations between poor self-rated health and health-enhancing behaviors, the utilization of health services [83,84], and self-medication [79]; however, the reported results are mixed. Previous studies have established that a single-item measure of self-rated health provides a holistic view of the population's physical and emotional well-being, as well as the ability to predict health-seeking behavior and healthcare use [50,83,85]. In our study, self-rating one's health as poor to very poor was significantly associated with the presence of long-term condition(s) (Table S1), which is consistent with another large-scale study conducted in China [86].
The relationship between the presence of long-term condition(s) and poorer self-rated health could explain the influence of self-rated health on health-seeking behaviors: the nature of a long-term condition, which demands follow-up appointments and a formal prescription to obtain medications, may lead this group of people to use healthcare services and make them less likely to self-medicate. Among the urban population, government employees were more likely than the self-employed to seek treatment when they were sick. The association between occupational status and treatment-seeking behavior in this study is consistent with another study conducted in China, which found that self-employed people were less likely to take remedial action and seek medical help after becoming ill [87]. This could be attributed to time constraints and financial issues, as self-employed individuals are more likely to have irregular working hours. Furthermore, their incomes are directly dependent on their work [88]. Additionally, government employees are entitled to a higher number of paid sick leave days [89] compared to those working outside the public sector [90]. Rural residents with at least one long-term condition were more likely to seek medical treatment than those without, which concurs with previous research that found an association between the presence of chronic illnesses and seeking healthcare services [50,91]. Two-thirds of public primary care clinics in Malaysia are in rural areas [24], and a national cross-sectional study of randomly selected clinics found that doctors in public clinics saw more chronic diseases, such as hypertension and diabetes, as well as follow-up cases, whereas doctors in private clinics saw more acute and minor illnesses [92].
This occurrence may largely be attributable to the heavily subsidized public healthcare services provided by the Malaysian government, which also cover the cost of lifelong medications; this is more economical for patients with chronic disease, as the nominal fee grants access to the entire spectrum of public healthcare services in the clinics [18,21,64,74,92]. This economic factor may have driven private clinics away from rural areas [18,24]. Perceived severity or fear of the consequences of the disease [93] might also be a reason for seeking treatment among the rural population. This study found that gender is not associated with health-seeking behavior among Malaysian adults who reported sickness. Although women are perceived as more likely to seek treatment and utilize health services compared to men [5], previous local studies found that, in general, there was no difference in healthcare utilization across genders [94][95][96]. In addition, a previous national health survey reported no difference between genders in the autonomy of decision making for healthcare [28]. The sample size for this study was large, consisting of 10,484 adults covering both urban and rural areas. The proportion of respondents from urban and rural areas in this survey was very close to that of Malaysia's actual population in the same year [97]. Despite its strengths, this research has a number of limitations. Because of its cross-sectional nature, no causal association between health-seeking behavior and associated factors could be established. Seasonal change could not be measured, as the data were collected at just one point in time. Finally, since this analysis used self-reported data on past events, there is a possibility of recall bias.
--- Conclusions This cross-sectional study showed that sociodemographic, enabling, and health need characteristics were associated with health-seeking behaviors among Malaysian adults who reported sickness in both urban and rural localities, with education level, employment status, self-rated health, and the presence of at least one long-term condition as the associated factors. This study revealed gaps in healthcare services and room for improvement even though Malaysia has already achieved UHC status. Understanding the factors which influence health-seeking behavior among the urban and rural populations could close the gaps in healthcare utilization among the Malaysian population. Future policies should move towards specific targeted approaches that focus on the rural and vulnerable population, especially regarding access to healthcare services as well as their knowledge and literacy on seeking proper medical care. Taking care of health should be a culture, a way of life. It should be embedded and be a shared responsibility across all sectors, in line with the Sustainable Development Goals. Social services actors and organizations, which administratively are not under the purview of the MOH Malaysia, are closer to the people's hearts as compared to governmental organizations. Political players are the main drivers with powers to influence the masses. Mainstream and social media are also key players in educating the nation regarding health matters. We recommend active two-way engagements, dialogues, and close collaborative efforts with these parties for a shared vision of a healthy nation. We also recommend further in-depth studies on factors such as the perceived quality of services received, which may provide a deeper understanding of the health-seeking behavior of the Malaysian population.
--- Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijerph19063193/s1, Table S1: Logistic regression model for self-rated health as poor to very poor among Malaysian adults who reported sickness. Informed Consent Statement: Written informed consent was obtained from each respondent prior to the interviews, and the confidentiality of the respondents involved was assured throughout the conduct of the NHMS 2019. --- Author Contributions --- Data Availability Statement: To protect the privacy of the respondents, the data set that supports the findings of this article is not publicly available. Requests for data can be made to the Head of the Centre for Biostatistics & Data Repository, National Institutes of Health, Ministry of Health Malaysia, on reasonable request and with the permission of the Director General of Health, Malaysia. --- Conflicts of Interest: The author(s) declare that they have no conflicts of interest with respect to the research, authorship, and/or publication of this article. | Understanding care-seeking behavior among urban and rural populations can help to support the planning and implementation of appropriate measures to improve health in the community. This study aims to determine the factors associated with health-seeking behavior among Malaysian adults in urban and rural areas who reported sickness. This study used data on Malaysian adults aged 18 years and over from the National Health and Morbidity Survey 2019, a cross-sectional, national household survey that targeted all non-institutionalized residents in Malaysia. Respondents' characteristics and health-seeking behavior were described using complex-sample descriptive statistics.
Multivariable logistic regression analysis was conducted to examine the association between potential factors (sociodemographic characteristics, enabling, and health need) and health-seeking behaviors (seeking treatment from healthcare practitioners and self-medication). A total of 10,484 respondents, estimated to represent 18.9 million Malaysian adults aged 18 years and over, were included in the analysis. The prevalence of seeking treatment from healthcare practitioners and of self-medication among Malaysian adults with self-reported sickness was 57.3% and 23.3%, respectively. Among both the urban and rural populations, adults with self-reported sickness who rated their health as poor to very poor were more likely to seek treatment than those who rated it good to excellent. However, among the urban population, those who rated their health as poor to very poor were less likely to self-medicate. Among the urban population, government employees were more likely to seek treatment, and being without formal education significantly increased the likelihood of self-medication. Among the rural population, those with at least one long-term condition were more likely to seek treatment than those with none. Understanding the factors which influence health-seeking behavior among the urban and rural population could close the gaps in healthcare utilization among the population in Malaysia. |
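As a rough illustration of how such a multivariable model relates binary predictors to a binary outcome, the sketch below fits a plain (unweighted) logistic regression by Newton-Raphson on simulated data and reads off adjusted odds ratios. Everything here is hypothetical: the predictor names (`poor_health`, `govt_employee`), effect sizes, and data are invented for illustration. The study itself analyzed the NHMS 2019 data with complex-sample-weighted procedures, which this minimal sketch does not reproduce.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Fit an unweighted logistic regression by Newton-Raphson; returns coefficients."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted probabilities
        W = p * (1.0 - p)                       # IRLS weights
        H = X.T @ (X * W[:, None])              # Hessian: X' W X
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Simulated respondents (hypothetical data, NOT the NHMS 2019 sample).
rng = np.random.default_rng(0)
n = 2000
poor_health = rng.integers(0, 2, n)    # 1 = self-rated poor/very poor health
govt_employee = rng.integers(0, 2, n)  # 1 = government employee
true_logit = -0.5 + 1.0 * poor_health + 0.6 * govt_employee
sought_treatment = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

beta = fit_logistic(np.column_stack([poor_health, govt_employee]), sought_treatment)
odds_ratios = np.exp(beta[1:])
print(odds_ratios)   # both > 1: each simulated factor raises the odds of seeking care
```

An odds ratio above 1 means the factor is associated with higher odds of seeking treatment after adjusting for the other predictor, which is how findings such as "government employees were more likely to seek treatment" are typically expressed.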
Introduction Since the publication of the Zimbardo Time Perspective Inventory (ZTPI) in 1999, hundreds of time-perspective (TP) studies have been conducted using this tool. The TP construct is conceptualized as a "... nonconscious process whereby the continual flows of personal and social experiences are assigned to temporal categories, or time frames, that help to give order, coherence, and meaning to those events." [1] (p. 1271), [2]. Time perspective is understood as the unique pattern which helps us assess and categorize our life experiences and is strongly related to every possible life domain, including psychological well-being, personal and professional achievements, and personality traits [3][4][5][6][7][8]. Taking this into consideration, it is important to find out whether TP is a stable personality trait or whether it can be changed, e.g., to improve one's life situation [9]. We are familiar with the recent literature criticizing TP theory and its shortcomings [10,11]. The ZTPI was the only inventory measuring TP in the Ukrainian language available at the time of the study. Our study complements the enormous amount of empirical research applying the ZTPI in different languages and cultures [12]. According to cognitive social learning theories, we first learn to reflect on our past events, plan for the future, or assess the current situation from our parents or other significant adults [13][14][15][16][17]. The important role of the family and its socioeconomic status in one's time-perspective development is well documented in research on children who grow up in the deprived environment of an orphanage or in a socioeconomically deprived family. Such conditions often lead to a time perspective which is more biased towards the present, while the future time orientation is underdeveloped [18][19][20]. The family's socioeconomic status has been shown to influence the development of a future time orientation in adolescents.
The subjective orientation towards the future depends on the feasibility of plans made and the level of endorsement from a certain culture with its particular characteristics [21][22][23][24][25][26]. For example, Seginer and Lens [27] discuss how the endorsement of cultural demands defines the strength of the future time orientation in the domain of education for adolescent girls in Israel. A comprehensive review by Fieulaine and Apostolidis [28] showed that a privileged socioeconomic status in adulthood is linked to more pronounced past and future time orientations and more positive attitudes towards them compared to individuals with a lower socioeconomic status. According to the authors, focusing on the present orientation in individuals with a lower socioeconomic status may be an adaptive strategy to cope with disadvantageous situations during crises and insecurity when the future is uncertain [28,29]. These studies indicate that the individual's time perspective is strongly rooted in the social context of personal life experiences. There is clear supportive evidence that the characteristics of a person's background culture and socioeconomic status shape an individual's time perspective according to situational demands and the possibility of future rewards. An individual's time perspective should, in principle, be modifiable in line with a profile which is optimal for psychological well-being and effective functioning. However, the simple fact that the time perspective is connected to different social factors does not mean that the impact of these factors can be easily outweighed. Attitudes towards time and behavioral patterns learned in early childhood and practiced according to personal experience for decades are difficult to change. This suggests that the time perspective may be a relatively stable individual characteristic throughout one's lifetime. How stable is an individual's time perspective?
Many studies supporting the time perspective as a stable individual characteristic have shown strong correlations between specific time orientations and different personality traits [30][31][32]. The different time-perspective dimensions (as measured by the ZTPI) are linked to all of the 'big five' personality traits (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism), as well as to locus of control, optimism, self-efficacy, aggression, impulsivity (especially sensation seeking), and many more [32][33][34][35][36], indicating the construct validity of the time perspective. One of the first attempts to assess the stability of time perspective was made by Luyckx and colleagues [37], who studied the time-perspective dynamics of freshman students over a four-month interval. The time perspective and an individual's self-identity formation mutually reinforced one another, which led to significant changes in both constructs among those young people after a short period of time. Quite different results were found by Earl and colleagues [38]. They tested 367 retired Australians, all of whom completed the ZTPI three times, with two nine-month intervals in between. There was no strong modulation in the five ZTPI scales over the course of the 18 months, pointing to the stability of the time perspective construct in elderly individuals. The data were collected under similar national and global economic circumstances all three times [38], indicating that there were no significant external forces that might have led to changes in the time-perspective profiles. A different attempt was undertaken by Wiberg and colleagues [39], who investigated the stability of the balanced time perspective (BTP), the ideal combination of time orientations that enable one to flexibly switch among different time dimensions according to personal needs and situational demands [6,8,36,40]. 
Seven participants with a BTP profile were tested again after a year and a half. Four of them had a stable BTP, whereas the time perspective of the other three had changed. One participant had increased his level of balance, and the profiles of two others indicated a decrease in their levels of balance [39]. Although the small number of participants makes it difficult to generalize the findings, this study raised important questions about how to measure the stability of the time perspective, namely by comparing the separate time orientations over a particular time interval and by exploring the dynamics of the whole system of time perspectives. Importantly for our study, TP is known to change in people suffering loss and migration. In Syrian refugees in Greece, an increased past-negative and present-fatalistic and a decreased future time perspective were associated with post-traumatic stress disorder [41]. These three time orientations correlate similarly with people's general life satisfaction [42]. The present study explored whether time perspectives change significantly under radical modifications of life circumstances, such as political and economic factors. As shown in earlier studies, the time perspective correlates with a person's social and economic status [29,43]. We therefore assumed that significant changes in social and economic living conditions would provoke a change in time perspectives. The time perspective of Ukrainian students was observed before and during the period of profound national, social, economic, and political crises starting in 2014. The pre-crisis period was characterized by relative social, economic, and political stability.
The crisis period was marked by a high level of social, economic, and political turbulence: the national currency rapidly depreciated by a factor of three, an administrative region was annexed, two other administrative regions were isolated by the war line, and the level of unemployment increased greatly. All these factors caused huge waves of mass migration from the annexed regions to other Ukrainian regions and abroad. According to official numbers, more than a million Ukrainians left their homes during the first two years of the crisis that began in 2014. Differences in social, economic, and political characteristics between the pre-crisis and crisis periods were sufficiently significant to expect changes in residents' time perspectives if the TP construct is actually sensitive to situational factors. The TP data analyzed in the study were gathered by Ukrainian researchers from early 2010 to 2018 as part of an attempt to collect norm data of the ZTPI for Ukraine (see below). After the beginning of the crisis, a cross-sectional design with two time points was applied to investigate whether there were any differences in time perspectives measured under different socioeconomic circumstances in two different Ukrainian regions. We were unable to implement a strict longitudinal design with intra-individual measurements across the two time points: what happened in Ukraine could not be anticipated, which is why the presented study could not be planned in advance. --- Materials and Methods --- Participants A sample of 1588 individual students participated in a series of studies at different universities in Ukraine.
Two regional sub-samples were formed: (1) 1037 residents from the Lviv region, the most western Ukrainian region and the one most distant from the war zone (Table 1); (2) 551 residents of the most endangered regions, the eastern Ukrainian regions closest to the war zone and the southern coastal region, which has been plagued by a highly unstable internal situation since the beginning of the crisis and was therefore psychologically similar to those close to the armed conflict (Table 2). The decision to analyze the samples separately was based not only on the proximity to the war zone and the internal socio-political situations in the regions, but also on their residents' political views. According to sociological research conducted during the pre-crisis period and after the crisis had begun, up to 90% of western residents supported the changes initiated at the beginning of the crisis, while the south-eastern Ukrainian regions held quite opposite views. The majority of these residents (about 70%) were clearly against the ongoing changes and named the revolution one of the most negative events in Ukrainian history (Democratic Initiatives Foundation) [44][45][46]. Since the time perspective reflects individual views of one's own past, present, and future [1,47] and is connected to one's life situation, the different perceptions and views of the ongoing sociopolitical situation created an important distinguishing factor warranting separate analysis of the data from the different regions. The sample for the western Ukrainian region comprised students from two national universities (Ivan Franko National University of Lviv and the Lviv Polytechnic National University) who were residents of the Lviv region. The total sample consisted of 1037 students (41% male and 49% female; the gender of 10% of the participants was unknown because their data were taken from studies which did not record it).
A total of 432 students were questioned in 2010-2011 and 116 in 2012-2013 (pre-crisis), and 489 students comprised the "crisis" group, surveyed in 2015-2016. We decided to distinguish two pre-crisis subgroups, since the interval between data collections (1 year 8 months) was almost equal to that between the second pre-crisis subgroup and the crisis group (1 year 4 months). The surveys contributing data to this study sample were performed in group settings. More detailed characteristics of the sample are presented in Table 1. The sample for the south-eastern Ukrainian region consisted of students from different universities located in the Dnipro, Kharkiv, and Odessa regions. A total of 154 were male and 397 female, with a mean age of 19.05 years. A total of 279 were questioned in 2013 (the pre-crisis year) and 272 during 2014-2015 (after the start of the armed conflict, which marked the beginning of the socioeconomic and political crisis). More detailed characteristics of the sample are presented in Table 2. The decision to explore the time perspectives separately by region was based on three premises: (1) the objective severity of the exposure to danger due to proximity to the war zone or the internal instability of the region; (2) the regional differences in the residents' political views, which represented a portion of their overall subjective views underlying individual time perspectives; and (3) the significant differences in time orientations between the regions during the pre-crisis period. The independent variable of interest was the period of testing.
--- Instrument The participants completed a form on their age, sex, and place of residence, as well as the Zimbardo Time Perspective Inventory in its Ukrainian or Russian adaptation, depending on their native language (Ukrainian adaptation by Senyk [48]; Russian adaptation by [49]). Participants responded to each of the 56 items on a 5-point Likert scale (1 = very untrue of me; 5 = very true of me). The results were calculated according to the updated keys for the Ukrainian and Russian versions of the ZTPI (validated on Ukrainian- and Russian-speaking Ukrainians) [50]; the updated Ukrainian key differs slightly from the first version on the present-fatalistic scale. The inventory measures five dimensions of the time perspective. The past-negative scale reflects a generally negative, aversive view of one's own past. Due to the reconstructive character of the past, such negative attitudes could reflect real experiences of negative or traumatic moments in the past, a negative reconstruction of an actually not-so-aversive past, or a combination of both. The present-hedonistic scale reflects a hedonistic, risk-taking attitude toward life and presupposes enjoying the present moment with little concern for the further consequences of one's behavior. The future scale measures a general future orientation, which suggests that behavior is dominated by the effort made to achieve set goals and possible rewards in the future. The past-positive scale relates to fond and sentimental attitudes towards the past, when past experiences and times are remembered as something pleasant, with a tendency towards nostalgia. The present-fatalistic scale reveals a fatalistic, helpless, and hopeless attitude towards the future and life in general; individuals with such a time orientation believe in fate and are certain that they cannot influence present or future events in their lives [1].
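To make the scoring procedure concrete, here is a minimal sketch of how mean scale scores can be computed from 5-point Likert responses, with reverse-keyed items recoded as 6 − r. The item-to-scale assignments and the reversed-item set below are placeholders invented for illustration; they are not the published or updated ZTPI keys.

```python
# Hypothetical item-to-scale key (illustrative only, NOT the ZTPI key).
SCALES = {
    "past_negative":      [4, 5, 16, 22],
    "present_hedonistic": [1, 8, 12, 23],
    "future":             [6, 9, 13, 21],
    "past_positive":      [2, 7, 11, 15],
    "present_fatalistic": [3, 14, 35, 38],
}
REVERSED = {9, 24, 25, 41, 56}  # hypothetical reverse-keyed items

def score_ztpi(responses):
    """responses: dict mapping item number (1..56) -> rating 1..5.
    Returns the mean score per scale, recoding reverse-keyed items as 6 - r."""
    def keyed(item):
        r = responses[item]
        if not 1 <= r <= 5:
            raise ValueError(f"item {item}: rating {r} outside 1..5")
        return 6 - r if item in REVERSED else r
    return {scale: sum(keyed(i) for i in items) / len(items)
            for scale, items in SCALES.items()}

answers = {i: 3 for i in range(1, 57)}  # a respondent answering "neutral" throughout
answers[9] = 5                          # endorse one reverse-keyed item
scores = score_ztpi(answers)
print(scores["future"])  # (3 + (6 - 5) + 3 + 3) / 4 = 2.5
```

Each respondent thus receives five scale means in the 1–5 range, which are the quantities compared across regions and periods in the Results below.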
--- Results First, we compared time orientations from the pre-crisis period in the western region and in the south-eastern regions. Student's t-test showed significant differences in the present-hedonistic and future time orientations (t = -7.34 and t = -3.32, respectively, p < 0.001) between regions. The hedonistic and future time orientations were more pronounced in the eastern regions (compare values in Tables A1 and A2 in Appendix A). In the post-crisis period, the south-eastern regions scored significantly higher on both the negative and positive past orientations (t = -4.63 and t = -4.00, respectively, p < 0.001) and significantly higher on both the hedonistic and fatalistic present orientations (t = -9.20 and t = -7.16, respectively, p < 0.001), while showing no difference in future time orientation compared to the western region (Tables A1 and A2 in Appendix A). We then applied ANOVAs to analyze the variance in each time orientation separately across time and region, controlling for gender. --- Western Region Figure 1 shows that there were no significant differences in time-orientation scores between the first and the second pre-crisis periods; the main change in time perspective was observed in the third period, which was characterized by the socioeconomic crisis. The future time orientation increased during the crisis period, while the scores on the present-hedonistic and past-positive scales decreased. There was a decrease in the scores on the present-fatalistic scale throughout all three periods.
No dynamics were observed for the past-negative time orientation. The two pre-crisis subgroups were united into one group of 548 participants and then compared with the crisis group (N = 489). Following separate ANOVAs, the future time orientation was significantly higher in the crisis period compared to the pre-crisis period (F(1, 929) = 10.88, p = 0.001, ηp² = 0.012). The present-fatalistic (F(1, 929) = 11.87, p = 0.001, ηp² = 0.013), present-hedonistic (F(1, 929) = 28.57, p < 0.001, ηp² = 0.030), and past-positive (F(1, 929) = 27.46, p < 0.001, ηp² = 0.029) orientations were lower during the crisis compared to the pre-crisis period. No difference for the past-negative time orientation was found over time (F(1, 929) = 0.18, p = 0.668, ηp² < 0.001). The mean values for time orientations in the pre-crisis and crisis groups can be found in Appendix A.
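For a single-factor design such as the pre-crisis versus crisis comparison, the reported F statistic and partial eta squared (ηp² = SS_between / (SS_between + SS_within)) can be computed as sketched below. The scores are simulated: the group sizes echo the paper's N = 548 and N = 489, but the means and standard deviations are invented, so the printed values are not the study's results.

```python
import numpy as np

def one_way_anova(groups):
    """Return the F statistic and partial eta squared for a one-way design.
    With a single factor, partial eta^2 equals SS_between / (SS_between + SS_within)."""
    grand = np.concatenate(groups)
    grand_mean = grand.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = grand.size - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_p2 = ss_between / (ss_between + ss_within)
    return f_stat, eta_p2

# Simulated future-orientation scores (hypothetical means and SDs).
rng = np.random.default_rng(1)
pre_crisis = rng.normal(3.4, 0.6, 548)
crisis = rng.normal(3.6, 0.6, 489)
f_stat, eta_p2 = one_way_anova([pre_crisis, crisis])
print(f_stat, eta_p2)
```

Note that the study's ANOVAs also controlled for gender, which this one-way sketch omits; with a covariate, ηp² is computed from the effect and error sums of squares of the full model.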
--- South-Eastern Region Figure 2 shows that both the past-negative and present-fatalistic time orientations were higher during the crisis period, while the scores for the future time orientation were lower compared to the pre-crisis period. The mean scores for time orientations in the pre-crisis and crisis groups can be seen in Appendix A. The variance for each time orientation was calculated separately with ANOVAs. This revealed significantly higher values for the past-negative (F(1, 547) = 16.17, p < 0.001, ηp² = 0.029) and present-fatalistic (F(1, 547) = 25.63, p < 0.001, ηp² = 0.045) time orientations after the onset of the crisis compared to before. Values for the future time orientation (F(1, 547) = 6.70, p = 0.010, ηp² = 0.012) were significantly lower during the crisis than before. No differences for the present-hedonistic and past-positive time orientations were identified (F(1, 547) = 1.09, p = 0.298, ηp² = 0.002, and F(1, 547) = 3.09, p = 0.079, ηp² = 0.006, respectively). --- Discussion The present study examined variations in Ukrainian youths' time perspectives measured under the different socioeconomic and political conditions prevailing during the pre- and post-crisis periods surrounding the year 2014, eight years before the war started in 2022.
Stated in longitudinal terms, the time perspectives of the Ukrainian youth shifted towards a decrease in the future and an increase in the past-negative and present-fatalistic time orientations with the beginning of the socioeconomic crisis in the most unstable regions closest to the war zone (south-eastern). These findings coincide with the general concept of time perspective, which states that the future time orientation decreases in times of material or psychological deprivation, whereas the present time orientation becomes more pronounced [29,43,51].
Such a change in time perspective helps people adapt effectively to new circumstances when distant outcomes are impossible to anticipate and novel challenges need to be dealt with [29]. In the region most distant from the armed conflict in 2014/2015, the western Ukrainian region of Lviv, the direction of the change in time perspective was the opposite. After the crisis had started, an increase in the future and a decrease in the present-hedonistic and present-fatalistic time orientations were observed. The past-positive orientation also decreased. Apart from the size of the identified effects, another argument for the validity of the findings is their consistency. No difference was found between the two pre-crisis periods, although the time interval between them was almost equal to the time interval between the second pre-crisis and the crisis periods (1 year 8 months and 1 year 4 months, respectively). The two different pre-crisis periods in the western-region sample did not show a significant difference in time orientations. We can conclude that it is not the time interval per se that contributes to the differences in time perspective, but the visible changes in life circumstances between measurement periods. One of the functions of the time perspective is to categorize one's personal and social experience [1]. If the social circumstances change enough to notably influence personal experience, the time perspective adapts accordingly, whereas it does not change significantly during stable conditions. This conclusion coincides with the findings of Luyckx et al. [37] and Earl et al. [38]. In their longitudinal study, Luyckx et al. [37] showed that the time perspective changed significantly under the intense influences of social experience, even after a comparably short interval. Freshman college students showed changes in time perspectives, shifting towards an increase in the future and a decrease in the present time orientations after just four months.
The authors explained that the dynamics of time perspective were due to a new social role of college students, who were in the process of preparing themselves for their careers and future adult lives [37]. The study by Earl et al. [38] showed that the time perspective does not change significantly if measured under stable conditions, even after a year and a half. The authors conducted three measurements of the time perspectives of 367 retired individuals at nine-month intervals, each under similar global economic circumstances. Finding no changes over time, Earl et al. [38] concluded that time perspectives are difficult to change. We, however, argue that significant changes in time perspectives occur only if visible changes in personal or social experience happen. The question is why different and even opposite directions in time-perspective changes were observed in the two Ukrainian regions. The future orientation decreased, and past-negative and present-fatalistic time orientations increased in the south-eastern regions. In the western region, the future time orientation increased, and the present-fatalistic, present-hedonistic, and past-positive time orientations decreased. One possible answer lies in the economic situation. People closer to the war zone in the south-eastern regions might have experienced more severe consequences of the economic decline, whereas almost all economic sectors in the western region remained unchanged, including tourism from abroad. The proximity of the south-eastern regions to the war zone might also have undermined the basic safety of the inhabitants. This could account for the south-eastern residents' shift in time perspectives towards an increase in the past-negative and present-fatalistic and a decrease in the future time orientations.
When life becomes endangered by forces one cannot control, the time perspective adapts through an increase in fatalistic attitudes towards the present and a decrease in the future outlook, which no longer makes sense due to its total unpredictability. A further influence on the time perspective may have been expectations of political developments. As mentioned in the introduction, the views of the crisis and its political ramifications differed between residents of the western and the south-eastern Ukrainian regions. The western regions were characterized by mostly positive attitudes towards the events that preceded the crisis and their political consequences, expressing belief in a better future, in accordance with the sociological survey (Democratic Initiatives Foundation) [46]. This corresponds to the identified increase in the future time orientation among residents of the western region. The majority of residents of the south-eastern regions had mostly negative views of those events (Democratic Initiatives Foundation) [46], which corresponds to the revealed increase in present-fatalistic and past-negative time orientations. Our study shows that time perspectives change significantly due to notable changes in social, economic, and political processes. However, it is probably not the change in the situation itself, but the specific combination of factors and perceptions of them, that influences the time perspective. Among various predictors, economics, safety needs, and political preferences played a role in the observed differences in time perspectives measured before and after the start of the socioeconomic, political, and military crisis. The strength of this study is the large data set, which allowed us to assess differences in time perspectives measured under diverse socioeconomic and political conditions over time. However, there are also limitations.
The data used in this study stemmed from different surveys with the aim of collecting norm data for the ZTPI for Ukraine. It was impossible to plan the study in advance; as a result, a strictly longitudinal within-subject design was not possible. Therefore, any causal interpretations concerning the effect of the period of testing on time perspective should be treated with caution. The changes in time perspectives due to changes in social, economic, and political conditions are suggestive, but longitudinal, within-subject studies are still needed to complement our findings. Another limitation is the fact that we did not include additional information about participants' subjective views of the ongoing situation. We could not examine whether there were significant correlations between time perspectives and political attitudes. All we could rely on were the results of corresponding sociological surveys. We referred to average population data by assigning students' time perspectives to average sociological, economic, and political indices in a region. A future study should include questions about feelings and judgments of personal welfare, safety, and personal expectations, as well as expectations of the country's political future. Finally, the study sample comprised students only and was characterized by a predominance of women, which could have impacted the results. Advance planning and the inclusion of different age groups and people from different social strata would increase the credibility of the research. Social, economic, and political turmoil cannot be easily anticipated, especially not for the sake of psychological studies. Even now, in 2022, the war took most people by surprise. We relied on serendipitous data, which we analyzed to the best of our knowledge. --- Conclusions The article examined differences in Ukrainian youths' time perspectives measured under diverse socioeconomic conditions.
The time perspectives tested during the social, economic, and political crises in 2014/2015 varied significantly from the ones previously measured under stable circumstances. The revealed differences did not fall into one pattern, but depended on the specific characteristics of the crisis and individual perceptions of it. In the south-eastern regions, which suffered severe consequences of the crisis, the time perspectives shifted towards a decrease in the future and an increase in the past-negative and present-fatalistic time orientations. In the western regions, where the crisis situation was characterized by a predominantly positive political attitude and was not as severely affected socioeconomically, the observed shift in the time perspectives took a positive direction, with a decreased emphasis on the negative present time orientations and an increase in the future orientation. The results of the presented study support the view of time perspective as a dynamic system which adjusts to visible changes in the socioeconomic and political characteristics of the living situation. --- Data Availability Statement: The data presented in this study are available in the Supplementary Materials. --- Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/ijerph19127465/s1. Study data: ZTPIUkraine-Senyk. Funding: This research received no external funding. --- Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki. The scientific and value-oriented principles outlined by the Ukrainian Catholic University, Lviv, served as a fundamental ethical frame of reference for our study: On the prevention of academic plagiarism and other types of violations of academic integrity in the educational process, pedagogical and scientific activities in the Institution of Higher Education.
At the time of the study, the Ukrainian Catholic University, like the other universities, had no ethical review board (IRB); one is in the process of being institutionalized, but this has been delayed by the COVID-19 pandemic and the beginning of the war. --- Informed Consent Statement: Informed consent was obtained from all subjects involved in the study through their voluntary participation, which consisted of filling out the ZTPI questionnaire in a group setting at an appointed time at the universities. --- Conflicts of Interest: The authors declare no conflict of interest. --- Appendix A The characteristics of the individual's time perspective in relation to changes in social, economic, and political conditions are of major conceptual interest. We assessed the time orientations of 1588 Ukrainian students living in two different regions (western and south-eastern Ukraine) with the Zimbardo Time Perspective Inventory (ZTPI) before (2010–2013) and during (2014–2016) the socioeconomic, political, and military crises which started in 2014, eight years before the war in 2022. We applied ANOVAs with the ZTPI dimensions as dependent variables and the period of testing (pre-crisis, post-crisis) as an independent variable for the two Ukrainian regions separately. The time perspectives of residents in the region most distant from the war zone (western), who positively assessed the change in the political situation around 2014, increased in the future time orientation and decreased in the present-fatalistic, past-positive, and present-hedonistic time orientations. The time perspectives of residents in the regions closest to the war zone (south-eastern) decreased in the future and increased in the past-negative and present-fatalistic time orientations, reflecting their negative judgments of the events.
It is not the crisis itself, but the specific social, economic, and political factors and evaluations of them, that define the time perspectives, which are flexible and adjust to changes during extreme life circumstances.
Introduction Urban fish vending has become a critical source of livelihood for a significant population of urban dwellers, particularly in rapidly growing cities like Dar es Salaam. The rising urban population has led to increased demand for food, including fish products, creating a substantial market base for various commodities. Consequently, informal fish vending has emerged as an employment opportunity for many individuals, providing them with a reasonable income and contributing to the local economy (URT, 2016). Tanzania's national population census conducted in 2022 indicated that 30% of the approximately 61,741,120 people live in urban areas, where demand for food is high (NBS, 2022). To supply enough nutritious food to the rapidly growing population in Dar es Salaam, fishing and trade in fisheries products should play a major role in the livelihoods of urban dwellers (Harper et al., 2020; Kleiber et al., 2015; Fröcklin et al., 2013; Getu et al., 2015). This further suggests that the presence of this population is a market base for various commodities, including fish and other related products. In order to supply fish in the required quantity, a reasonable number of people have seen fish vending as an employment opportunity and have been making a reasonable income out of this activity (Weeratunge et al., 2014; Issa, Mazana, Kirumirah, & Munishi, 2022). In addition to providing employment opportunities for a substantial number of people, fish vending contributes significantly to the country's economy (Barsoum, 2021; Sambuo & Kirama, 2018; URT, 2016). --- In Dar es Salaam, informal fish vending has become a common activity for women, providing them with opportunities for economic empowerment. Several initiatives have been undertaken to support and improve informal fish vending in the city.
Although government grants have been extended to women vendors to provide an avenue for capital availability, the majority have been unable to access the credit due to bureaucratic procedures (Karani & Faillure, 2020). Moreover, fish vending kiosks have been set up to create employment opportunities for families as well as the community. In addition, the establishment of the Tanzania Women Fish Workers Association (TAWFA) and the development of the National Fisheries Policies (NFS) of 2015 have aimed at promoting gender mainstreaming, equity in resource access, and awareness in fisheries and aquaculture (Kimasa, 2013; FAO, 2022). Indeed, El-Azzazy (2019) examined the street vending challenges and opportunities faced by women fish vendors in Fayoum, while Peke (2013) elaborated the challenges faced by women fish vendors in Mumbai; however, no such study has been conducted in the Ferry market, Dar es Salaam. Therefore, this study aims at addressing this gap through the following research questions: What are the threats experienced by women fish vendors? What coping strategies are adopted by the women vendors, and what are the implications for their resilience? In answering these questions, the study sheds light on the livelihood contributions of fishing activities among women and identifies barriers hindering their meaningful participation. The study is organized as follows: the literature review presents theoretical and empirical studies that shed light on the linkage between theory and practice; the third part contains information on the research methodology; after the analysis and findings of the study, conclusions and recommendations are provided. --- Literature Review This part includes theoretical and empirical literature. It discusses different studies undertaken previously in connection with the theory and the topic and helps to identify the gap for the current study.
--- Theoretical and Conceptual Background This paper is guided by the multi-layered social resilience framework, which draws from various disciplines such as ecology, psychology, socio-anthropology, and sustainable livelihoods (Obrist, Pfeiffer, & Henley, 2010; Carpenter et al., 2001; Holling, 1973; Luthar, 2003; Masten, 2001; Bourdieu, 1984; DfID, 2000). The framework emphasizes the examination of resilience building in relation to different threats and the competencies needed to address them. It suggests that actors can mobilize economic, social, and cultural capital to increase their power and ability to cope with threats (Obrist et al., 2010). The framework also distinguishes between reactive capacities, which are immediate responses to threats, and proactive capacities, which involve anticipating and planning for threats in advance. The framework highlights the importance of positive adjustment and learning in building resilience, particularly in challenging livelihood conditions. It recognizes different forms of capital, including social, economic, cultural, and symbolic capital, as prerequisites for resilience building (Munishi & Casmir, 2019). The framework prompts researchers to be explicit about the specific threat or risk being examined and whether the affected individuals or groups are aware of these threats. It also emphasizes the multi-layered nature of resilience building, involving networks at various levels, from the individual to the international (Obrist et al., 2010). Furthermore, the framework takes a strengths-oriented approach, focusing on support from institutions, rather than a deficit approach that emphasizes risk and inability to cope. It promotes a positive perspective on the ability of urban food street vendors to adjust to threats such as evictions and reallocation (Dongus, Pfeiffer, & Metta, 2010).
Lastly, the framework offers a solution-oriented and mitigation-focused approach, which can guide researchers and policy makers in identifying corrective measures to enhance the resilience of food street vendors in the face of evictions and reallocation. --- Empirical Review --- Threats associated with the fish vending business among women fish vendors One of the threats related to urban informal fish vending concerns vendor evictions and reallocations worldwide, as evidenced in Asia, Latin America, and Africa. These continued evictions have enormously disturbed the business of street vendors in urban areas (Kirumirah & Munishi, 2022; Munishi & Casmir, 2019). Urban fish vendors have also experienced fish scarcity due to the centralization of landing centres, unhealthy competition from newly entered fish merchants, and the new entrance of male fish vendors into domestic markets (Kantor and Kruijssen, 2014; FAO, 2022). Another threat has been exploitative practices at various stages of the fisheries business, from shore to domestic market. This has been coupled with the absence of infrastructure and amenities in the fish marketplaces (Kantor and Kruijssen, 2014). Other challenges experienced by urban-based women fish vendors are harassment from various authorities, deflated fish prices, denial of public transportation, excessive rates of interest charged by money lenders, unhygienic market conditions, lack of facilities for rest and refreshment, and others (Munishi & Casmir, 2019). Due to the nature of their work, the fish-vending women are neither able to care for their children properly nor able to lead a peaceful family life (Issa et al., 2022). Women fish vendors face an additional threat stemming from their lack of legal status, as they have no licenses or registered vendors' identification cards (Kantor and Kruijssen, 2014). Consequently, they lack a secure claim to space from which to vend, whether in markets or on streets.
Another challenge is lack of access to credit through microfinance institutions or other support services (Munishi & Casmir, 2019). The vendors may also be vulnerable to harassment and exploitation (Munishi & Casmir, 2019). Women involved in fish vending also face the threat of social stigma, which emanates from a belief in some communities that fish vending is mainly associated with men (Munishi & Casmir, 2019; Aswathy & Kalpana, 2018). Relatedly, fish vendors lack knowledge about financial institutions relevant to small business owners, which are very few (Munishi & Casmir, 2019). Most financial institutions deal with credit-worthy customers, and women fish vendors often do not qualify for loans. Inadequate education also plays a significant role in limiting women fish vendors' access to capital as well as to markets for their products (Shayo, Munishi & Pastory, 2022). In most cases, limited government support has impeded women fish vendors from coping with these threats. --- Research Methodology --- Study area The study was conducted in the Ferry Fish Selling Market located in the Ilala District of Dar es Salaam City, which is a well-known centre for oceanic fish and fish products in the area. --- Study Design The study employed a qualitative design with a phenomenological inquiry strategy. This approach, as suggested by Creswell (2014), aimed to document the experiences of women fish vendors regarding the threats they face, their coping mechanisms, and the resilience implications within their specific context. --- Sampling Techniques and Data collection Convenience sampling was utilized to recruit fish vendors who were willing to share their views on the study topic. In-depth interviews were conducted with a total of 30 women fish vendors from the Ferry Fish Selling Market in Dar es Salaam. Probing questions were used when necessary to gather more detailed information.
The researchers facilitated the discussions, allowing participants to contribute their ideas, while research assistants recorded the proceedings. --- Ethical consideration The study adhered to ethical principles for qualitative research. Permission was obtained from market authorities, and all responsible women fish vendors were informed about the research. Participants were informed about the research considerations prior to sharing their information. Ethical rules, including the right to remain anonymous and to withdraw from the study, were upheld. --- Data analysis The information gathered from the interviews was transcribed and saved as text documents. Swahili transcriptions were then translated into English to facilitate analysis. Content analysis of the transcriptions was performed using MAXQDA 10 [VERBI Software, Marburg, Germany]. The researchers read and re-read the data to familiarize themselves with the collected information and capture relevant issues. Open coding was used to ensure that no critical issues related to the guiding framework were overlooked. The researchers used the main issues derived from the framework and identified important supporting content by coding the data. This information was then used to establish the main themes. --- Validity and Reliability To ensure the reliability and validity of the findings, the researchers practiced peer debriefing. This involved engaging more than one peer or qualified expert to objectively relate the obtained themes to the predetermined ones and assess the extent of agreement or divergence. A qualified, impartial colleague reviewed and assessed all coded segments, as well as the methodology used to derive the final themes. --- Findings and Discussions A: Fish Vending Business Threats Among the Women Fish Vendors Based on the data collected and analysed, it was noted that one of the threats facing the women fish vendors was stigma emanating from patriarchy.
This is because, culturally, fishing activities are believed to be typically male-dominated, and women involved in fish vending were therefore looked at with a suspicious eye. Moreover, women found themselves in difficult work environments because the majority of the fish sellers were men, who sometimes exploited women in various ways, such as harassment from various parties and denial of public space and transportation (Aswathy & Kalpana, 2018). Another threat experienced by the women vendors was a scarcity of fish, which was mainly caused by seasonality as well as by men and other powerful traders controlling the availability of fish in the market. Male and big business traders bought fish in large quantities and rationed it as it pleased them. In this way, they controlled the market, and women vendors, who mainly possessed little capital, experienced fish scarcity. Previous studies have also contended that women vendors experienced fish scarcity due to seasonality factors, centralization of landing centres, unhealthy competition from newly entered fish merchants, and the new entrance of male fish vendors with mopeds into domestic markets (FAO, 2022). Women also experienced the additional threat of higher fish prices, which mainly emanated from fish scarcity. Indeed, it was further observed from the data that during fish scarcity prices rise to more than double, which makes it difficult for customers to afford fish. This threat hindered women from buying reasonable amounts of fish for sale due to their relatively small business capital. Most of the fish vendors have inadequate funds for business and record only small profits, a situation that further downgrades their businesses.
These findings corroborate well with some previous studies in Tanzania, which mentioned inadequate business capital as one of the acute business difficulties that street vendors, and notably female fish vendors, face in Dar es Salaam (Munishi & Casmir, 2019). Furthermore, women experienced the threat of inadequate business capital to expand their businesses. One of the respondents provided a detailed explanation, stating that their inability to secure loans is determined not solely by their personal network's ability to find someone to guarantee for them, but also by the specific loan amount set by the lender: "The inability to access loans from financial institutions made our business difficult to undertake, so we establish a business with very small capital acquired through collateral means. The threat remains that the size of capital you get does not satisfy your initial plans, and this in return makes the business hard to undertake. We request the Government and other stakeholders to support our efforts so that we can improve our business and family life as well..." (Interview, March 2023). The various forms of gender-related violence, such as sexual harassment, that the female fish vendors faced were worsened by the lack of social protection and authority support. As a result, women fish vendors tended to forego their business due to the constant assault that they faced. It was further revealed that some women vendors were sexually exploited as one of the conditions for men either to buy fish from the women or to sell fish to the women at relatively low prices, especially during times of fish scarcity characterized by higher fish prices. Furthermore, the women explained that they receive slow responses and inadequate support from the relevant authorities when reporting cases of sexual abuse in workplaces. Almost similar threats have been observed in India (Kantor and Kruijssen, 2014).
It also corroborates with another study by Aswathy and Kalpana (2018), which maintains that informal female fish vendors constantly face sexual harassment and assault and are considered to be among the vulnerable social groups. "Some men are always disturbing us for the purpose of seeking relationships. Even if you are not interested, they tend to force and harass us, and at the end of the day you find yourself in the trap. They use a weakness we have, such as low capital, or threats we face, like accessing fish, especially during the low catch, so they tend to supplement either capital or fish for us. Unfortunately, some of us agree with the situation and establish extra-marital affairs which put us at many risks, such as contracting diseases, marriage conflicts, and unexpected pregnancies that make us undergo abortions or bear a child with a man who is not your real husband. But all of this happens as we are breadwinners and there is no hope of success if we disagree with the circumstance..." (Interview, March 2023) Moreover, women fish vendors revealed additional threats related to the lack of legal status that denied them business licenses, documents, or identification cards for carrying out their business in the urban setting. This lack of legal status subjected women fish vendors to insecure business space from which to conduct their business, as well as an inability to access services such as credit through microfinance institutions or other support services. According to some earlier studies, informal women fish vendors frequently operate their businesses without a license, which makes them vulnerable to threats when they approach the appropriate authorities and, more often than not, places them in awkward situations (Kantor and Kruijssen, 2014).
In the course of conducting their business, female fish vendors were also subjected to various types of crime, such as robbery, burglary, theft, pickpocketing, kidnapping, and abduction. Theft and pickpocketing happened at the marketplaces as well as on the way to and from the business. In most cases, female fish vendors must leave very early in the morning and return very late in the evening. Criminals take advantage of these hours due to the darkness and because there are fewer people on the road. One of the vendors stated: "Most vendors reach the Ferry market as early as 4:00 am, which makes it easier for them to get an affordable price. In most cases the price is determined by fish supply in the market; therefore, those arriving later will find retailers selling fish at a higher price. When we buy fish at a higher price, the profit margin out of it is very little." (Interview, March 2023) Another respondent adds: "We are in security danger, especially early in the morning when we rush to the Ferry market and during midnight when we close our businesses, as many petty thieves and robbers assume we have enough money. They tend to invade and injure us to take away the money we have. Some time back, we remember that our fellows were invaded early in the morning before reaching the main road and every penny they had was taken by robbers..." (Interview, March 2023) Last, but not least, women vendors experienced massive evictions and reallocations, mainly because they lacked important documents, especially licenses and other business permits. Women are forced to vacate business areas where the flow of customers is assured and relocate to places they do not know.
Previous studies in Dar es Salaam, Tanzania have also evidenced that vendors experience evictions and relocations that lead to the decline of their business and, as a result, slow down personal development (Kirumirah & Munishi, 2022; Munishi & Casmir, 2019).

"We don't have vendors' identity cards to conduct our business, especially in the city centre. This threatens our business, and it's like we have no legal right to undertake this activity... and for sure we are discouraged when the city militia (Mgambo) invade us and harass us while risking our products. The government has to consider our petty trade since we are breadwinners of our families. We need financial freedom; we don't want to depend on men for everything, and some of us are widows and single parents... our children need to go to school, they need uniforms, school contributions, and food... we need rent and other expenditures. In short, we need recognition and support from the government." (Interview, March 2023)

B: Coping with Fish Vending Threats and the Capacity to Cope

I: Reactive capacity to cope with fish vending-related threats

Women vendors managed to develop a number of reactive capacities for coping with fish vending-related threats, as discussed hereunder. One of the common reactive strategies developed by the women was joining small self-help groups popularly known as Village Community Banking (VICOBA). According to the vendors, this strategy helped them cope with the threats of inadequate business capital and sexual exploitation that they experienced. They reasoned that adequate business capital would support their business prosperity, thus reducing their dependence on men who had been exploiting them sexually. This is similar to Munishi (2017), which documented reactive coping capacities developed by the Maasai.
One of the women states:

"We have decided to overcome the threat of acquiring capital and depending on men to guarantee us business capital through the establishment of small self-help groups popularly known as VICOBA. These groups are really supportive of our business growth because they provide us with soft loans and some business development ideas. This has increased our self-confidence when interacting with men, as we no longer depend on them too much. You see, inadequate business capital partly pushed us into sexual exploitation and sexual harassment." (Interview, March 2023)

Formation of groups as a strategy for coping with both inadequate business capital and sexual exploitation has been previously noted among urban street vendors in Dar es Salaam and elsewhere in Africa (Munishi, 2019; Aswathy & Kalpana, 2018; Mrindoko, 2022).

Another reactive strategy developed by the women was seeking security support from close relatives when setting out early in the morning, as observed in Munishi (2016) and Munishi (2022). This strategy was developed to cope with crime threats and helped them reach the ferry in time to get fresh fish at a good price. One of the respondents put it this way:

"You see, as women fish vendors we must seek security support from our family members, i.e. husbands, children, and even neighbours. We request them to escort us to commuter bus stations during morning hours to ensure our safety from petty robbers who are constantly threatening us in our daily endeavours." (Interview, March 2023)

II: Proactive strategies to cope with fish vending-related threats

Securing business financing from credible and relevant institutions: Firstly, women vendors proactively planned to secure larger loans from more advanced and relevant financial institutions such as banks and social security organizations. According to the vendors, this long-term plan would eventually free them from the threat of inadequate business capital.
Business diversification and adopting new kinds of business: Another proactive strategy developed by women against fish vending-related threats was business diversification and adopting new kinds of business that would guarantee them more sales and profit. This strategy would help them overcome the threats related to inadequate business capital as well as avoid the sexual exploitation and harassment threats that they experienced. One of the respondents explains:

"To me, I think the only reliable way to overcome business capital inadequacy is retirement from the fish vending business and considering a better-paying and more decent business. In this case I now need to put more effort into raising capital from various sources as well as working hard in my current business. We also need to consult different government authorities, such as the local government, to support us in setting up our future business." (Interview, March 2023)

Engaging in a different kind of business as a proactive strategy for coping with an undesirable and less decent job was also previously captured among the Maasai migrants engaged in security work in Dar es Salaam (Munishi, 2023), as well as the motorcycle taxi riders in Dar es Salaam who were uncomfortable with their former jobs (Munishi & Kirumirah, 2023).

Business registration and licensing: Another proactive strategy developed by women fish vendors was an attempt to undertake business registration and licensing so that they could be recognized and avoid threats related to the lack of legal status of informal trading in urban areas. One of the women puts it this way:

"There is nothing as good as being recognized and respected in your business. We as women fish vendors don't have recognition in our business. We are considered very local, and because of that our customers undervalue our services.
So, we need to register and be recognised by the government. In this case we shall work with the government and other authorities to obtain business identity cards and formal places to run our small businesses." (Interview, March 2023)

These strategies have also been noted by Munishi and Kirumirah (2020) in Dar es Salaam, Tanzania, who observed that vendor licensing and permit issuing are among the critical policy issues in urban settings and have attracted initiatives from different actors, including the President himself (Munishi & Kirumirah, 2020).

Searching for and participating in education and capacity building programmes: Another proactive strategy anticipated by the women was searching for and participating in education and capacity building programmes. Vendors stated that this strategy would help them either improve their current business or engage in other better-paying businesses. One of the respondents puts it this way:

"We shall consult various training institutions and local government authorities to organise relevant training on how to expand our business and investment. Through this, I am sure we shall be included in various business forums organized by the government, where we shall obtain relevant business knowledge and skills to improve our business even more." (Interview, March 2023)

Another respondent adds on the importance of business education:

"Sometimes thinking about the coming old days stresses us. As we don't have enough education, we don't have formal employment, and we lack enough capital to run a big business that can support us to earn a handsome profit for investing in social schemes such as NSSF. We are worried, as we have seen very old fish vendors still dealing with the business while they are physically and emotionally tired, but they don't have an alternative way to run their lives. It's now the right time to rethink investing for our future..." (Interview, 2023)
Engaging in business advocacy: Another proactive strategy developed by the women fish vendors was advocating for a conducive business environment and institutions, notably policies and regulations that would favour their business. This would go hand in hand with ensuring recognition of their small informal businesses by the various relevant government authorities, including the local government. Recognizing and supporting the efforts women make in small business is crucial, since their growth means a lot to the government in terms of employment creation, tax payment, and strengthened family welfare; it is therefore important to initiate strategies to register and boost women's investments towards realizing sustainable livelihoods (Grantham et al., 2021). Coping with inadequate business capital through business advocacy was earlier captured among the Maasai security guards and street vendors in Dar es Salaam and Morogoro, as well as the youth engaged in the motorcycle taxi riding business in Dar es Salaam; such advocacy would ensure a supportive business environment and institutions, notably policies and regulations favouring their business (Munishi, 2022; Munishi & Casmir, 2019; Munishi & Kirumirah, 2022). Efforts are needed by the government and other stakeholders to ensure that future investment through social security systems is well planned and seriously undertaken.

--- C: Factors constraining the vendors' capacity to cope with the threats

One of the constraints on coping capacity was a lack of adequate government support. Women lamented that they did not receive adequate financial support from the government. They acknowledged the local government's business grant arrangements; however, they said that the amounts were too little, and that the grants were marred by a number of bureaucratic procedures when they were eventually forthcoming.
These findings are supported by previous research maintaining that the majority of women street vendors have not had adequate financial support from the local government and are still eager to receive such support through local government grants, which could boost them economically and reduce stress (Mrindoko, 2022). Moreover, it was noted that the local government was either unaware of, or not intervening in, issues related to crime against women street vendors. Local authorities in streets and wards need to ensure sufficient security for their residents, which will enhance economic growth (Jegadeswari & Kumari, 2019). Above all, a lack of knowledge about financial institutions was also noted as a constraint. It was observed that women fish vendors had inadequate knowledge of financial institutions, which limited their access to the soft loans that would help increase the size of their capital (Munishi & Casmir, 2019).

--- Conclusions

The study aimed to identify and propose strategies for overcoming threats faced by urban-based women involved in the informal fish vending business. The findings revealed various threats experienced by vendors, including stigma, fish scarcity, higher fish prices, inadequate business capital, gender-related violence, lack of legal status, various forms of crime, and evictions/relocations. The study highlighted both reactive and proactive strategies employed by vendors to cope with these threats. Reactive strategies included joining self-help groups and seeking support from family, while proactive strategies involved securing financing, diversifying businesses, registering and licensing, participating in education and capacity building programmes, and engaging in business advocacy. The study also identified factors that hindered the vendors' capacity to cope, such as lack of government support and inadequate business skills.
In conclusion, it is recommended that supporting women fish vendors to cope with the threats they face in their fish vending activities should involve addressing the identified constraints. Adequate government support and the provision of business skills are crucial for enhancing the vendors' coping abilities. By addressing these factors, policymakers and support organizations can contribute to empowering women fish vendors and enabling them to deal more effectively with the challenges they encounter in their businesses.

--- Recommendations

Based on the foregoing discussion and conclusion, the following recommendations are made to help women vendors increase their resilience against the various fish vending threats they experience:

i. The government should ensure the legal status of street vendors by issuing identity cards to all informal traders in urban settings, including female fish vendors.

ii. To alleviate the threat related to lack of knowledge about financial institutions among the vendors, there should be awareness campaigns on the importance of financial institutions, and vendors should be properly linked to them. This would facilitate soft loans from banks and other financial institutions.

iii. The government should also enact proper policies that support street vending and facilitate the provision of support to women fish vendors, notably by reducing the bureaucratic procedures that limit women vendors' access to loans.

iv. The government should provide adequate entrepreneurship education to vendors through capacity building and training sessions. This would support the proper management of their businesses.

v. On stigmatization, there should be awareness campaigns in the community to eradicate cultures that disregard women in the fishing business, and to appreciate their contribution at the family level.
--- Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions. --- Conflicts of Interest: The author declares no conflict of interest. | The aim of this study is to examine threats related to informal fish vending business among urbanbased women and to propose strategies for overcoming the threats. Specifically, the study ascertains the threats associated with the fishing business and strategies for coping and the capacity to cope. Based on the multi-layered social resilience framework, this study used a qualitative design and a sample size of 30 participants obtained both randomly and purposively. The findings revealed that threats experienced by the vendors include stigma, fish scarcity, higher fish prices, inadequate business capital, various forms of gender-related violence such as sexual harassment, lack of legal status, and various forms of crime including robbery, theft, and pickpocketing. Findings further indicated that vendors managed to develop both reactive and proactive strategies for coping with the threats. Reactive strategies include joining small self-helping groups popularly known as Village Community Banking (VICOBA) and soliciting family and relative support against insecurity threats. Proactive strategies include securing business financing from credible institutions, business diversification and adopting new kinds of business, business registration and licensing, searching for and participating in education and capacity-building programs, as well as business advocacy. It is recommended that government support, provision of education, and access to credit, should be considered in supporting women fish vendors to cope with the threats in their fishing business activities. |
Introduction

Populations exposed to mass conflict and persecution commonly experience extensive losses [1,2], experiences that are likely to provoke feelings of injustice and anger associated with symptoms of grief [3]. Yet there is a dearth of research investigating a possible nexus between grief and anger amongst populations living in post-conflict environments. We attempt to identify a subpopulation experiencing combined symptoms of grief and anger amongst survivors of prolonged persecution and conflict in Timor-Leste, and to test whether that putative pattern is associated with particularly high levels of traumatic loss, persisting preoccupations with injustice, and ongoing family conflict. Anger as an unwanted and commonly dysfunctional emotional reaction has been associated with feelings of injustice amongst populations that have been exploited and persecuted. Having one's human rights violated or economic goals systematically undermined can understandably lead to normal anger reactions; however, anger can also be associated with a loss of control, aggression and harm to others, including community members, intimate partners and children [4][5][6][7][8][9][10]. Anger has also long been regarded as a core component of the normal grieving process [11]. Moreover, clinical observations have suggested that a failure to resolve anger associated with a bereavement may contribute to the persistence of the grief reaction [12,13], presumably because of strong feelings of grievance and injustice associated with the loss. In that regard, it is notable that studies examining the factorial structure of the persisting grief reaction have consistently identified anger and bitterness as core components [14,15]. For example, a confirmatory factor analysis (CFA) conducted amongst bereaved adults in the USA identified anger/bitterness as one of six symptom domains of the construct of prolonged grief [14].
In keeping with this and other research, the constellation of anger-bitterness has been included in the categories of complex bereavement disorder (CBD) [16], defined as a diagnosis requiring further empirical evidence in DSM-5, as well as in the proposed ICD-11 definition of prolonged grief disorder (PGD) [17]. Nevertheless, controversy continues about the nosological status of these categories, particularly insofar as they distinguish pathological from normative forms of grief [18,19]. Studies amongst post-conflict populations exposed to repeated traumatic losses may shed further light on the role of anger in the grief response. Our past research in Timor-Leste identified what appeared to be a high rate of explosive anger in response to trauma exposure. Explosive anger can express itself as physiological arousal and either verbal or physical aggression, the response characteristically being out of proportion to environmental triggers and experienced as uncontrollable, the subject reacting without immediate thought to the consequences [16]. Although in the aftermath of attacks, the person may feel a degree of relief or vindication, feelings of exhaustion, remorse and/or embarrassment are also common [20]. A population study in a rural and an urban village of Timor-Leste undertaken in 2004 recorded a prevalence of explosive anger of 38%, based on the international threshold of at least one attack of explosive anger a month (noting that the majority of these persons experienced much more frequent episodes) [9]. In a six-year follow-up study, the prevalence of explosive anger remained high (36%), suggesting that, at a population level, the reaction had a strong tendency to persist over a prolonged period of time [21]. 
Applying the stringent DSM-IV definition of intermittent explosive disorder (IED), which mandates the occurrence of acts of aggression in conjunction with anger, the prevalence of explosive anger was 8%, a high rate compared to other countries where the diagnosis has been studied at a population level [22][23][24][25]. A consistent finding of our studies in Timor-Leste is that women reported higher rates of explosive anger and IED than men, the converse of the usual gender pattern recorded in other countries [20,22,25]. Although a mixed methods study indicated that a range of experiences (exposure to conflict-related trauma and violent death of others, ongoing adversity, exposure to intimate partner violence) were associated with IED amongst women [20], these factors applied to other morbid mental health outcomes including post-traumatic stress disorder (PTSD) and depression, suggesting that the risk factors identified to date are not specific to anger [26,27]. Doubts remain, therefore, about the origins and nature of explosive anger and its high prevalence in Timor-Leste, and why it is particularly common amongst women. In our endeavour to understand this phenomenon, we draw on the Adaptation and Development After Persecution and Trauma (ADAPT) model [28,29], which highlights the core roles of interpersonal bond disruptions and experiences of injustice, amongst other domains, as major psychosocial challenges confronted by populations exposed to conflict. Although the model suggests that grief and anger represent the quintessential responses to disruptions in bonds and acts of injustice, respectively, these two experiences are likely to overlap given the inter-related nature and meaning of the traumatic events of conflict [29]. Specifically, traumatic losses are likely to occur in settings of gross injustices, thereby provoking simultaneous reactions of anger and grief.
Other forms of adversity, for example conditions of material deprivation during and in the aftermath of conflict, may compound and prolong anger and grief. Symptoms of grief and anger in survivors of trauma may lead to ongoing conflict within families, representing one of the more severe longer-term psychosocial consequences of earlier exposure to mass violence [30]. The history of persecution and conflict in Timor-Leste provided a setting to investigate possible associations between grief and anger amongst a population exposed to extensive traumatic losses. The invasion and occupation of the territory by Indonesia in 1975 provoked a low-grade resistance war waged by members of the indigenous independence movement. During the period of conflict, which culminated in a humanitarian emergency in 1999, an estimated quarter of the indigenous population (of 600,000 persons at the time) died as a consequence of atrocities, warfare, the burning of villages, murder, famine and untreated illness. In addition, there was widespread loss of property and livelihoods, and forced displacement of whole communities, with kinship and family groups being dispersed, some as refugees to other countries. In the post-conflict phase, further episodes of violence occurred, particularly in 2006-7, when a period of sustained internal conflict led to extensive injuries, deaths and the displacement of communities into makeshift refugee camps. Socio-economic development in the newly independent country has been slow, with many families confronting extreme levels of poverty and deprivation. Our aim was to test whether it is possible to identify a combined pattern of explosive anger and grief symptoms (grief-anger) amongst the Timorese population. We hypothesized that a subpopulation with grief-anger would report high levels of traumatic losses, preoccupations with injustice, and ongoing adversity including family conflict in the post-conflict environment.
We also examined whether women were more likely than men to experience the putative grief-anger constellation.

--- Materials and methods

--- Participants

Between June 2010 and July 2011, we conducted a survey of all adults, 18 years and older, living in every household in two administrative villages (sucos), one in Dili, the capital, the other a rural site an hour's drive away. Each suco is defined by contiguous hamlets (aldeias) falling under the administration of one chief (chefe). GPS and aerial mapping produced by the government for census purposes allowed us to identify all households in a setting where there is an absence of street names and many dwellings are located in remote wooded and mountainous areas. Both study sites were extensively affected by mass violence during the Indonesian occupation and the subsequent internal conflict (2006-7).

--- Field team and procedure

The team included 18 Timorese field workers with prior survey experience and/or psychology/public health degrees. They received a two-week training course followed by two months of field testing and piloting of survey measures under supervision. Pairs of interviewers were required to achieve a consistent 100 percent level of inter-rater reliability on the core measures. One-hour interviews were conducted in participants' homes, or another location if preferred by respondents, the procedure ensuring maximal privacy and confidentiality. In villages where families live in close proximity to each other, and where overcrowding is a problem, we sought to ensure privacy by taking participants to garden areas or away from the household to somewhere shaded and quiet. We also arranged for children to be entertained by one of our colleagues if they were likely to distract participants. Households were visited up to five times in order to meet potential participants.
--- Ethics statement

The study was approved by the ethics committee of the University of New South Wales, the Ministry of Health of Timor-Leste, and the chiefs of each village. The majority of respondents gave written consent prior to commencement of interviews. Verbal consent was obtained in some cases where respondents were illiterate, with trusted witnesses co-signing the forms. The procedure was endorsed by the community.

--- Measures

Our selection of constructs, and of the appropriate measures to assess them, was based on theoretical considerations and the empirical findings of our past studies examining explosive anger in Timor-Leste. The protocol, including the grief measure, was iteratively field tested amongst communities geographically adjacent and similar in sociodemographic composition to the sites of the definitive survey. In piloting, we applied an iterative process of feedback in which responses and solicited comments by respondents in the field were analysed and considered by a committee comprising Timorese of diverse backgrounds (age, gender, education, position in the community) and expatriate researchers. Measures were reviewed and revised to ensure that the constructs were understood by the community, that items were readily comprehended, both semantically and linguistically, and that response options (such as Likert scales) were appropriately graduated according to the language and culture.

Exposure to conflict-related traumatic events. The 17 conflict-related traumatic events (TEs) listed in the Harvard Trauma Questionnaire (HTQ) [31] were modified to ensure their congruence with the historical context of Timor-Leste. TEs were recorded for two periods: the Indonesian occupation and the subsequent period (including the internal conflict) leading up to the study.
We derived four broad TE domains based on their common nature and characteristics: conflict-related trauma, witnessing murder and atrocities, traumatic losses, and extreme deprivations (Table 1). Each TE item was scored 0-2, the maximum score being assigned if participants endorsed a TE for both time periods. We then generated a summary index for each of the four TE domains based on the addition of endorsed items.

Ongoing adversity. An inventory of daily adversities was developed based on extensive community consultations and refinement of items during piloting [20] (Table 2). All participants rated each adversity item on a five-point scale (1 = not a problem, 2 = a bit of a problem, 3 = moderately serious problem, 4 = a serious problem, 5 = a very serious problem). The adversity items were assigned to thematic domains: 1. poverty (insufficient food, lack of money for school fees and to meet traditional obligations to family, poor shelter, unemployment); 2. conflict with family (spouse, children, and extended family); and 3. conflict with community (with young people, and the wider community). The score for each domain was based on the summary score of its constituent items (0 for lower levels of seriousness, 1 for moderate through to a very serious problem).

Preoccupations with injustice. Respondents were asked to identify and describe the worst human rights violation or other event associated with injustice they had experienced during three defined historical periods: the Indonesian occupation, the period of internal conflict, and contemporary times. Ratings were assigned as 1 for identifying an unjust event; 2 for experiencing preoccupations relating to the event; and 3 for distress related to these preoccupations. The composite index of injustice reflected the addition of scores for each of the three historical time periods (range 0-3).

Symptoms of explosive anger.
Our community measure of explosive anger was developed, tested and modified serially during piloting to ensure its cultural appropriateness and comprehensibility in the local language, Tetum [32]. The screening questions inquired whether participants had ever experienced sudden episodes or attacks of anger and, if so, how frequently these attacks occurred. Participants who endorsed attacks at a frequency of at least once a month were then asked about the associated characteristics of loss of control, destruction of property, verbal aggression, and physical aggression towards others. We then applied an algorithm to derive a diagnosis of intermittent explosive disorder (IED) according to DSM-IV [32]. In a convergence study, we compared our community index of IED with a blinded diagnosis made on the Structured Clinical Interview for DSM-IV (SCID) assigned by experienced psychologists [32]. There was a high level of concordance between the two measures: Area Under the Curve 0.90 (95% CI: 0.83-0.98). In the latent class analysis (described hereunder), we included the five core items of explosive anger as defined by IED, each scored categorically (1 = present; 0 = absent): explosive anger attacks; loss of control of anger; destruction of property during attacks; verbal aggression during attacks; physical aggression towards others during attacks.

Grief symptoms. We inquired of all participants whether they had experienced a loss, defined as an event (since 1975) in which someone (e.g. a family member, relative, or friend) close to the individual had died or been killed. Those who identified multiple losses were asked to identify the death that had the most impact on their lives, and to record the cause and time of the death. Almost all of these identified losses were related to traumatic deaths or untreated illness occurring during periods of mass conflict.
Based on the identified loss event, participants were then asked to rate each of four grief items on a five-point frequency scale (0 = almost never, 1 = rarely, 2 = sometimes, 3 = often, 4 = always) as experienced in the past four weeks. The initial item pool was derived from the literature and contemporary criteria for assessing prolonged grief [17], the process of piloting reducing the number of symptoms to those that were widely recognised and regarded as core experiences by the Timorese people. The three derived symptom items were: persistent yearning/longing for the deceased, feelings of intense bitterness, and feelings of emptiness in relation to the death. The fourth item assessed the level of functional impairment associated with the endorsed symptoms. For the latent class analysis, we assigned a score of 0 for symptoms rated not at all, rarely or sometimes, and 1 for symptoms rated often (3 on the scale) or always (4 on the scale).

Post-traumatic stress symptoms and psychological distress. Post-traumatic stress disorder (PTSD) symptoms and general symptoms of psychological distress (comprising depression, anxiety, and somatic complaints) were assessed using the Harvard Trauma Questionnaire (HTQ) and the Kessler-10 respectively, widely used measures applied in our previous studies in Timor-Leste [26]. In our aforementioned convergence study using the SCID, a satisfactory level of concordance was achieved for PTSD (AUC 0.82, 95% CI: 0.71-0.94) and severe distress (compared to major depressive disorder) (AUC 0.79, 95% CI: 0.67-0.91). A score of ≥2.2 for PTSD and ≥30 for severe psychological distress (matching the international cut-off) produced the best balance between specificity and sensitivity for each index. Cronbach's alpha for the PTSD scale was 0.95 and for the K10, 0.90. All measures were translated into Tetum, the most widely spoken language in Timor-Leste.
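The recodings used to prepare items for the latent class analysis (grief items dichotomized at "often", trauma items scored per period endorsed) can be sketched as follows; the helper names are illustrative, not the study's actual codebook:

```python
# Illustrative recoding helpers; names are hypothetical, not the study's codebook.

def binarize_grief(frequency: int) -> int:
    """Grief item on a 0-4 frequency scale: 1 if rated often (3) or always (4)."""
    return 1 if frequency >= 3 else 0

def score_trauma_item(occupation_period: bool, later_period: bool) -> int:
    """Each trauma event item is scored 0-2: one point per period endorsed."""
    return int(occupation_period) + int(later_period)

# A respondent rating "yearning" as often, and endorsing a loss in both periods:
print(binarize_grief(3), score_trauma_item(True, True))  # 1 2
```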
Minor inconsistencies were addressed during piloting and the final versions were back-translated into English [33]. --- Statistical analysis We calculated intra-class correlations to assess for possible clustering within households of indices of grief, psychological distress, PTSD and explosive anger. All correlations were low (<0.05), indicating negligible clustering by household. We used latent class analysis (LCA) to identify clusters of participants according to their pattern of symptoms of explosive anger and grief (each item scored in a binary manner as present or absent). We tested sequential models (one class, two classes, three classes, seriatim), examining a suite of conventional model fit indicators to assess for the best class solution: the Bayesian Information Criterion (BIC), the sample size-adjusted Bayesian Information Criterion (SS-BIC), and the Akaike Information Criterion (AIC) [34,35]. Lower values of these indicators indicate a better fit in comparing successive latent class models. In addition, we applied the Vuong-Lo-Mendell-Rubin (VLMR) and the Lo-Mendell-Rubin (LMR) adjusted likelihood ratio tests, both of which compare the fit of a latent class model of n classes to one with n+1 classes [36]. In judging the best-fitting model, we took into consideration the principle of parsimony, the degree of class separation, the homogeneity of posterior probabilities within classes, and the interpretability of the classes yielded [35]. We drew on conventional criteria [37] in which conditional probabilities of 0.60 or above indicate a high probability of endorsing a particular symptom; values between 0.15 and 0.59, a moderate probability; and values of 0.15 or less, a low probability. After selecting the best-fitting model, we examined associations between class membership (with the low symptom class as the reference category) and a range of relevant predictors using multinomial logistic regression analysis.
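The banding of conditional item probabilities used to interpret the classes can be expressed directly. The band edges are those cited from [37]; the treatment of the boundary value 0.15 follows the wording "a value of 0.15 or less, a low probability".

```python
# Band a conditional item probability into the interpretive ranges of [37].
def probability_band(p):
    if p >= 0.60:
        return "high"
    if p > 0.15:          # 0.15 < p < 0.60
        return "moderate"
    return "low"          # p <= 0.15, per "0.15 or less"

print([probability_band(p) for p in [0.72, 0.40, 0.10]])
# ['high', 'moderate', 'low']
```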
The covariates included: sociodemographic characteristics of gender, residency in urban or rural areas, educational attainment, and employment; traumatic domains comprising conflict-related trauma, witnessing murders and atrocities, traumatic losses, and extreme deprivations; current adversities including indices of poverty, family conflict, and communal conflict; and preoccupations with injustice (during the Indonesian occupation, the internal conflict, and in contemporary times). Analyses were performed in STATA version 13 and Mplus version 7. --- Results --- Sociodemographic characteristics Of the 3368 respondents approached, 2964 (men, n = 1451, 49%; women, n = 1513, 51%) completed interviews, a response rate of 83.6% (inability to contact residents was by far the major reason for non-participation). Table 1 indicates the socio-demographic characteristics of the sample. The mean age was 36.4 years (SD = 14.4), and the majority (n = 1844, 62%) resided in the rural area. Two-thirds (n = 2013, 67.9%) were married, the remainder being single/never married (n = 756, 25.5%), widowed (n = 171, 5.8%), divorced (n = 5, 0.2%) or separated (n = 19, 0.6%). In relation to education, 11.6% (n = 343) had completed primary school, 12.3% (n = 364) junior high school, and 26.3% (n = 779) senior high school, whereas 10.7% (n = 317) had received post-school education (college/university). Nearly half (n = 1278, 43.1%) engaged in subsistence farming or domestic duties, or were retired; 34.8% (n = 1032) were in paid employment (in a range of enterprises including the government and private sectors); and the remainder were students or unemployed (n = 654, 22.1%). --- Prevalence of explosive anger, prolonged grief, PTSD, and severe distress Two hundred and fifty persons (8.4%) met criteria for explosive anger according to IED criteria.
A quarter (n = 779, 26.3%) endorsed one or more symptoms of explosive anger, including sudden anger attacks (n = 1074, 36.2%), loss of control (n = 662, 22.3%), verbal aggression (n = 637, 21.5%), destruction of property (n = 527, 17.8%), and physical aggression (n = 423, 14.3%). Over half (n = 1544, 52.1%) endorsed one or more symptoms of prolonged grief, including persistent yearnings or longings for the deceased (n = 2178, 73.5%), feelings of bitterness about the death (n = 1293, 43.6%), and feelings of emptiness (n = 1152, 38.9%). A third (n = 957, 32.3%) reported functional impairment associated with these symptoms. A similar number of participants met the thresholds for PTSD (≥2.2; n = 453, 15.3%) and severe psychological distress (≥30; n = 447, 15.1%). --- Exposure to conflict-related traumatic events and ongoing adversity Over half of participants (56.1%) reported experiencing one or more conflict-related traumas including political imprisonment, combat, physical assault, torture, and trauma related to involvement in the resistance movement (Table 2). Four out of five persons reported witnessing murders and atrocities, and two-fifths reported traumatic losses, including forced separations and disappearances. Ninety percent experienced extreme deprivations related to access to urgent health care (for themselves or family), food, water and shelter. --- Ongoing adversity Table 3 shows the frequency of adversity items. In order, the poverty-related items endorsed were: shortage of electricity (n = 1983, 66.9%), no access to clean water (n = 1872, 63.2%), insufficient food (n = 1617, 54.6%) and money (n = 1586, 53.5%), problems accessing transport (n = 1489, 50.2%), environmental problems (n = 1527, 51.5%), lack of shelter (n = 1372, 46.3%), and being unable to meet traditional family obligations (n = 1333, 45%). Other adversities were conflict with spouse (n = 446, 15.1%) and extended family members (n = 397, 13.4%); youth conflict (n = 574, 19.4%); and safety issues in the community (n = 579, 19.5%).
--- Preoccupations with past and present experiences of injustice Distressing preoccupations with events associated with injustice were reported by 13.1% (n = 388) for the Indonesian occupation (1975-1999), 24.6% (n = 729) for the period surrounding the internal conflict (2002), and 18.5% (n = 549) for contemporary times (Table 1). --- Latent class analysis Serial model testing concluded after assessing a four-class LCA model (Table 4). Fit indicators improved up to the three-class model, the gains then being only marginal when progressing to a four-class model. Importantly, the VLMR and LMR adjusted likelihood ratio tests showed no statistically significant improvement in progressing from a three- to a four-class model. Given these findings and the ready interpretability of the classes, we adopted the three-class model. Table 5 shows the item probabilities for each class based on symptoms of grief and explosive anger. In the grief class (class 1, comprising 25% of the sample), item probabilities for preoccupations and bitterness were in the high probability range, and feelings of emptiness and functional impairment were in the moderate range. In contrast, all explosive anger items in this class fell into the low-moderate or low probability range. In the combined grief-anger class (class 2), comprising 24% of the sample, grief symptoms fell into the high (preoccupations) or moderate (bitterness, emptiness, functional impairment) ranges. In contrast to class 1, explosive anger symptoms fell into the high (episodes of explosive anger, verbal aggression) or high-moderate (loss of control, destruction of property, physical aggression) probability ranges. In the low symptom class (class 3), comprising 51% of the sample, there were low probabilities for the majority of symptoms of grief and explosive anger, with only two exceptions: preoccupations/yearning was in the moderate range and the generic item for explosive episodes was in the low-moderate range.
--- Comorbidity In comparison to the low symptom class, both the grief and grief-anger classes were associated with PTSD (grief class: OR = 1.68, CI = 1.26-2.25; grief-anger class: OR = 1.99, CI = 1.49-2.67) and severe psychological distress (grief class: OR = 1.61, CI = 1.20-2.16; grief-anger class: OR = 2.42, CI = 1.82-3.21). --- Associations with past trauma, ongoing adversity and preoccupations with injustice Table 6 presents the findings of the multinomial logistic regression analysis testing for associations between the designated covariates (trauma, ongoing adversity, preoccupations with injustice) and the LCA classes. In comparison to the low symptom reference class, women and urban dwellers were more likely to be assigned to both the grief and grief-anger classes. The two trauma exposure (TE) domains of witnessing murders and atrocities, and traumatic losses, were both associated with the grief and grief-anger classes (relative to the reference class). In addition, however, the grief-anger class reported greater exposure to traumatic losses than the grief class. Also, the grief-anger class alone reported greater exposure to extreme conflict-related deprivations in comparison to the reference class. In relation to ongoing adversities, both the grief and grief-anger classes exceeded the reference class on the index of poverty; the grief-anger class in turn reported higher rates of poverty than the grief class. Only the grief-anger class reported greater levels of family conflict, in comparison with both the reference low symptom class and the grief class. Compared to the low symptom class, both the grief and grief-anger classes reported greater preoccupations with injustice for the two historical periods of conflict (the Indonesian occupation and the later internal conflict). Only the grief-anger class, however, reported a higher level of preoccupation with injustice for contemporary times compared to the reference class.
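For readers wanting to check the arithmetic behind odds ratios of the kind reported in this section, a minimal sketch follows. The counts in the usage line are hypothetical, and the ORs reported in the paper come from multinomial models rather than raw 2x2 tables; this sketch shows only the unadjusted case.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR with a 95% Wald CI from a 2x2 table:
    a/b = outcome present/absent in group 1; c/d = same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: e.g. PTSD-positive/negative in two classes.
print(odds_ratio_ci(60, 240, 30, 270))
```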
--- Discussion Our analysis in post-conflict Timor-Leste identified a typology comprising three subpopulations: those experiencing grief, grief-anger, and low symptoms, the first two categories each affecting a quarter of adults in the sample. Women and urban dwellers were more likely to be assigned to both the grief and grief-anger classes. Compared to the low symptom reference class, both the grief and grief-anger classes reported greater exposure to conflict-related murders/atrocities and traumatic losses, more extreme levels of poverty, and distressing preoccupations with injustice related to two successive historical periods of conflict. There were important distinctions between the two symptomatic classes, however, in that the grief-anger class reported greater exposure to traumatic losses (compared to the grief class), greater deprivations during the period of conflict (compared to the reference low symptom class), higher stress levels related to poverty (compared to the grief class), ongoing family conflict (compared to both the reference and grief classes), and preoccupations with injustice in contemporary times (compared to the reference and grief classes). Prior to discussing our findings, we consider the strengths and limitations of the study. The sample is one of the largest in the contemporary post-conflict mental health field and we achieved a high response rate. Although sampling was restricted to two localities, the sites were identified initially as being broadly representative of the socio-demographic profile of Timor-Leste as a whole [38]. Nevertheless, replication of the study in other areas of Timor-Leste and in post-conflict countries worldwide will be needed to test the generalizability of our findings. Caution needs to be exercised in inferring causal relationships from cross-sectional data of this kind.
Longitudinal studies may assist in delineating the chronological sequencing of the relevant symptom constellations, in particular, whether anger precedes and thereby acts to prolong symptoms of grief. Recall of traumatic events can be subject to amnestic bias, although there was a notable consistency between the pattern of traumas documented and the known history of Timor-Leste. A systematic approach was followed in the transcultural adaptation, translation and testing of measures. Although the majority of losses identified as triggers of grief symptoms occurred several years earlier, our measure did not record the course of grief symptoms (whether fluctuating or chronic), so judgement is reserved as to whether the reaction was prolonged or not. Caveats notwithstanding, our findings cast new light on the high prevalence of explosive anger previously identified in community samples in Timor-Leste [5,9,20], a phenomenon that has yet to be fully explained [20]. Although previous studies had shown associations between explosive anger and common stressors such as conflict, poverty and injustice, these factors were common to other patterns of mental distress, including symptoms of PTSD and severe psychological distress. Yet there were reasons to suspect that explosive anger had distinctive (albeit unidentified) antecedents, given that the reaction appeared to be relatively independent as a construct from those of PTSD and severe psychological distress. In that regard, the identification of a subpopulation comprising a quarter of the sample that manifested the constellation of grief-anger offers a potential explanation for the high prevalence of anger identified in this society. Notably, although a grief class (with low anger symptoms) of equal size emerged, there was no independent explosive anger class, further accentuating the close association of anger with grief.
The grief-anger class reported the greatest exposure to traumatic losses, an important finding given that murder, atrocities and death from untreated illness and famine were widespread during the prolonged period of conflict in Timor-Leste. It may be that in collectivist societies such as Timor-Leste, losses that provoke strong and enduring feelings of injustice are particularly potent in generating the identified combined pattern of grief-anger. To confirm this, replication of our findings will be needed in other conflict-affected settings where traditional family and community values prevail. Importantly, our regression analysis involving relevant covariates added credibility to the distinction we found between the grief-anger and grief classes. Specifically, the grief-anger class stood out in reporting high levels of traumatic loss, extreme deprivations during the period of conflict, severe ongoing poverty and family conflict, and preoccupations with injustice extending over three contiguous historical periods. In relation to the latter finding, we have reported a similar association between the anger component of persistent complex bereavement (PCB) disorder and a sense of injustice amongst refugees from West Papua, a neighbouring territory that has experienced a comparable level of prolonged mass conflict under Indonesian occupation [3]. The finding that half of the population experienced relatively low levels of grief and anger symptoms offers some insights into the factors that protect post-conflict populations from these adverse psychological outcomes. It is notable that the low symptom group reported a similar level of exposure to the general traumas of conflict, indicating that they had not been sheltered from these events.
It was only in the TE domains of witnessing murder/atrocities and traumatic losses that the low symptom group reported lower exposure, suggesting that protection from these salient forms of trauma may act to avert the risk of developing the specific grief-anger constellation. Being male, living in a rural environment, experiencing lower levels of poverty, and not experiencing family conflict were other factors that appeared protective, noting, however, that the cause-effect relationships involved remain to be confirmed given the cross-sectional nature of the study. Our findings have potential implications for the individual, the family and the society as a whole, not only in Timor-Leste but in other post-conflict settings worldwide. In particular, confirmation of a grief-anger class and the social factors associated with the pattern has the potential to add support to a cycles-of-violence model, which postulates that exposure to the traumas of past conflict (in this instance, specifically traumatic losses and deprivations) may contribute to the risk of subsequent family conflict in the aftermath of the violence [30]. We note, however, that explosive anger associated with grief may be both a cause and a consequence of family conflict, resulting in a complex reciprocal and interacting effect that generates a vicious cycle of instability in the household. Our past qualitative data indicated that Timorese women with IED frequently recognized that their explosive anger led to harsh parenting behaviours which in some instances had an adverse effect on the health and well-being of their children [20]. It is possible, therefore, that the grief-anger pattern we have identified contributes to the transgenerational transmission of trauma in a manner that impacts adversely on the psychosocial development of the next generation.
In relation to ongoing adversities, there appeared to be a stepwise relationship between the severity of poverty and the grief-anger, grief and low symptom classes respectively. These observations underscore the interaction between trauma-related mental health problems and socioeconomic factors in post-conflict societies. Poverty places stress on individuals, families and communities, compounding past interpersonal and material losses in generating a sense of injustice and anger. In that sense, apart from the immediate hardship incurred by poverty, conditions of extreme material deprivation jeopardise recovery from trauma-related mental health conditions, which in turn can impair functioning and reduce the capacity of survivors to engage in gainful employment or other opportunities to improve their economic well-being [39]. --- Conclusions Our study identified a grief-anger constellation comprising a quarter of the study sample in post-conflict Timor-Leste. There were commonalities with the grief group in reporting greater exposure to witnessing murder, traumatic losses and poverty, and experiencing persisting preoccupations with injustice related to two consecutive historical periods of conflict. The grief-anger group was unique, however, in reporting extreme levels of traumatic losses, exposure to material deprivations during the period of conflict, preoccupations with injustice in contemporary times, and ongoing family conflict. It is a cruel irony that the traumatic rupture of interpersonal bonds during periods of mass conflict can generate a psychological reaction pattern (grief-anger) in survivors which in turn may undermine the survivor's capacity to achieve a stable family environment in the post-conflict period. --- All relevant data are within the paper and its Supporting Information files. --- Supporting information S1 Dataset. This is the S1 Dataset. (DTA) --- Author Contributions Conceived and designed the experiments: SR DS.
--- Performed the experiments: ES ZDC. Analyzed the data: AKT. --- Contributed reagents/materials/analysis tools: AKT. Wrote the paper: SR AKT DS.
children with disabilities" (IDEA, 2006; 34 CFR 300.512[a][1]). Thus, in most states, non-attorneys, albeit with special knowledge or training, can support families in school disputes. While the IDEA permits a non-attorney advocate with special knowledge to accompany and advise families during formal dispute processes, the IDEA regulations also indicate that whether a parent may be represented by a non-attorney in these processes is a matter of state law (34 CFR 300.512[a][1]). In this way, the regulations to IDEA draw a distinction between non-attorney advocacy and legal representation. While non-attorneys do not have licenses to practice law, they should have training in special education law and advocacy. Currently, there is no professional license or certification for special education advocates, even though non-attorney advocates may charge for their services (Council of Parent Attorneys and Advocates, 2012a). Non-attorney advocates can engage in a number of activities that do not constitute the practice of law, including: assisting with record collection, organization, and review; developing position statements and letter writing; providing parents with copies of the law; and making suggestions about a child's educational program. The advocate can also attend meetings as an individual with specialized knowledge of the child (IDEA, 2004, 20 U.S.C. 1414[d][1][B]), and may be a more economical and less adversarial alternative to legal representation. To serve this need for special education advocates, several models of training have recently emerged. Although training models share the common goal of teaching advocates necessary skills, models vary widely in duration, content, and training activities (Burke, 2013).
For example, one such training, the Special Education Advocacy Training (SEAT; Wheeler & Marshall, 2008), merged practices from three different professional communities (special education attorneys, consumer advocates, and paralegals) to develop competencies and a code of ethics for non-attorney advocates. Supported by the Office of Special Education Programs (OSEP), this 230-hour training was primarily used by experienced advocates to gain legitimacy as professionals (Burke, 2013). Another such training, the Volunteer Advocacy Project (VAP), consists of 40 hours of instruction on providing instrumental and affective support concerning the child's education to families of students with disabilities. Although less targeted toward professional special education advocates, the VAP training focuses primarily on special education law (IDEA). Training topics include: evaluation and eligibility; the components of an Individualized Education Program (IEP); free and appropriate public education (FAPE); least restrictive environment (LRE); discipline and functional behavior assessment; assistive technology; and extended school year services. In addition to these topics, sessions on non-adversarial advocacy and dispute resolution are presented by local experts from the state Protection & Advocacy agency (P&A) and The Arc. The VAP training has been shown to be effective in increasing participants' knowledge of special education law and their comfort with non-adversarial advocacy activities such as effectively participating in IEP meetings (Burke, Goldman, Hart, & Hodapp, in press). Although, upon completion, trainees are prepared to advocate for families, it remains unclear how program graduates use these skills over time. As part of the VAP, program graduates are asked to volunteer (for free) as an advocate for at least four families. But the exact nature of such educational advocacy remains unclear due to differing meanings of the term itself.
According to the Council of Parent Attorneys and Advocates (COPAA), which focuses on special education advocacy, an advocate is defined as "...someone who speaks, writes in favor of, supports, advises or urges by argument in support of another person" (COPAA, 2012b). In the parent advocacy literature, however, the term "advocate" has been described differently across a variety of contexts (Wright & Taylor, 2014), approaches (Trainor, 2010), and activities (Balcazar, Keys, Bertram, & Rizzo, 1996). To fulfill the post-graduation requirement of the VAP training, graduates may engage in a broadly defined range of advocacy activities related to the special education needs of the child, with the goal of working towards the provision of FAPE for students with disabilities (COPAA, 2008). However, it remains unknown how these advocacy activities group together; whether graduates of advocacy training programs engage in the full range of advocacy activities that support the education of students with disabilities and the wider disability community; and whether such activities change over time. Further, although research examining volunteering more generally indicates that volunteer activity often continues over a relatively long period (Penner, 2002), the field has only begun to identify the correlates of long-term volunteer advocacy. For example, Balcazar et al. (1996) found that future advocacy was predicted by prior experience and involvement with advocacy organizations, but sustained volunteering may also be predicted by three other variables: (a) motivation/function (Katz, 1960), (b) satisfaction (Penner & Finkelstein, 1998), and (c) role identity (Piliavin, Grube, & Callero, 2002). The first of these variables, motivation/function, concerns why people volunteer.
According to Clary and Snyder (1999), six personal and social motivations are served by volunteering: (a) Values, expressing or acting on important values; (b) Understanding, seeking to learn more; (c) Enhancement, growing and developing psychologically through volunteer activities; (d) Career, gaining career-related experience; (e) Social, strengthening social relationships; and (f) Protective, reducing negative feelings (e.g., guilt) or addressing personal problems. In most studies of volunteers in the wider (i.e., non-disability) literature, volunteers reported that the most important functions served by volunteering were Values, Understanding, and Enhancement (Clary & Snyder, 1999). Although these functions may also exist among volunteer advocates, we do not know whether advocates prioritize or even share the same motivations. The second of these hypothesized correlates concerns satisfaction. Not surprisingly, people who are more satisfied with volunteer activities show longer lengths of service and more continued volunteering (Finkelstein, Penner, & Brannick, 2005). Motivation and satisfaction may also work together to predict sustained volunteering. Thus, if one's motivation to volunteer aligns with the act of volunteering, the degree of satisfaction may be greater because the volunteer has met these motivations/functions. Therefore, volunteers who are satisfied because their motivations are met are more likely to continue volunteering in the long term (Clary & Snyder, 1999). The third predictor involves role identity as a volunteer, or the degree to which one "identifies with and internalizes the role of becoming a volunteer; that is, the extent to which this role and the relationships associated with it become part of a person's self-concept" (Penner, 2002, p. 463).
According to Penner (2002), if one maintains an initial level of volunteering, a volunteer role identity will develop, which may, in turn, relate to both the number of hours of volunteering and the degree to which one intends to remain a volunteer (Grube & Piliavin, 2000). Consistent with the wider volunteer literature, role identity has been proposed as one important dimension in the development of special education advocates (Balcazar et al., 1996). But particularly for volunteer advocates, role identity may be a wider construct that also encompasses involvement in a broader disability community, affording opportunities for group membership and leadership. This study, then, examines the post-training advocacy of graduates of a volunteer special education advocacy training. To determine the correlates of sustained advocacy, we surveyed six cohorts of graduates from a three-year period to answer the following research questions: (1) What do sustained volunteer advocacy activities look like over time? (2) Do existing measures of volunteering apply to volunteer advocates? (3) Are greater amounts of advocacy correlated with role identity, motivation, and satisfaction? and (4) After completing the training, are there differences between program graduates who volunteer as advocates and those who do not? --- Method Participants Respondents included 83 graduates of an advocacy training program from 2009-2012. Participants were primarily White, non-Hispanic females who had at least a college degree (see Table 1 for participant descriptives). Most respondents were family members or parents of individuals with disabilities (59.0%), and many were non-school service providers (27.7%), such as non-profit employees or healthcare providers. On average, program graduates reported being very to extremely satisfied with the training on a five-point scale (M = 4.26, SD = 0.78).
--- Procedure Training-All respondents attended 40 hours of training on special education law and advocacy skills in a Southeastern state. These sessions were provided either as twelve 3-hour weekly trainings or six 6-hour bi-weekly trainings (the extra time was spent on readings and out-of-workshop activities). Participants either attended the sessions in person or at distance sites from which they viewed the web-cast content. The training was offered once every academic term, with survey respondents recruited from six cohorts from fall 2009 through spring 2012. During this time period, the training was run by a single coordinator, with no major changes to content or mode of training delivery. At the conclusion of the training, graduates were asked to volunteer as an advocate for four families of students with disabilities. Graduates were either referred to families that contacted the VAP for advocacy help or connected with families independently of the training program. Survey-In collaboration with past and current training program coordinators, an online survey was created to better understand the advocacy activities and support needs of program graduates. After feedback was obtained from the training advisory board and the survey was pilot tested by three individuals with knowledge of the training, we created an online version of the survey in Research Electronic Data Capture (REDCap; Harris et al., 2009). Once the paper and online versions were approved by the University Institutional Review Board (IRB), REDCap was used to disseminate the survey and store responses in an online, secure database. Participants were recruited from a list of all program graduates who completed the training between 2009 and 2012. As our interest was in volunteer advocates, we excluded any university students who completed the course for credit (N = 9). We then attempted to contact via e-mail the remaining 169 graduates from the six training cohorts.
Six paper surveys were mailed to participants who did not have access to the internet or did not provide an email address in their contact information. After multiple recruitment attempts, we were unable to reach 11 graduates for whom we did not have up-to-date contact information. We thus received responses from 83 of the 158 graduates (52.5%) whom we could contact and invite to complete the survey. Response rates were approximately equal across sites and the number of respondents was also proportional to the size of each cohort over the 3-year period. --- Measures The survey consisted of four sections relating to: (1) demographic information, (2) questions about advocacy activities since completing the training, (3) motivation for volunteering and satisfaction, and (4) information about advocate role identity. Demographic information-Respondents provided information about their gender, race/ethnicity, highest level of education, and present occupation. They also answered questions about their role including if they were: (a) a parent or family member of an individual with a disability, (b) school personnel, and (c) a non-school service provider. Respondents could select as many roles as applied. Advocacy-Respondents were asked if they had advocated for any families since completing the training. If they answered yes, they were considered "post-training volunteer advocates" and were then asked how many families they had advocated for overall since completing the training program and in the last six months. To learn more about the advocacy activities in which they engaged with each family, respondents were then asked a series of follow-up questions about the frequency of specific advocacy activities (overall and in the last 6 months). As no validated scale exists of activities performed by special education advocates, these questions were based upon the training curriculum, along with input from special education advocates.
Specifically, if a trainee had advocated post-graduation, we asked about the number of times they had advocated by: (a) Writing a letter to the school or helping a family write a letter to the school; (b) Communicating directly with the school on behalf of the family; (c) Meeting in person with a family to discuss the special education needs of the child; (d) Talking over the phone with a family to discuss the special education needs of the child; (e) Completing a record review; (f) Helping coordinate or speaking at a special education training; (g) Helping coordinate or developing a forum or parent support group; and (h) Referring a family to another advocate, agency, or attorney. Motivation and satisfaction-To measure the motivation of advocates, the Volunteer Functions Inventory (VFI; Clary et al., 1998) was modified to use language more specific to advocates. The VFI measures the six personal and social functions served by volunteering, which include Values, Understanding, Enhancement, Career, Social, and Protective (Cronbach's alphas for all scales between .80 and .89; Clary et al., 1998). This measure consists of 30 items (five items for each of six scales) rated from 1 (not at all important/accurate) to 7 (extremely important/accurate). In addition, on a scale from 1 (not at all) to 5 (extremely), respondents were asked to rate their satisfaction with the training and their satisfaction with volunteering as an advocate. Role identity-A five-item measure of volunteer role identity (Cronbach's alpha = .81; Callero, 1985) was modified to apply to advocates. These items were rated on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree).
Items included: (1) Advocacy is something I rarely think about (reverse coded), (2) I would feel at a loss if I had to give up advocacy, (3) I really don't have any clear feelings about volunteering as an advocate (reverse coded), (4) For me, being an advocate means more than just advocating for individuals with disabilities, and (5) Volunteering as an advocate is an important part of who I am. Participant identity-In addition to the role identity scale, respondents were asked to rate their identity through a series of questions about their involvement in the advocacy community currently, in the future, and as it has changed since training. These items were developed based on input from special education advocates and the VAP advisory board about opportunities for involvement in the local and state advocacy communities. First, respondents were asked about their current involvement in different types of advocacy-related activities on a scale from 1 (not at all involved) to 5 (extremely involved) including: (a) Involvement in disability advocacy networks such as the Disability Coalition on Education (DCE); (b) Involvement with other disability organizations such as the autism or Down syndrome societies, Special Olympics, or The Arc; (c) Involvement in a disability advocacy social media group such as an advocacy Facebook page; and (d) Being in touch with other advocacy training program graduates. In addition, respondents were asked to rate the likelihood that, one year from now, they would be doing each of the following activities: (a) Advocating through the VAP, (b) Advocating through another organization, and (c) Informally working with families of individuals with disabilities. They also answered the question, "As a result of completing the training, how has your involvement in the disability field changed?" on a scale from 1 (decreased a lot), 3 (no change), to 5 (increased a lot).
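As a concrete illustration of how such a modified Likert scale is typically scored, the sketch below reverse-codes items 1 and 3 of the five-item Role Identity scale, averages each respondent's items, and computes Cronbach's alpha. The rating matrix is invented for illustration and is not the study's data:

```python
import numpy as np

def reverse_code(ratings, reverse_items, scale_max=5):
    """Reverse-code selected Likert items (1 <-> scale_max)."""
    r = np.asarray(ratings, dtype=float).copy()
    r[:, reverse_items] = (scale_max + 1) - r[:, reverse_items]
    return r

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to the five role-identity items
# (items 1 and 3, zero-based indices 0 and 2, are reverse coded).
raw = np.array([
    [1, 5, 1, 5, 5],   # strong advocate identity
    [2, 4, 1, 4, 5],
    [4, 2, 4, 2, 2],   # weak advocate identity
    [1, 4, 2, 5, 4],
    [3, 3, 3, 3, 3],
], dtype=float)
scored = reverse_code(raw, reverse_items=[0, 2])
scale_scores = scored.mean(axis=1)   # each respondent's score on the 1-5 scale
alpha = cronbach_alpha(scored)
```

With real survey exports the same two functions apply unchanged; only the reverse-coded item indices and the scale maximum need to match the instrument in use.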
In order to compare "advocates" to "non-advocates," participants were categorized based on their response to the question, "Have you advocated for any families since completing the VAP training?" Those who answered yes will be referred to throughout the paper as "advocates" or "volunteer advocates"; those who answered no (i.e., did not advocate post-graduation) will be referred to as "non-advocates." Because respondents varied in their amounts of time since graduation, we calculated an average 6-month advocacy rate for advocates by dividing the overall number of families helped by the number of 6-month periods since graduation. This variable was also calculated for each of the eight individual advocacy activities (i.e., the number of times each activity had been performed since graduation divided by the number of 6-month periods). We also made several methodological decisions. As 6-month advocacy rates were not normally distributed, for all correlations involving advocacy rates we used non-parametric statistics (i.e., Spearman's rho). To understand relations among the eight types of advocacy activities, we conducted a principal component analysis (PCA) with varimax rotation; for each factor of existing scales (e.g., VFI; Role Identity), we calculated Cronbach's alphas, then each factor's average item score, which was used for analyses. When comparing those who had (vs. had not) advocated since completing the training, we performed t-tests (with Cohen's d for effect sizes) and chi-square tests. Finally, to control for multiple hypothesis testing, we used a Benjamini-Hochberg correction procedure (BH correction; Benjamini & Hochberg, 1995) for all analyses. --- Results --- Sustained Advocacy Since completing the training, 63.9% (n = 53) of trainees reported having advocated for at least one family, with 36.1% (n = 30) not advocating for anyone. The median of the average 6-month advocacy rate for all program graduates was 0.50 families, with a range from 0 to 200 families.
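The rate calculation and multiple-comparison correction described above can be sketched as follows; the numbers are invented for illustration, and the sketch assumes the BH step-up form of the procedure:

```python
import numpy as np

def six_month_rate(families_total, months_since_graduation):
    """Average number of families helped per 6-month period since graduation."""
    periods = months_since_graduation / 6.0
    return families_total / periods

def benjamini_hochberg(p_values, q=0.05):
    """Boolean rejection mask from the Benjamini-Hochberg step-up procedure."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * (np.arange(1, m + 1) / m)   # i/m * q for sorted p-values
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])           # largest rank meeting its threshold
        reject[order[: k + 1]] = True            # reject that test and all smaller p's
    return reject

# A graduate who helped 10 families over 24 months (four 6-month periods)
# has an average 6-month advocacy rate of 2.5 families.
rate = six_month_rate(10, 24)
rejected = benjamini_hochberg([0.01, 0.02, 0.03, 0.50], q=0.05)
```

In the example, the thresholds for the four sorted p-values are .0125, .025, .0375, and .05, so the first three tests are retained as significant and the fourth is not.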
Frequency-Upon further examination of the 53 participants who advocated for families after completing the training, 18.9% (n = 10) advocated for 1-2 families, 22.6% (n = 12) for 3-4 families, 18.9% (n = 10) for 5-7 families, and the remaining 39.6% (n = 21) for 10 or more families. For these 53 trainees who volunteered after completing the training, advocacy frequency was steady across time. These volunteer advocates reported helping a median of 5.5 families since completing the training, 2 families over the last six months, and an average 6-month advocacy rate of 1 family. All three measures were highly correlated (rs = .819 to .946, all ps < .001); across their time since graduation, volunteer advocates were consistent in the numbers of families that they helped. Activities-Since completing the training, volunteer advocates reported engaging in a median of six different types of advocacy activities out of the eight listed in the survey, with a median of five types performed in the last six months; the total number of types of advocacy activities in the last six months and since graduation were also highly correlated (rs = .717, p < .001). Similarly, for each of the eight advocacy activities individually, the frequency over the last 6 months and the average 6-month rate were highly correlated (see Table 2). All volunteer advocates (100%) reported that, since completing the training, they had talked to a family over the phone to discuss the special education needs of the child, with almost all having met with a family in person to discuss the special education needs of the child and completed a record review (96.2% and 88.0% of advocates, respectively). The least common advocacy activity was coordinating or developing a forum or parent support group, which was performed by 40.0% of all volunteer advocates since completing the training.
To understand relations among different types of advocacy activities, we then performed a principal component analysis using the average 6-month rate for each activity. The eight advocacy activities loaded onto two factors. The first factor, named family-focused, explained 72.09% of the variance with an eigenvalue of 5.77, and consisted of the following five behaviors: Referring a family; Coordinating or developing a forum or parent support group; Coordinating or speaking at a special education training; Talking over the phone with a family to discuss the special education needs of the child; and Meeting in person with a family to discuss the special education needs of the child. The second factor, school-focused, explained an additional 20.14% of the variance (eigenvalue = 1.61) and consisted of the three remaining advocacy activities (Communicating directly with the school on behalf of the family; Writing a letter to the school on behalf of the family or helping a family to do so; Completing a record review). Taken together, the two factors accounted for 92% of the variance. --- Volunteer Scales To determine whether items grouped similarly for volunteer advocates compared to the volunteers on whom measures were originally analyzed, we calculated Cronbach's alphas for sub-scales of the VFI and for the Role Identity scale. Volunteer Functions Inventory-For this sub-sample of volunteers (n = 53), alphas for the six volunteer functions ranged from .79 to .90, with volunteer advocates rating most highly the functions of Values, Understanding, and Social. As Table 3 shows, these rankings are consistent with those from the wider (i.e., non-disability) literature; both groups rated Values and Understanding as the most important functions. For our sample of advocates, 94.3% (50 of 53) identified two or more "important" motivations from the VFI (i.e., mean factor score > 4 on a 7-point scale; see Table 3).
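The eigenvalue and variance-explained figures reported for the PCA come from the eigendecomposition of the activity correlation matrix. The sketch below simulates data with a two-factor structure mirroring the family-focused (five items) and school-focused (three items) split and recovers that structure; the simulated data are not the study's, and the varimax rotation used in the actual analysis is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulate eight activity rates driven by two independent latent factors,
# five items on the first factor and three on the second, plus item noise.
family = rng.normal(size=n)
school = rng.normal(size=n)
noise = rng.normal(scale=0.4, size=(n, 8))
data = np.column_stack([family] * 5 + [school] * 3) + noise

# PCA on the correlation matrix: eigenvalues give the variance explained
# by each component (their sum equals the number of variables).
corr = np.corrcoef(data, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # sorted descending
explained = eigvals / eigvals.sum()           # proportion of variance per component
n_components = int((eigvals > 1.0).sum())     # eigenvalue > 1, one common retention rule
```

With this construction the first two components dominate, as in the reported solution where the retained eigenvalues (5.77 and 1.61) both exceed 1 and jointly account for 92% of the variance.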
In the wider volunteering literature, two-thirds (67%) of volunteers reported two or more "important" motivations out of the six possible functions in the VFI (Clary & Snyder, 1999). In addition, for these volunteer advocates, all three of the highest rated functions (i.e., Values, Understanding, Social) were rated as some degree of "important." Further, all volunteer advocates (100%) rated Values as an important motivation. Role Identity scale-Volunteer advocates had a high Role Identity scale score, with an average of 4.25 out of 5, indicating agreement with role statements. The five items of the Role Identity scale had a Cronbach's alpha of .62. Fourteen respondents (27.5%) strongly agreed (i.e., scores of 5) with all five Role Identity items. Item 1 was rated most highly (reverse scored), with 86.8% (n = 46) strongly disagreeing with the statement, "Advocacy is something I rarely think about." The lowest rated item, "I would feel at a loss if I had to give up advocacy," had a mean score of 3.44, still signaling general agreement (i.e., mean rating > 3 on a 1-5 scale). --- Correlates of Sustained Advocacy Although the amount of post-graduate advocacy was not correlated with the Role Identity scale, VFI, or satisfaction, the average 6-month advocacy rate for volunteer advocates did correlate with several aspects of involvement. Greater amounts of advocacy were positively correlated with the extent to which the individual was involved with other disability organizations (rs = .435, p = .001) and in touch with other program graduates (rs = .319, p = .02). The average 6-month advocacy rate was also significantly correlated with the likelihood of advocating through another organization (rs = .485, p < .001) and informally working with families of individuals with disabilities in a year (rs = .355, p = .01).
In addition, the average 6-month advocacy rate was related to the degree to which advocates' involvement in the disability field changed as a result of completing the training (rs = .317, p = .026). --- Comparison Between Advocates and Non-Advocates Although univariate analyses showed proportionally more parents advocating and fewer school personnel advocating, graduates who did (versus did not) advocate after completing the training did not differ significantly on any demographic or training characteristics (after BH correction; see Table 1), or on the six volunteer functions scales. Those who volunteered post-training were more satisfied with volunteering as an advocate, t(70) = -3.71, p < .001, d = 0.89, with a mean score of 4.33 (SD = 0.79) compared to 3.45 (SD = 1.15) on a scale from 1 to 5. Role identity-Program graduates who advocated post-training had significantly higher volunteer Role Identity scores, t(78) = -3.44, p = .001, d = 0.79. These differences were also mirrored in individual items including, "I really don't have any clear feelings about volunteering as an advocate" (reverse coded), "For me, being an advocate means more than just advocating for individuals with disabilities," and "Volunteering as an advocate is an important part of who I am." Comparing the number of advocates and non-advocates who rated their role identity as highly as possible (i.e., a score of 5 on all individual items), over a quarter (27.5%) of volunteer advocates reported this highest level of role identity, compared to only 3.4% of those who did not advocate post-training, χ2(df = 1, n = 80) = 6.99, p = .008. Significant differences were again noted for those items that more indirectly reflected increasing identities as a volunteer advocate through involvement. Since completing the training, post-training volunteer advocates (vs.
non-advocates) reported greater change in their involvement in the disability field, t(78) = -2.76, p = .007, d = 0.64, and, one year in the future, they were more likely to predict their own sustained long-term advocacy both through the training organization (VAP) and through other organizations (see Table 4). Volunteer advocates also reported being more involved in disability advocacy social media groups, such as advocacy Facebook groups, t(78) = -3.25, p = .002, d = 0.71. As shown by the large effect sizes, the advocate role identities of graduates who volunteered post-training were substantially larger than those of program graduates who did not volunteer as advocates. --- Discussion This study examined the post-training advocacy activities of volunteers to understand the correlates of sustained advocacy. It is important to train volunteer advocates who possess special education knowledge and advocacy skills, and who use these skills to support families of students with disabilities over time. This study has four main findings. First, this volunteer advocacy program seemed effective in producing advocates who demonstrate sustained volunteering in a variety of ways over time. Almost two-thirds of program graduates went on to volunteer as advocates, and (for those who did) numbers of families helped in the last 6 months, in the time since the training, and on average for 6-month periods were all highly correlated. Thus, advocates continued advocating at similar rates over time. The types of broadly-defined advocacy activities were also stable, with a median of six (of eight) types of activities completed since graduation, and types of advocacy activities grouping into those that were family-focused and those that were school-focused. Such maintenance of the nature and frequency of advocacy activities over time is a key methodological issue and a challenge for many interventions (National Research Council, 2001).
In addition, with almost two-thirds of program graduates continuing to maintain near-constant advocacy rates over time, the cumulative number of families helped continues to grow. Although one might strive for higher percentages of graduates advocating post-training, such maintenance argues for the continued offering of special education advocacy training programs such as the VAP (Burke, 2013). Second, the established measures of volunteer motivation and role identity seemed valid for this population of volunteer advocates. High Cronbach's alphas were found for each of the six factors of the VFI, and the motivations rated highest by volunteer advocates, Values and Understanding, were identical to the most important functions rated by (non-disability) volunteers more generally (Clary & Snyder, 1999). Similarly, for this sample, the five Role Identity scale items converged on a single factor, with advocates reporting high role identity. Overall, then, volunteer advocates, although focused on the highly specialized area of special education, were similar to volunteers in general for several major volunteering constructs (Penner, 2002). Third, we identified important correlates of sustained advocacy. The average 6-month advocacy rate significantly correlated with several participant role identity involvement questions, indicating that role identity may be a more complex construct for volunteer advocates than for volunteers more generally. Specifically, those who advocated at higher rates were more involved with disability organizations and with other program graduates. Participants also predicted that, in one year's time, they were more likely to be advocating informally or through another organization. Those who advocated more also indicated that their involvement in the disability field increased more as a result of completing the advocacy training.
Finally, post-graduate volunteer advocates rated their role identity significantly higher than did those who did not advocate. In addition to mean differences between groups, we also noted differences in percentages of extreme scores. Thus, although 3.4% of those who did not advocate rated their role identity as highly as possible (i.e., a "5" on all role identity questions), over a quarter (27.5%) of post-graduate volunteer advocates considered themselves at this highest possible level of role identity. Greater amounts of advocacy were also positively related to involvement in the wider advocacy and disability community. Taken together, these findings highlight the importance of role identity and the potential for advocacy training programs to either change or intensify such identity. Indeed, role identity has recently received attention throughout a variety of fields. Within disability-family studies, for example, parents of children with disabilities often report positive changes in their lives; their new roles as parents of children with disabilities may serve to redirect their life choices and identities, even including acquiring a new vocation as a result of their parenting experiences (Scorgie & Sobsey, 2000). Such changes are similarly noted for identity formation in professional training (Mor Barak & Brekke, 2014). In both cases, a discrete experience (becoming a parent of a child with disabilities or undergoing training in a particular field) leads to clear changes in one's behaviors and values. In other ways as well, volunteer advocates experienced change. Thus, in the professional socialization of graduate students, discussion has focused on both identity formation and on intellectual communities (Mor Barak & Brekke, 2014). Among program graduates as well, successful advocates were, like parents of children with disabilities (Scorgie & Sobsey, 2000), both adopting the "acquired role" of an advocate and discovering a community of like-minded advocates.
They were more involved in the disability field, more likely to advocate in one year's time (either with this or another organization), and more likely to participate with other advocates in disability social media groups. In this sense, then, becoming a long-term, committed advocate involves both personal and interpersonal transformations: it encompasses changes or intensifications in who one is and with whom one associates. --- Implications for Practice This study has several implications for practice. When training volunteers who will use their skills in the community to help other families navigate the special education process, programs should consider trainees' motivations, satisfaction, and role identity. As indicated by our findings, graduates who were more satisfied and had a stronger identity as an advocate were more likely to advocate post-training. To foster such post-training advocacy, the role identity of advocates-in-training should be explicitly developed during training. Specifically, advocacy training programs might develop group membership by referring to trainees based on their cohort (e.g., Spring 2014 VAP trainees), which helps volunteers to internalize the volunteer role as part of a personal identity (Stryker, 1980). A training program might also have trainees sign a list with the names of past graduates at graduation, or provide an advocacy-related memento that is only given to program graduates at training completion. A training model should work directly to foster the development of this advocate role identity, which is considered one of the best predictors of sustained volunteering (Penner & Finkelstein, 1998). For those graduates who did advocate after graduation, greater amounts of sustained advocacy also correlated with more involvement in the wider disability community. This context for role identity demonstrates the importance of a social structure to attach meaning and expectations to identity (Stryker & Burke, 2000).
To build on this part of role identity, training programs might, in the days and weeks after graduation, more explicitly involve trainees in the larger disability network. Training programs might collaborate with existing family support agencies such as Parent Training and Information Centers, Protection and Advocacy agencies, and University Centers for Excellence in Developmental Disabilities. Trainers might also help to engage program graduates in disability social media, support groups, and social networks. By forging relationships with other disability agencies and intentionally involving trainees in the larger disability network, participants may feel more connected to the wider disability community, thereby increasing their post-graduation advocacy activities. Additionally, considering the distinction between family- and school-focused advocacy activities, the trainers who direct such programs might foster in their trainees particular areas of advocacy specialization. For example, certain advocates might specialize in family-focused advocacy activities, learning more about how to work with families on their child's special education needs and mentor families as they gradually become their own self-advocates in the special education system. Trainees might also specialize in more traditionally defined school-based special education advocacy, more intensively learning to communicate with the school and attend IEP meetings on behalf of families. Although our program graduates engaged in both types of advocacy activities, by developing clearer, more intensively trained areas of expertise, volunteer advocates might have even greater impact on both parents and schools. --- Future Research and Limitations Beyond replicating these findings with other programs and types of participants, future studies might examine whether volunteer motivations also exist for other advocacy trainings which do not emphasize volunteerism.
For example, the SEAT program (Wheeler & Marshall, 2008) does not require a volunteer component; it would be interesting to see which motivations affect the post-training advocacy rates of SEAT participants. Additional research is also needed to better understand the motivations of volunteer advocates, especially given that many volunteer advocates are themselves parents or family members of an individual with disabilities. Are volunteer advocates predominantly motivated to help their own child or to support other families, and does this "self vs. other" balance change with program participation and identity development? Also, for participants who are parents of students with disabilities, how do their experiences in special education affect their motivations as advocates or their area of specialization? In addition, although the items of the Role Identity scale were consistent and "hung together" in identical ways as in the general (i.e., non-disability) volunteer literature, the role identity construct may be more complex and varied among volunteer advocates. Findings from this study were also correlational, and directionality of findings cannot be inferred. It is unknown whether advocate role identity developed over time as program graduates engaged in formal and informal advocacy activities, or if certain trainees had previously identified themselves as advocates and felt strongly about this role identity when they began the training. Although we measured changes over time in advocacy activities, we did not measure advocacy role identity prior to training or at program graduation. Longitudinal research can also be used to better understand differences in the characteristics of those program graduates who go on to advocate for many families (i.e., more than 10), compared to those who only advocate for the four families as part of the program requirement. 
If we are able to identify such characteristics of more active advocates, we will better understand how advocacy training programs can recruit certain participants who are likely to be most active in special education advocacy. Through such research we can also learn how to tailor training materials to participants along this continuum of sustained advocacy. Beyond sustained advocacy, future research is also needed to examine the outcomes of the actual volunteer advocacy process for program graduates and families. For those families with whom the trainees engaged in formal, school-focused special education advocacy, what were the outcomes? Were the families satisfied with affective and informational support? What were the procedural outcomes? Many unanswered questions exist regarding potential positive and negative results of using special education advocates to fill the role of attorneys at different points in the dispute-resolution process (e.g., potential issues involving the unauthorized practice of law). The risks and benefits for the advocates themselves, as well as for the child, family, and school district must all be considered. Additional limitations relate to the specifics of this study itself. Our study had a relatively small sample size and moderate response rate (83 of 158 contacted graduates). It is possible that respondents represent a particular subset of graduates from our training.
However, respondents were proportionally distributed across cohorts and similar response rates were demonstrated across sites. In addition, given that only 64% of respondents reported having advocated for any families, which is a requirement of the training, it seems unlikely that this sample represents a selection bias. Further, efforts were made to disseminate surveys to all graduates, including those who lived in more rural areas and did not have access to the internet. Regardless, caution must be exercised in interpreting these results, which may reflect only a particular subset of volunteer advocates. In addition, VAP graduates were primarily female, White, and highly educated, and results cannot be generalized to a more culturally and economically diverse sample. Despite these limitations, this study is one of few to examine the activities of volunteer advocates over time. Given that the need for such formal and informal supports seems unlikely to diminish, we as a field need to develop a better understanding of special education advocacy and how trained advocates perform over time. Families face many challenges in interacting with schools and understanding complicated special education law. At a time when special education policy and practice are only growing more complex, programs need to successfully train advocates in special education law. We need to know more about-and to provide support for-those individuals who, once trained, engage in sustained volunteer advocacy for families of students with disabilities. --- Author Manuscript Goldman et al. | Parents of students with disabilities often receive support from special education advocates, who may be trained through a variety of programs. Using a web-based survey, this study examined the post-graduation advocacy activities of 83 graduates of one such volunteer advocacy training program. 
In the 1-4 years after program graduation, 63.8% (53 of 83) of the graduates advocated for one or more families; these sustained advocates reported stable rates of advocacy over time and performed activities that were either family-focused or school-focused. For graduates who advocated post-training, amounts of advocacy were positively related to satisfaction with advocating and to higher levels of involvement with other advocates and with the broader disability community. Compared to those not advocating after graduating, sustained advocates reported stronger advocacy role identities, increased involvement in disability groups, and a higher likelihood of advocating in the upcoming year. Future research and practice implications are discussed.

---
--- Introduction

Suicide is the second leading cause of death among people ages 10 to 34 and a major crisis among adolescents and young adults (National Center for Injury Prevention and Control, Division of Violence Prevention, 2015). Although the causes of suicide are multifactorial, most cases are linked to psychopathology (Gould & Kramer, 2001), and particularly to depression. As depression also continues to rise among adolescents and young adults (Mojtabai et al., 2016), it is important to develop an understanding of factors that may contribute to, or buffer against, depressive symptoms and/or suicide risk in order to prevent the continued acceleration of these interconnected threats. Social relationships with family and peers have been identified as particularly important categories of risk and protective factors (e.g., Sun & Hui, 2007), but most research has examined these factors as a few independent indicators of risk, rather than as a complex and interactive microsystem. This limits both theoretical understanding and the applicability of findings to improvements in the identification and treatment of at-risk youth. Therefore, the current study seeks to explore unique and complex associations (such as non-linear associations or interactions) between family and peer factors and depressive symptoms and suicide risk in a high-risk residential sample. The microsystemic social environments of adolescents and young adults have a profound effect on psychological development (Vieno et al., 2007). Ecological models (e.g., Bronfenbrenner, 1977) of development and health illustrate the influence of context on individual health and psychosocial well-being. Such models encourage the examination of both proximal and distal factors surrounding a person to understand the interrelatedness of multiple embedded systems of influence (e.g., culture, society, neighborhood, family networks).
Extant research suggests that there are two critical microsystems that are especially important for understanding common but pervasive mental health symptoms (e.g., depressive symptoms, suicide risk) in adolescents: the family and the peer group. The quality of relationships with family and peers are particularly potent factors contributing to risk for depression and suicide (Diamond et al., 2021). Close and trusting relationships with family members and peers build support (Bögels & Brechman-Toussaint, 2006;Drake & Ginsburg, 2012), facilitate coping (Compas et al., 2017), and promote a sense of belonging (Vitaro et al., 2009). The opposite is also true: when adolescents and young adults experience conflict with their families and isolation from peers, this contributes to stress and impacts psychosocial functioning (Orben et al., 2020;Sheeber et al., 2001). Indeed, family and peer factors have been associated not only with the development of clinically-relevant symptoms (Prinstein et al., 2000) but also with treatment trajectories and outcomes (Baker & Hudson, 2013;Rapp et al., 2021). Family factor research has identified several specific and potentially important risk factors that may help prevent or contribute to the development of depressive symptoms and associated suicide risk in adolescents and young adults. First, parental criticism can be a potent risk factor; youth who perceive high levels of parental criticism are at increased risk for depression (Rapp et al., 2021) and suicidal thoughts and behaviors (Campos et al., 2013). Similarly, conflict in the home is robustly associated with depressive symptoms (Rice et al., 2006) and suicide risk (Randell et al., 2006). Furthermore, a lack of perceived parental support is consistently associated with risk for developing depression (Baetens et al., 2015) as well as risk for suicide attempts (Sheftall et al., 2013). 
Finally, youth who perceive less parental monitoring (e.g., not being aware of the youth's whereabouts) may be at greater risk for depression (Yu et al., 2006). Importantly, longitudinal studies suggest that psychological symptoms often follow, not precede, these types of family factors (Cummings et al., 2015). That is, family conflict and criticism may be risk factors (and family support and monitoring protective factors) for later development of psychological symptoms, and these associations are not merely reflecting deterioration in close family relationships following the onset of symptoms. Positive and negative experiences with peers can also influence psychological well-being, including risk for depressive symptoms and suicide. Positive experiences, such as friendships, have small, but consistent, negative associations with depressive symptoms (Schwartz-Mette et al., 2020). Longitudinal research suggests that high-quality friendships may protect against later depression symptoms (Jacobson & Newman, 2016), whereas lack of friendships and feelings of isolation may damage youth psychosocial health (Ueno, 2005;Vitaro et al., 2009). Negative experiences, such as bullying (verbal, physical, and cyberbullying) are also associated with depressive symptoms and suicide risk over time (Brunstein Klomek et al., 2007;Kaltiala-Heino et al., 2009). Notably, it is important to distinguish between different types of bullying, such as physical and cyberbullying, which may have different associations with depressive symptoms (Wang et al., 2011). However, these family and peer factors should not be understood as simply a collection of factors that may be added and subtracted to understand individual risk of depressive symptoms or suicide. First, much of this research has examined family and peer factors separately, without accounting for possible overlap (unique associations). This makes it difficult to determine which factors may be the most important. 
Second, and even more crucially, more complex effects like interactions and non-linear associations have been underexplored. While many studies suggest direct, linear relationships between family and peer factors and youth mental health, others suggest that, in actuality, these mechanisms interact in complex ways (Bradley & Corwyn, 2000;Ciairano et al., 2007). For example, one study found that supportive peer relationships were associated with lower depressive symptoms only under conditions of low family support. These supportive peer relationships were associated with an increase in depressive symptoms for adolescents with a high degree of family support (Barrera & Garrison-Jones, 1992). Interestingly, other studies have found the opposite: supportive peer relationships were only associated with lower depressive symptoms under conditions of low family conflict (Ciairano et al., 2007). Some studies also suggest that family factors may have curvilinear, rather than linear, associations with psychological outcomes. For example, poor family control (e.g., lack of parental monitoring) may be a risk factor for poor adjustment, but higher levels of family control have diminishing returns (Kurdek & Fine, 1994). Therefore, examining factors in isolation and excluding the possibility of non-linear and interactive associations may lead to incorrect conclusions about the role of family and peer factors in depressive symptoms and suicide risk. There is a clear need for a detailed examination of potentially unique, non-linear, and interactive associations between family and peer factors with depressive symptoms and suicide risk. However, for this examination to be meaningful for research, theory, and practice, it is important to account for a few additional considerations. First, studies suggest the effects of family and peer factors on youth mental health may be moderated not only by other social relationship factors but also by demographic factors. 
Gender appears to be a particularly salient moderator in previous research (Kerr et al., 2006;Lewis et al., 2015); depressive symptoms are more common among girls than boys, and differentiated social roles for boys and girls may result in family and peer factors affecting youth differently (Parker & Brotchie, 2010). Second, the majority of the research that examines combined and complex effects of family and peer factors has only considered depressive symptoms, not suicide risk (Hawton et al., 2013). As these outcomes are often linked, it is critical to determine whether these family and peer factors are uniquely related to suicide or primarily through increases in depressive symptoms. Finally, these associations are particularly important to explore in higher-risk clinical populations, due both to the severity of risk and the potential for differences in how family and peer factors are associated with risk in clinical samples, compared to more general youth and young adult samples (e.g., Queen et al., 2013). --- Current Study Taken together, it is important to examine previously-identified family and peer factors (parental monitoring, family support, family conflict, parental criticism, frequency of interactions with friends, verbal bullying, physical bullying, and cyberbullying), with the expectation that they will each be associated with depressive symptoms and/or suicide risk in adolescents and young adults. Furthermore, based on previous research and theoretical understanding, non-linear and interactive effects are anticipated between these factors, which must be understood to draw conclusions about the true effects of these family and peer factors. Based on previous research, interactions are also anticipated with demographic factors, which may shed light on which family and peer factors may be uniquely important for certain demographic groups. 
Finally, given the association between depressive symptoms and suicide risk, at least some associations between family and peer factors with suicide risk are expected to be mediated by depressive symptoms. --- Method --- Participants The final sample consisted of 939 adolescents and young adults ages 10 to 23 years old (M = 15.84, SD = 1.53). Among the 1,550 residential patients who opened the survey, 31 patients were removed from analysis because they completed no survey items, and 567 were removed from analysis because they had missing gender information due to a survey administration error. Participants missing the gender question did not significantly differ on depressive symptoms or suicide risk. The sample was approximately 97.7% white, 99.5% non-Hispanic, 55% female and 45% male (0.1% non-binary). --- Procedure The de-identified data used in this current study comes from a larger, quality improvement project at a privately-owned multisite psychiatric residential treatment center, which provides both outpatient and inpatient care to youth with different and co-occurring conditions (e.g., depression, anxiety, and substance use). Data were collected from 2019 to 2020. Staff administered the assessment battery at the intake meeting using the electronic BH-Works platform (www.mdlogix.com). The assessment takes approximately 15 minutes, and scores are automatically computed and uploaded into patients' electronic medical record system. As part of their research agreement with Drexel University, Newport Institute provides Drexel University with de-identified data for analysis and publication; approval for use of this data for the current study was given by the treatment center, and the Drexel University IRB deemed that this was not research activity that needed IRB approval ("Not Human Subjects Research"). --- Measures All variables were drawn from the Behavioral Health Screen (BHS), a tool developed by Diamond et al. 
(2010) to increase detection of behavioral health problems in medical settings. Questions were derived from the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR; American Psychiatric Association, 1994) criteria and other public domain psychosocial assessment tools. The BHS includes 13 modules assessing demographics, medical, school, family, safety, substance use, sexuality, depressive symptoms, anxiety, nutrition/eating, suicide, psychosis, and traumatic distress. There are 55 core questions with an additional 38 follow-up items (asked if certain core items are endorsed.) The BHS is currently used in 40 medical sites and 500 schools across Pennsylvania and is being rolled out in four other states. Psychometric validation has supported the validity and reliability of the scales (Bevans et al., 2012;Diamond et al., 2010). --- Parental monitoring Participants self-reported how often their parents knew their location on a three-point Likert-type scale ("never," "sometimes," and "often"). --- Family conflict Participants self-reported frequency of arguing in the home on a three-point Likert-type scale ("never," "sometimes," and "often"). --- Family support Participants self-reported frequency of turning to parents or other adult family members for support on a three-point Likert-type scale ("never," "sometimes," and "often"). --- Parental criticism Participants self-reported perceived frequency of parental criticism on a five-point Likert-type scale ranging from 1 ("not critical at all") to 5 ("very critical"). --- Interactions with friends Participants self-reported frequency of interactions with friends on a three-point Likert-type scale ("never," "sometimes," and "often"). --- Verbal bullying Participants self-reported frequency of being victimized by verbal bullying on a three-point Likert-type scale ("never," "sometimes," and "often"). 
--- Physical bullying Participants self-reported frequency of being victimized by physical bullying on a three-point Likert-type scale ("never," "sometimes," and "often"). --- Cyberbullying Participants self-reported frequency of being victimized by cyberbullying on a three-point Likert-type scale ("never," "sometimes," and "often"). --- Depressive symptoms Depressive symptoms were assessed using the BHS depressive symptoms subscale. This measure has shown strong reliability, factor validity, and criterion validity in previous studies (Bevans et al., 2012;Ruan-Iu et al., 2021). Using a three-point Likert-type scale ("never," "sometimes," and "often"), patients rated how often the following five depressive symptoms occurred within the past 2 weeks: consistent feelings of being down, loss of interest in things previously enjoyed, unexplained irritability or anger, loneliness, and feelings of failure. Reliability was good (alpha = 0.83) in this study. Items were averaged to produce a single score for variable selection. In structural equation models, items were treated as indicators of a latent construct. --- Suicide risk Suicide risk was assessed using the BHS current suicide risk subscale. This measure has shown strong reliability, factor validity, and criterion validity in previous studies (Bevans et al., 2012). Patients rated whether they experienced suicidal ideation, made plans to commit suicide, or attempted suicide over the past two weeks (all dichotomous indicators). Reliability was acceptable (alpha = 0.79) in this study. For variable selection, a single dichotomous score was created indicating the presence or absence of any suicidal indicators. In structural equation models, items were treated as indicators of a latent construct. --- Results --- Preliminary Analyses Mahalanobis Distance test detected and removed 13 multivariate outliers (based on combinations of age and symptom scores on depression, anxiety, substance use, and other scale scores). 
Means, standard deviations, and intercorrelations are found in Table 1. Approximately 72.3% of the sample reported current suicide ideation, 47.3% suicide plans, and 42.7% suicide attempts. The average depressive symptom score of 2.14 was just below the previously identified cutoff of 2.20 for "moderate" depressive symptoms (Ruan-Iu et al., 2021); approximately 52.4% of the sample was above this cutoff, and 30.8% of the sample was above the cutoff for "severe" depressive symptoms. Less than 1.3% of participants had missing data on any of the key variables; these participants were included in pairwise analyses wherever possible.

--- Identifying Possible Main, Interactive, and Non-Linear Effects

Sparse interaction models were estimated using hierarchical lasso in the R package hierNet (Tibshirani, 2020). The package hierNet tests all possible two-way interactions and quadratic effects and allows for "weak" or "strong" hierarchy to ensure that only meaningful second-order terms are included (Bien et al., 2013); in strong hierarchy, interaction terms are included in the lasso only if both constituent main effects are selected for the model, whereas in weak hierarchy, interaction terms are allowed if at least one of the main effects is selected. Weak hierarchy was specified, and 10-fold cross-validation was used to select the best value of λ (the regularization parameter, which determines how stringently coefficients are forced to zero) using the "lambda.1se" criterion (e.g., Soehner et al., 2019). A total of 132 possible terms were tested: 11 main effects, 110 two-way interactions, and 11 quadratic effects. Age was continuous. Gender and race were dichotomized, given the predominantly binary-gendered and white sample. Of the predictors, all except race, parental monitoring, family support, and physical bullying were selected as main effects by the lasso procedure for depressive symptoms.
Four interaction terms were selected: Family Support × Gender, Cyberbullying × Gender, Interactions with Friends × Cyberbullying, and Interactions with Friends × Physical Bullying. There was also one quadratic effect, for cyberbullying; this was positive, suggesting that the impact of cyberbullying increased with frequency. As a follow-up analysis, a lasso was tested for current suicide risk; however, this lasso selected only gender. Therefore, nine main effects, one quadratic effect, and four interactions were included in all subsequent analyses.

--- Interactions

The two gender interactions are plotted (without control variables) in Fig. 1 using the R package "sjPlot" (Lüdecke et al., 2021). The Family Support × Gender interaction suggests that, in conditions of low family support, female respondents reported more depressive symptoms than did male respondents. At moderate or high family support, there was no difference in depressive symptoms between male and female respondents. There is a similar finding for cyberbullying; among those never cyberbullied, female respondents have increased depressive symptoms, but there is no difference among those "sometimes" or "often" cyberbullied. The two interactions between bullying and friendship are plotted (without control variables) in Fig. 2. Both suggest that youth "often" spending time with friends are at lower risk for depressive symptoms only if they are never physically or cyberbullied.

--- Associations with Latent Depressive Symptoms and Suicide

A structural equation model was then tested in the R package lavaan (Rosseel, 2012) using the diagonally weighted least squares estimator, wherein latent depressive symptoms were predicted by the identified predictors, and latent suicide risk by the same pool plus depressive symptoms. Good fit was predetermined as CFI ≥ 0.95, SRMR ≤ 0.08, RMSEA ≤ 0.06 (Hu & Bentler, 1999), and scale items were expected to have "good" loadings (above 0.55; Comrey & Lee, 1992).
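These pre-specified cutoffs can be collected into a small helper for checking reported indices against the criteria. This is only a sketch: in the original analysis the indices themselves come from the fitted lavaan model, and the example values passed below are illustrative.

```python
# Minimal check of the Hu & Bentler (1999) goodness-of-fit cutoffs used here:
# CFI >= 0.95, SRMR <= 0.08, RMSEA <= 0.06.
def good_fit(cfi: float, srmr: float, rmsea: float) -> bool:
    """Return True only if all three fit indices satisfy the cutoffs."""
    return cfi >= 0.95 and srmr <= 0.08 and rmsea <= 0.06

# Indices of roughly this magnitude are reported for the study's final models
print(good_fit(cfi=0.997, srmr=0.058, rmsea=0.015))  # True
```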
Indirect effects on suicide through depressive symptoms were also tested; standard errors were computed using 5,000 bootstrap draws. All selected candidate variables were then entered into a single structural equation model, wherein each predicted both depressive symptoms and suicide risk. Overall fit was good, χ²(103) = 124.69, p = 0.072; CFI = 0.997, RMSEA = 0.015, SRMR = 0.058. All items had "good" loadings on their factors; the lowest loading was 0.67. The model explained 29% of the variance in depressive symptoms and 50% of the variance in suicide risk. Standardized estimates are shown in Table 2. Nearly all included predictors were significant, except for cyberbullying (and its associated quadratic effect) and the Friendship × Cyberbullying interaction. Taken together, older age, female gender, family conflict, parental criticism, a lack of interactions with friends, and the experience of verbal bullying all explained unique variance in depressive symptoms. Moreover, as depicted in Figs. 1 and 2, the relationship between depressive symptoms and gender was moderated by family support and cyberbullying, and the relationship between depressive symptoms and lack of friendship interactions by physical bullying. On the other hand, outside of the sizable association between depressive symptoms and current suicide risk, only age and family conflict shared unique (and notably, negative) associations with suicide risk.

--- Mediation by Depressive Symptoms

Indirect effects were also tested, as shown in Table 2, indicating whether associations of family and peer factors with suicide risk were mediated by depressive symptoms. Several variables (including family conflict, parental criticism, interactions with friends, verbal bullying, age, and gender) had significant indirect effects on suicide through depressive symptoms, suggesting possible downstream associations. The final model, dropping nonsignificant paths, is illustrated in Fig. 3.
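The logic of the bootstrap test of indirect effects can be sketched in a deliberately simplified Python version: observed variables and ordinary least squares stand in for the latent-variable lavaan model, the data are simulated, and all variable names and effect sizes are hypothetical.

```python
# Percentile-bootstrap sketch of a product-of-coefficients indirect effect:
# predictor -> mediator (path a) times mediator -> outcome adjusting for the
# predictor (path b). Simulated data only; a stand-in for the latent model.
import numpy as np

rng = np.random.default_rng(1)
n = 800
conflict = rng.normal(size=n)                  # hypothetical predictor
depress = 0.5 * conflict + rng.normal(size=n)  # mediator (true a = 0.5)
suicide = 0.6 * depress + rng.normal(size=n)   # outcome (true b = 0.6)

def indirect_effect(x, m, y):
    """Estimate a*b: slope of m on x, times slope of y on m adjusting for x."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

# Resample cases with replacement to get a percentile confidence interval
# (the study used 5,000 bootstrap draws; 2,000 are used here for speed)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(conflict[idx], depress[idx], suicide[idx]))
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
est = indirect_effect(conflict, depress, suicide)
print(f"indirect effect = {est:.2f}, 95% CI [{lo_ci:.2f}, {hi_ci:.2f}]")
```

An indirect effect is taken as significant when the bootstrap interval excludes zero, which is the decision rule implied by the bootstrap standard errors described above.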
This model also fit well, χ²(108) = 123.59, p = 0.054; CFI = 0.996, RMSEA = 0.016, SRMR = 0.057.

--- Alternate Model Analyses

Additional analyses examined the lasso model for each suicide risk indicator separately (ideation, plans, and attempts), with and without the inclusion of depressive symptoms. This did not select any variables not already included by the depressive symptoms lasso.

--- Discussion

Approaches for understanding and predicting risks for adolescent depressive symptoms and suicide are still evolving. The current study used a multidimensional approach by studying the interconnected nature of family and peer influences on individual health (King et al., 2014). Following variable selection, parental criticism, family conflict, verbal bullying, and interactions with friends, alongside the demographic factors of gender and age, were all found to be uniquely associated with depressive symptoms. The associations of gender and of frequency of interactions with friends were significantly moderated by other family and peer factors (family support and cyberbullying, and physical bullying, respectively). Although only family conflict and age directly predicted suicide risk above and beyond depressive symptoms, indirect associations through depressive symptoms were supported for other variables and should be explored further in longitudinal research. There are several strengths of the current study. First, the study examined a high-risk clinical sample of adolescents and young adults, many of whom reported severe depressive symptoms and suicide risk. Therefore, family and peer factors that emerged as particularly salient in this sample are likely to be relevant for identifying those adolescents and young adults at greatest risk for severe outcomes. Although this is a cross-sectional study, better understanding of these factors may lead to advances in prevention, intervention, and treatment.
Particularly in the era of COVID-19, which has greatly disrupted interpersonal relationships (Orben et al., 2020), the robust association of interactions with friends with depressive symptoms suggests that methods for developing and maintaining these potentially protective relationships are crucial for the psychological health of adolescents and young adults. Finally, the methodology of the current study also follows recent recommendations involving the use of data-driven approaches to examine multiple variables and complex relationships (Franklin et al., 2017). Given the difficulty in predicting suicide and other severe consequences of depressive symptoms, studies that examine multiple interactive risk factors are crucial for advancing understanding of how these relational processes may influence psychological well-being (Hawton et al., 2013;Restifo & Bögels, 2009). However, several limitations of this study should also be noted. First, the sample was highly racially homogeneous. Although race was not selected by the lasso, this may be attributed to low power and the reduced sensitivity of this dichotomous variable. Second, only patient report with single items was used; multi-informant methods could also be used to gain a better understanding of relational processes beyond the patient's own report. Third, this study utilized a cross-sectional approach, and conclusions about directions of effects cannot be supported. Previous longitudinal research suggests that family and peer risk factors often predate mental health symptoms (Cummings et al., 2015;Jacobson & Newman, 2016), but these associations are also likely to be bidirectional. Similarly, indirect effects suggested potential mediating pathways of family and peer factors on suicide risk through depressive symptoms, but these should not be interpreted causally. 
Finally, given the complex, multifactorial causes of suicide (Franklin et al., 2017), it is crucial for future research examining more proximal family and peer factors to include other categories of risk factors, including genetic factors (Levey et al., 2019) and family context (Denney, 2010), which may interact with the microsystemic social environment and depressive symptoms. The current findings have implications for understanding how family factors relate to depressive symptoms in adolescence and young adulthood. First, negative family experiences, including parental criticism and family conflict, emerged as particularly relevant for depressive symptoms. This echoes previous research (Rapp et al., 2021), but further suggests that these effects are unique; that is, independent of factors like family support, these two types of negative family experience appear to pose distinct risks. On the other hand, although the association between conflict and depressive symptoms was in the expected direction, family conflict appeared to share a negative association with suicide risk after accounting for depressive symptoms (i.e., higher conflict was associated with lower risk), suggesting more complex processes worthy of further investigation (e.g., family detachment). Second, positive family factors (parental monitoring and family support) were not robustly associated with depressive symptoms or suicide. This may be due to the clinical severity of the sample, the developmental stage, or the specific indicators. Other studies have found mixed results regarding parental monitoring (Yap et al., 2014), and it is possible that other assessments of parental involvement may be more appropriate for older adolescents or young adults. Family support was measured by inquiring about interactional frequency (i.e., how often youth spoke with adult family members about their concerns).
While measures of interactional frequency might indicate support in normative samples, families of distressed youth may be more likely to fail to respond to support-seeking or to respond negatively (Gambin et al., 2015;Preyde et al., 2011). Therefore, it is important to examine multiple dimensions of family support and cohesion in order to understand how these function among at-risk youth. There are also important implications regarding how peer relationships are associated with depressive symptoms. In this study, interactions with friends and verbal bullying emerged as particularly salient processes for depressive symptoms. The role of peers becomes increasingly more important in adolescence and young adulthood (Magson et al., 2021;Neale et al., 2018), and feelings of acceptance or isolation from peers can be highly consequential for youth mental health. The moderation of interactions with friends by bullying (or vice versa) suggests youth with both frequent interactions with friends, and the absence of bullying, are especially unlikely to endorse depressive symptoms. On the other hand, the benefits of friendships were not moderated by family factors, in contrast to previous research (Barrera & Garrison-Jones, 1992). However, these interactive effects have been less robust in more severe, clinical samples, perhaps due to the greater likelihood and severity of family dysfunction in these populations (Kerr et al., 2006). In the presence of these dysfunctional families, peers may serve a particularly important role in providing support and stability to distressed youth. Finally, only verbal bullying was directly relevant for depressive symptoms; this is somewhat surprising given previous research suggesting that cyberbullying poses a particularly large risk for depression (Wang et al., 2011). 
--- Conclusion

Family and peer factors are known to be associated with youth depressive symptoms and suicide risk, but most studies examine these factors in relative isolation and without accounting for their interdependence. Without acknowledging the context in which family and peer relationship factors emerge, it is difficult to estimate the unique contributions of factors like support and conflict, particularly when the effects are not linear or depend on the levels of another factor. The current study analyzed unique, interactive, and non-linear effects of several peer and family factors associated with depressive symptoms and suicide risk in a high-risk residential sample of adolescents and young adults. Building on previous research, the current results suggest that negative family processes (like conflict and criticism) and verbal bullying are associated with more severe, and interactions with friends with less severe, depressive symptoms. Moreover, the effects of gender were moderated by family support and cyberbullying, and those of interactions with friends by physical bullying, suggesting that examining individual peer and family factors in isolation may produce misleading results. Contrary to expectations, however, few factors were directly associated with suicide risk, but several shared possible indirect pathways through depressive symptoms. These results underscore the difficulty in identifying youth with suicide risk, but also provide directions for advances in identification, research, and treatment. For high-risk adolescents and young adults, negative aspects of the family environment may be likely to outweigh any positives, as distressed youth may receive support primarily from their peers. However, the increased importance of peer relationships also has a dark side; youth with a history of peer victimization may be at high risk of depressive symptoms even when they have frequent interactions with their friends.
In sum, relational factors with implications for depression and suicide do not occur in a vacuum, and it is important to understand this complex microsystem to estimate the true impact of these factors on the psychological well-being of adolescents and young adults.

Funding Funding was provided by the Newport Institute, which has been using the Behavioral Health Screen to evaluate patient outcomes across its entire organization.

Data Sharing and Declaration This manuscript's data will not be deposited.

--- Compliance with Ethical Standards

Conflict of Interest The Behavioral Health Screening tool was developed by GD and colleagues but is owned by Children's Hospital of Philadelphia. They license the tool to Medical Decision Logic, Inc., a health science informatics and computer science engineering company. GD may receive a small royalty payment for his part in developing the tool. ASR and the other coauthors do not report financial interests or potential conflicts of interest.

Ethical Approval The Drexel University IRB deemed this research not requiring IRB approval ("Not Human Subjects Research").

Informed Consent Patient consent for treatment and data collection was obtained by Newport Institute at admission.

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

--- Authors' Contributions

A.S.R. conceived of the study, performed the statistical analysis, and drafted the manuscript; J.R. helped with the design and interpretation of the data; P.W. helped draft the manuscript; L.R. designed measurement, collected data, and helped draft the manuscript; G.D. helped with the design, provided supervision and resources, and helped draft the manuscript. All authors read and approved the final manuscript.

Alannah Shelby Rivers is a post-doctoral researcher at the Center for Family Intervention Science, Drexel University.
Her major research interests include the influence of close relationships on mental and physical health, and psychometrics. Jody Russon is an Assistant Professor of Human Development and Family Science at Virginia Polytechnic Institute and State University. Her major research interests focus on vulnerable youth, particularly LGBTQ + adolescents and young adults. Payne Winston-Lindeboom is a project coordinator at the Center for Family Intervention Science, Drexel University. Her major research interests are in the mental and behavioral health of adolescents and young adults, especially those who have dealt with family issues or trauma. Linda Ruan-Iu is a post-doctoral fellow at the Center for Family Intervention Science at Drexel University. Her major research interests focus on cross-cultural assessment, psychiatric diagnosis and assessment, and suicidal behavior among youth. --- Guy | Close relationships are consequential for youth depressive symptoms and suicide risk, but nuanced research examining intersecting factors is needed to improve identification and intervention. This study examines a clinical, residential sample of 939 adolescents and young adults ages 10 to 23 years old (M = 15.84, SD = 1.53; 97.7% white, 99.5% non-Hispanic, 55% female). The final model found that family conflict, parental criticism, verbal bullying, and interactions with friends were associated with depressive symptoms in the expected directions, and there were significant interactions with family, peer, and demographic variables. However, most associations with suicide risk were indirect. Associations involving family factors, peer factors, depressive symptoms, and suicide are not always straightforward, and should be understood within a microsystemic context. |
Manju Kapur is an acclaimed Indian writer who ranks high among the contemporary women novelists of the twentieth century. As a literary artist, she is a writer with purpose: her novels reveal that she deals with serious issues related to women in Indian society. As a woman, she is conscious of the gender discrimination in society. It is an accepted fact that women have been exploited and treated unequally by men for years. They have been the victims of the so-called traditional patriarchal society. Though India gained freedom in 1947, the condition of women remains unchanged, as they still have to struggle hard to attain an individual identity in society. In spite of making various efforts, they are the sufferers and have to face many challenges to cross the boundary of the conservative patriarchal society, as patriarchal norms do not allow a woman to think and act freely. As a writer, Manju Kapur can be placed in the category of those Indian writers who brought a great transformation to Indian writing, as she deals with the dreams, aspirations, struggles, and problems of women in a realistic manner. As a woman, she believes in feminine power and presents her female characters as bold and strong. Her novels reflect how women pass through a period of transition and break out of the confining four walls of conservative patriarchal society. She is concerned with women's need for self-expression, self-fulfillment, and self-realization. Her protagonists fight for freedom and individuality as they search for their true identity in patriarchal society. Through her female protagonists, she presents the picture of a society where women are aware of gender discrimination, exploitation, and social injustice. Her novels are based on actual incidents; the story of Virmati is based on the story of her own mother. In an interview with Jo Stimpson, she reveals: "I based my first novel on her. I admire her fighting spirit, her generosity, her capacity to endure.
She irritated me when she was alive, but now I see these things more clearly. I think of her every day." (One Minute With: Manju Kapur) She deals with real characters representing contemporary Indian society. Her female characters are assertive and raise their voices against social injustice and patriarchal norms. Her works reveal the feminist struggle against patriarchy, exploitation, social restrictions, the sufferings of women, identity crisis, and so on. Her heroines crave self-identity, self-fulfillment, and self-autonomy. The novel *Difficult Daughters* is about the journey of a woman from tradition to modernity. It is the story of Virmati, an Indian woman craving freedom and self-identity. Through Virmati, the writer presents the concept of the New Indian Woman who has a longing for love, freedom, and individuality. As a thinking woman, she raises the questions: Why is a woman not free to take her own decisions for her life? Why is she not free to live a life of her own choice? Why is she forced to follow patriarchal norms? Why is she restricted by the unjust shackles of conservative traditions? Has she no right to think and express her thoughts freely? Can she not enjoy life as an individual like a man? Is marriage necessary for a girl? Has she no existence without marriage? Manju Kapur is a serious thinker who raises various problems related to women's liberation and their place in society. Simone de Beauvoir says: One is not born, but rather becomes, a woman. No biological, psychological or economic fate determines the figure that the human female presents in society; it is civilization as a whole that produces this creature. (TSS) *Difficult Daughters* is the story of Virmati, who is born into an Arya Samaji Punjabi family in Amritsar. Being the eldest daughter in the family, she bears the pressure of household work and other family responsibilities. She has eleven siblings, and she was sixteen years old when her mother conceived the eleventh child.
Since childhood she remains busy looking after her siblings and helping her mother Kasturi in other household duties. Here Kapur presents the picture of an Indian family where the eldest girl child is supposed to support her mother in performing domestic duties and other household tasks, and such is the case with Virmati, who is expected to do the housework and take on responsibilities for her siblings. The novelist calls Virmati the second mother to her siblings. She has a keen desire to study and to take up a job, but her mother Kasturi, a traditional woman, is of the opinion that a girl should study only enough to read and write, as her basic need is to learn knitting, sewing, cooking, and other household work. As Virmati and Kasturi hold different opinions, the rising conflict between mother and daughter is quite obvious. Virmati dreams of being educated like her cousin Shakuntala, who is a progressive, independent woman. Manju Kapur accepts the fact that women's higher education makes them more confident and ambitious, as educated women emerge more successful and prove their individuality. Shakuntala is a new woman who opposes the family tradition of early marriage. She has done her M.Sc. in Chemistry and teaches science in a college in Lahore. Virmati is much impressed by her dressing sense, her activities, and her modern lifestyle. Through Shakuntala, Manju Kapur presents the image of an empowered woman leading a free life of her own choice, as her outlook is modern. As a new woman, she is assertive, defies patriarchal restrictions, and fights for her rights. She is bold, outspoken, and determined, and she inspires Virmati to study and to look beyond the home to education and freedom, as times have changed. Virmati is so influenced by her that she realizes education is a great weapon for freedom and thinks that being educated is the way to attain freedom and happiness.
The novelist remarks: Shakuntala's visit planted the seeds of aspiration in Virmati. It was possible to be something other than a wife. Images of Shakuntala Pehnji kept floating through her head, Shakuntala Pehnji who, having done her M.Sc. in Chemistry, had gone about tasting the wine of freedom... No, she had to go to Lahore, even if she had to fight her mother, who was so sure that her education was practically over. (DD, 19) Here the novelist presents Virmati as the embodiment of liberation. Virmati wants to study further, and she is even prepared to fight her mother Kasturi, who opposes her idea of continuing her studies. Kasturi believes in the ancient popular Indian myth that a girl is paraya dhan and that it is the destiny of every girl to get married and to follow the family traditions. She does not like the lifestyle of Shakuntala, who defies the family traditions. As a mother, Kasturi bears the responsibility of marrying off five daughters and is worried for them. To her, study does not mean defying or disgracing family tradition, as study helps in the development of the mind for the benefit of the family; so she thinks that Virmati should not think like Shakuntala and says: 'Leave your studies if it is going to make you so bad-tempered with your family. You are forgetting what comes first... What good are Shaku's degrees when she is not settled? Will they look after her when she is old?' demanded Kasturi irritably. 'At your age I was already expecting you, not fighting with my mother.' (DD, 21-22) Here the novelist depicts the true picture of a traditional Indian family that is against the modernization of women. The discussion between Virmati and her mother Kasturi depicts the conflict between traditional and modern outlooks. Shakuntala is the embodiment of modernity, but her ways are not approved of by traditional women like Kasturi, to whom marriage and family are more important than study and freedom. Kasturi is the symbol of tradition and patriarchy.
Vera Alexander remarks: In the juxtaposition of marriage and education, education is either described in terms of a threat, or portrayed as a dead end, reducing accomplished female characters to obedient wifehood and dependency rather than enabling them to make a living out of their training. (REINE) Though Virmati hopes to live a free life like Shakuntala, she feels bound by the orthodox traditional shackles of patriarchy. Here the writer depicts the difference between Shakuntala and Virmati: the former is like a free bird, while the latter is like a caged bird fluttering its wings to break the cage open. Soon the family finds a suitable match for her marriage and forces her to get engaged to a canal engineer, Inderjit. But Virmati does not lose hope and shows great courage in continuing her studies even after her engagement. As a woman of strong will, she struggles hard to continue her education and follows her own way. She joins AS College to do her B.A. and meets Harish, an Oxford-returned married professor. As a mother, Kasturi does not realise her daughter's need for love, as she has no time to understand or share her daughter's feelings and enthusiasm; consequently, Virmati shares her feelings with Harish, the professor, and falls in love with him. At this point, she does not realise that her affair with a married man will be the cause of her sufferings in the future. It is a bold step on her part to refuse to marry Inderjit, thus challenging family custom and patriarchal norms. Here Virmati is the prototype of the liberated woman who thinks of her own happiness and acts against the wishes of her family. Though she finds herself torn between her passion and her duty towards her family, she refuses to marry. She challenges family tradition, as she has a yearning to be loved. Though she realises that she has failed in performing her duty as a responsible sister and has disgraced the family, she rejects the idea of marriage.
P. Sudhashri remarks: ...Virmati, the protagonist, rebels against tradition. Yet she is filled with self-doubt. She pleads for studying further and postponement of her marriage. She attempts suicide when faced with the prospect of marrying the canal engineer. The family brands her 'to be restless, sick, selfish' and locks her up. (P. Sudhashri, 2005) Through the portrayal of Virmati, the novelist presents a woman of unyielding will power who has a zeal to live a free and meaningful life and decides to break her relationship with the professor. She informs the professor that she is going to Lahore for further studies, as she desires to be a teacher like Shakuntala. As an optimist, she has great hopes for her future, so she decides to end her relationship with the professor and burns all the letters he has ever sent her. She is termed the difficult daughter of the family, as she challenges family tradition. The family was against her studies, but they have to yield to her wish, and finally she is sent to Lahore for further education. Here her life takes a positive turn, and she starts a new phase of her life. As a strong woman, she possesses strength of mind and decides to give a new turn to her life. In Lahore she comes under the influence of strong, independent women like Shakuntala and Swarna Lata, who constantly motivate her to take part in social and political movements. She is much impressed by her roommate Swarna Lata, who is an active participant in the freedom struggle movement. Swarna Lata, a new woman, is a clear-headed, committed activist who follows her own ideology and fights for women's liberation and the upliftment of social values. Through the character of Swarna Lata, the novelist depicts an assertive, dynamic, modern woman who emerges as a stout champion of womanhood. As a committed feminist, she is a firm, advanced, and action-oriented young lady.
She asks Virmati to join the demonstration against the Draft Hindu Code Bill: Come and demonstrate with us against the 'Draft Hindu Code Bill' next Saturday outside the railway station. Men don't want family wealth to be divided among women. Say their sisters get dowry, that's their share, and the family structure will be threatened, because sisters and wives will be seen as rivals, instead of dependents who have to be nurtured and protected. As a result women will lose their moral position in society! Imagine! (DD, 251-252) The novelist's feminist concern is quite obvious here, as she supports equal rights for women in the male-dominated patriarchy where men are not ready to accept women as their equals. According to her, it is a matter of great surprise that women, who have intellectual and mental capacities equal to those of men, are regarded as inferior to men in the patriarchal family. Swarna Lata may be called the mouthpiece of the novelist, as through her Kapur expresses her own views regarding the equality of women in the patriarchal family. Virmati is so influenced by her dynamic and advanced lifestyle that she desires to be an intellectual, dynamic personality like her. Swarna Lata attends various political conferences and rallies and wants to do something beyond marriage and family. Her modern outlook is quite obvious in her conversation with Virmati: Marriage is not the only thing in life, Viru. The war, the Satyagraha movement - because of these things, women are coming out of their homes. Taking jobs, going to jail. Wake up from your stale dream. (DD, 151) Though Virmati feels much impressed by her opinions, she is unable to check her passion for the professor, who comes to meet her in Lahore. She falls an easy prey to the professor and gets pregnant. Swarna Lata helps her in aborting the child and motivates her to get involved in the social activities of women's liberation.
Swarna Lata is an advanced, straightforward, and mature thinker who follows her own opinions independently, without any fear or doubt. According to Christopher Rollason: The pages of *Difficult Daughters* speak not only of Virmati, but of other 'difficult daughters', who succeed better than she did in their parallel struggle for independence in their lives. At the centre of the narrative, we are confronted with a woman who fights but falls by the wayside; but at its edges, as no doubt less representative but still symbolic figures, we encounter - as will be seen below - other women, whose relative success points the way to the future. (WOM) As a realist, Manju Kapur has successfully presented the fact that women played an active role in the Satyagraha movement and other social and political movements related to women's rights and the freedom struggle. Virmati notices that women are crossing the threshold of their homes and coming out to be part of social activities. She also attends many conferences and rallies with Swarna Lata and hears many inspiring speeches delivered by strong, intellectual women like Leela Mehta and other women nationalists. She realises that these women are fully devoted to the cause of women's liberation and the independence of the country. She feels an inner conflict and asks herself: Is she an intellectual like these women who are free, strong, and taking part in the freedom struggle? Is she free? The author says: Am I free, thought Virmati. I came here to be free, but I am not like these women. They are using their minds, organizing, participating in conferences, politically active, while my time is spent being in love. Wasting it. Well, not wasting time, no, of course not, but then how come I never have a moment for anything else? (DD, 142) She feels great confusion in her mind as to what to do. There is a conflict between her passion for love and the freedom struggle. She blames the professor for disturbing her life in Lahore.
She curses herself for being an easy prey to Harish. Again she shows marvellous will power in overcoming her passion for Harish and takes up the job of headmistress of a girls' school at Nahan. It is a respectable job, and now she is an independent woman living her life like a free bird, without any problem. Her job makes her economically independent, and her life takes a positive turn, as this is the happiest period of her life. She finds a suitable place to live, away from her family, earning her own money. She leads her life happily, teaching girls at school. As an educated woman, she succeeds in asserting herself and establishes her individual identity in society. Virmati emerges as a bold, self-reliant woman who has a positive vision of life. She adjusts herself to the new surroundings and shows remarkable courage and power to control her life. As an educated woman, she possesses a sense of self-worth and finally succeeds in finding a proper place in society. As a rebel, she challenges social practices, breaks the shackles confining women within the four walls of home, and gets a proper identity of her own. Though she is happy and satisfied with her free life here, she has no desire to live a lonely life. As a woman, she feels the need of a man in her life, as she wants to fill her life with love. Unfortunately, she develops her relationship with the professor again and loses her job when the school authorities come to know about her illicit relationship with him. In spite of losing her job, she has courage and goes to Shantiniketan. She decides to marry the professor and becomes his second wife. But she is still restless, as she feels alienated in the family. His family does not welcome her, and she has to bear insults in his home. Virmati has married to get love, happiness, peace, and security, but it seems that her life is devoid of the desired peace and happiness.
Though she succeeds in getting marital status, she has to bear opposition in the family, and thus her search for identity and a proper place in her in-laws' home restarts. To her, the only comfort is the love of her husband, Harish, who has always wanted an educated companion. She knows well that, being the second wife, she has to bear some opposition in society, as her own family is also against her marriage with the professor. She is sure that her family will never accept her relationship with him, as she has disgraced the family in the past, and her mother and family curse her; yet she is happy with the man of her choice and promises herself a blissful marriage. She accepts her marriage, as her husband is everything to her. Thus she takes a bold step by marrying the married professor. As a bold woman, she succeeds in showing society that women can defy patriarchal dogmatism and conservative taboos and can bring about a revolution in society. Though she has to bear the hatred and curses of Ganga, Harish's first wife, and his mother, she tries her best to adjust in her husband's home. At last she enjoys the company of her husband at his home when the whole family shifts to Kanpur due to the partition riots. Finally she gets the free space for which she has struggled so hard. Yet sometimes she feels guilty, as she has become the cause of Ganga's sufferings. She also realizes that she did not fulfill her responsibility as a daughter and sister and tarnished the good name of her family. Though she is bold enough to overcome the social and traditional barriers, she has to suffer a lot for all this. She conceives and gives birth to a daughter, Ida. N. P. Sharma remarks: Virmati has to fight against the power of the mother as well as the oppressive forces of patriarchy symbolised by the mother figure. The rebel in Virmati might have actually exchanged one kind of slavery for another. But towards the end, she becomes free, free even from the oppressive love of her husband.
Once she succeeds in doing that, she gets her husband all by herself, her child, and reconciliation with her family. (ISNMK) Ida, Virmati's daughter, is also portrayed as a rebel who revolts against social conventions. She is not prepared to become a puppet in the hands of her husband Prabhakar, who was approved by her parents for marriage. She married him to please her parents, as Ida says: "because you thought Prabhakar was so wonderful and I was glad that in the choice of my husband I have pleased you." Prabhakar denies Ida maternity and forces her to have an abortion. As a result, Ida breaks up her marriage, as he does not want a baby from her. As a new woman, she rebels against the deep-rooted family norms of male-dominated society. Ida, the product of the post-independence era, establishes herself as a new, independent woman. She transcends social restrictions and fights for her identity, dignity, and individuality. Again the novelist presents the difficult relationship between mother and daughter, as the very first line of the novel, "The one thing I had wanted was not to be like my mother," depicts the complicated relationship of Ida and her mother Virmati. Ida says: When I grew up I was very careful to tailor my needs to what I knew I could get. That is my female inheritance. That is what she tried to give me. Adjust, compromise, adapt. Assertion, though difficult to establish, is easy to remember. (DD, 236) Regarding the mother-daughter conflict, Manju Kapur herself asserts that "conflict between mother and daughter is inevitable and I suppose I was a difficult daughter. The conflict carries on through generations because mothers want their daughters to be safe. We want them to make right choices - right in the sense that they are socially acceptable. My mother wanted me to be happily married; I want my daughter to have good jobs." (Bala and Chandra 107) Ida is a difficult daughter to Virmati, as the latter was to her mother Kasturi.
Ida rebels against Virmati and follows her own way. Ida refused to show any signs of intellectual brightness. 'There are other things in life,' she told her mother. 'Like what?' asked Virmati. "Like living, you mean living only for yourself. You are disappointing your father. Why is it so important to please him?... I grew up struggling to be the model daughter. Pressure: pressure to perform day and night." She is not ready to bear the pressure any more and decides to break her loveless marriage with Prabhakar. She is introduced to the reader as a middle-aged divorcee who visits Amritsar and Lahore and meets her mother's relatives to learn about her mother's painful past. She wants to understand her mother Virmati's life. She relates to her mother when she comes to know that Virmati too had an abortion. Ida feels miserable, as neither her husband nor her abortion was chosen by her. The book connects mother and daughter, for the two were not so different. She experiences a strong bond with her mother, as she says, "without her I am lost, I look for ways to connect" (3). Ida is a strong, independent woman who takes a bold step by freeing herself from a hollow relationship. On the other hand, Virmati, also an educated woman with her own individuality, challenges patriarchal norms but fails to show courage in matters of love and cannot think beyond her husband and marriage. Ida, however, is brave enough to end her relationship with Prabhakar, as he had forced her to abort the fetus. Ida's conscious decision shows her strength of mind and heart. Thus it becomes clear that Manju Kapur is a committed writer who has firm faith in female strength. As a feminist writer, she has successfully presented the concept of the New Woman and her struggle for freedom in patriarchal society.
Her female characters Ida, Shakuntala, Virmati, and Swarna Lata are assertive, self-reliant, progressive women who show remarkable will power and transcend age-old social restrictions. Through their portrayal the novelist depicts female desires, ambitions, and expectations. They are high-spirited women who fight to be free from stale social restrictions and attain freedom. They are aware of gender discrimination, women's liberation, and women's empowerment; they raise their voices against social injustice and gender inequality and triumph over them by establishing their identity. As new women, they participate in social and political movements for the freedom struggle. They are aware, strong-willed, self-reliant beings with faith in the inner strength of womanhood. --- Works cited
Introduction Since 2000, international branch campuses (IBCs) have grown to be a unique feature of the global higher education (HE) system (Wilkins, 2020). Despite this, Altbach and de Wit (2020) have suggested that many IBCs struggle to provide education in receiving countries comparable to that of their home institutions due to differing sociopolitical and economic environments. Moreover, complicated geopolitical environments can make running IBCs unsustainable. In early 2015, Altbach identified several unsustainable aspects of IBCs, such as inferior education quality resulting from high turnover rates among foreign faculty, limited curricula and infrastructure, the difficulty of sustaining quality applicant pools, and competition with local institutions. Recent research has additionally raised ethical concerns regarding IBCs, such as the building of Western IBCs in the Global South as a neocolonial practice (Siltaoja et al., 2019; Xu, 2021). In line with this strand of literature, the current study applies colonial discourse analysis to explain how Whiteness and colonial patterns embedded in IBCs continue to cause harm to local and global communities. IBCs often employ the discourse of internationalization to distinguish themselves from local institutions and to attract prospective students, especially wealthy ones. Buckner and Stein (2020) have argued that, although higher education institutions around the world are engaging in internationalization, they often lack a clear understanding of it. Specifically, IBCs often reproduce the imaginary of Whiteness as futurity (Shahjahan & Edwards, 2021) by positioning themselves as providers of world-class education in Global South contexts and presenting Western knowledges and experiences as "international." In the past few years, China has surpassed the United Arab Emirates to become the top host country of IBCs (Escriva-Beltran et al., 2019).
To understand this phenomenon, it is important to understand how the concept of internationalization has been mobilized by Chinese HE, and by IBCs in particular. In this article, I look at how IBCs in China define and promote internationalization in HE, how Whiteness is reproduced through the discourse of world-class education, and how Whiteness as futurity is reflected and reinforced in the development and operation of IBCs. I employ colonial discourse analysis to conduct a case study that analyzes publicly available branding materials on the Wenzhou-Kean University website, and I draw on Shahjahan and Edwards' (2021) framework of Whiteness as futurity to understand how Whiteness is mobilized and reproduced through representations that uphold Western supremacy. --- Internationalization and Chinese higher education In addition to the trend of globalization and many HE sectors' efforts at internationalization, the establishment and growth of IBCs around the world is the result of several overlapping factors, including reductions in public funding for HE from local and national governments in the West. These reductions have driven universities to instead seek international profit via IBCs (Altbach & Knight, 2007; Belanger et al., 2002; Stein et al., 2019; Zha, 2003). Importantly, some have argued that there are neocolonial attitudes embedded in the expansion of IBCs by Western countries (Siltaoja et al., 2019; Xu, 2021). The welcoming of IBCs by both local governments and students in some Global South nations is arguably an indication of the "coloniality of power" (Quijano, 2007). In other words, many people in non-Western contexts also believe that Western knowledges are more valid. This colonial imaginary, however, validates Western subjects at the expense of other knowledges and peoples and demonstrates how Western ideals have spread to non-Western contexts.
According to Buckner (2019), who has argued "the benefits of internationalization are localized" (p. 333), internationalization can mean different things in different national contexts. Although internationalization is arguably a contested term and has multiple meanings, Knight's (2003) definition has been widely cited. According to Knight (2003), internationalization is defined "as the process of integrating an international, intercultural, or global dimension into the purpose, functions or delivery of postsecondary education" (p. 2). Nevertheless, de Wit (2014) argued that "internationalisation in higher education is at a turning point and the concept of internationalization requires an update" (p. 97). Therefore, de Wit and Hunter (2015) modified the definition of internationalization as "the intentional process of integrating an international, intercultural, or global dimension into the purpose, functions and delivery of post-secondary education, in order to enhance the quality of education and research for all students and staff, and to make a meaningful contribution to society" (p. 3). Some, however, have questioned the ethics of internationalization. Stein (2016), for instance, has argued that one of the most significant ethical challenges of internationalization is that it reproduces colonial patterns of knowledge and Eurocentrism in a broader, global context and that the existing global system is inherently violent and unsustainable. Stein and da Silva (2020) and Buckner and Stein (2020) have also highlighted the importance of revising the hegemonic assumptions embedded in internationalization to instead promote the possibilities in different ways of knowing and being. Internationalization is also understood by universities and policymakers according to their national contexts and unique economic and political conditions. 
For example, following the reform and opening-up policy of the late 1970s, China began seeking opportunities for international cooperation in HE (Chen & Huang, 2013). These efforts eventually resulted in the establishment of IBCs, a form of transnational education that has rapidly expanded in the last 20 years (Li, 2020). Since then, much has been written on the internationalization of Chinese universities (e.g., Yang, 2014; Zha et al., 2019; Chen & Huang, 2013). Some have referred to it as a form of Westernization and argued for the de-Westernization of internationalization instruments, such as the requirement of English proficiency (see Guo et al., 2021). Others have written on students' experiences, particularly those with IBCs (see Li, 2020; Wilkins et al., 2012). For example, Li (2020) found that Chinese students considered four major factors when choosing IBCs, namely, "personal reasons," "institution image," "program evaluation," and "city effect" (p. 337). Scholars have also studied IBC models and strategies (see Becker, 2010; Girdzijauskaite & Radzeviciene, 2014; Verbik, 2007; Wilkins & Huisman, 2011; Wilkins & Huisman, 2012; Yang et al., 2020). Yang et al. (2020), for example, identified how differences in Asian and Western educational cultures create gaps in expectations between instructors and students and suggested practical changes for narrowing these gaps. Yet, despite these scholarly advancements, critical analyses of the ethics of IBCs remain notably under-explored (recent exceptions include Shahjahan & Edwards, 2022; Siltaoja et al., 2019; Xu, 2021). For instance, Xu (2021) has argued that the internationalization of HE in China amounts to the Westernization of Chinese institutions through the hiring of faculty with Western backgrounds, the adoption of Eurocentric pedagogies, the use of English as the medium of instruction, and the privileging of scholarship published in English-language journals.
All of these efforts are perceived as ways to boost positions in global university rankings (GURs) and to become "world-class" universities. In this article, I build on this emerging foundation of scholarship on the ethics of IBCs by applying a Whiteness as futurity framework to analyze the colonial discourse of a particular IBC in China. In doing so, I extend extant critiques of the narrative that IBCs bring world-class and international HE to China through the examination of Western IBCs, a particular zone wherein Whiteness and coloniality are reinforced and reproduced in Chinese society in ways that diminish non-Western peoples and knowledges. --- Theoretical framework: Whiteness as futurity In the context of this paper, the utility of the Whiteness as futurity framework necessitates a critical understanding of how global imaginaries have positioned Western HE and IBCs as desired products in the global HE market. Marginson (2011) has identified three key imaginaries of global HE: global capitalism, competitions for status and hierarchy, and networks and partnerships among global universities. Stein and Andreotti (2016) have argued these imaginaries are embedded in Western supremacy, as Western HE is dominant among top-ranked global universities and leads global partnerships in global HE sectors. GURs, however, are not objective; instead, they orient the world into a stratified order (Brankovic, 2022). Together, these imaginaries exert and reinforce a stratified order of nations that places Western HE at the top of the global HE hierarchy. Western HE is therefore considered a superior and more desirable product in the global market than non-Western HE (Stein & Andreotti, 2016). These assumptions especially perpetuate the coloniality of power in receiving countries in the Global South, where Western HE, by way of IBCs, has become a highly sought-after commodity (Xu, 2021).
In such cases, IBCs are understood as useful tools for governments to grow the capacities of their HE systems and brand their nations and cities as hubs of global education. IBCs realize this via the building of partnerships between local universities and Western universities that ostensibly increase receiving countries' competitiveness through the provision of opportunities for students to pursue Western degrees desired in the global market without having to leave their home countries (Marginson, 2011). Building on Ahmed's (2007) conception of Whiteness as a phenomenon that orients bodies in directions that privilege White subjects, Shahjahan and Edwards (2021) developed the Whiteness as futurity framework to examine how the power of Whiteness works to "colonize (or orient) global subjects' (nation-states', policy makers', institutions', and individuals') imaginaries and reinforce the asymmetrical movements, networks, and untethered economies underpinning global HE" (p. 2). Specifically, Whiteness as futurity comprises three interwoven pathways: Whiteness as aspiration, Whiteness as investment, and Whiteness as malleability. Whiteness as aspiration suggests White nations manipulate global imaginaries in terms of what counts as the future of HE and what Others should aspire to. Whiteness as investment, which is evoked by Whiteness as aspiration, indicates that the superstructure of Whiteness compels non-White nations to invest in Whiteness to gain social and material benefits or otherwise face harm. For instance, White and English-language credentials are considered preferable and more competitive in national and global labor markets (Shahjahan & Edwards, 2021). Finally, Whiteness as malleability suggests Whiteness and its privileges are reachable.
The authors argue this particular feature of Whiteness is what makes Whiteness as futurity possible, as it claims "non-White bodies and spaces can symbolically and materially project and gain advantages of Whiteness" (Shahjahan & Edwards, 2022, p. 3). For instance, a student from a non-White nation can seemingly enjoy certain privileges of Whiteness after obtaining a degree from a top-ranked university in a White nation. In such cases, students might exhibit Whiteness as aspiration and invest in it to Whiten themselves (i.e., gain privileges associated with Whiteness). In summary, the three pathways of Whiteness as futurity interact with one another and colonize the international HE imaginary by determining for all what "world-class" educations, scholars, and students should look like and know. Whiteness as futurity is appropriate for this study because IBCs were primarily established in non-White nations based on the assumption that Western education and Whiteness are and should be desirable in non-Western contexts. As a result, these assumptions have been inextricably nested in the promotion of internationalization and "world-class education." By drawing on the three pathways of Whiteness as futurity as a guide, this study unpacks how discourses of internationalization and "world-class education" are used in the branding materials of Western IBCs in China and how these discourses reinforce the supremacy of Western education and the desire for the "state of knowing and being" of Whiteness (Shahjahan & Edwards, 2021, p. 2). --- A case study: Wenzhou-Kean University This paper focuses on Wenzhou-Kean University (WKU), an IBC of Kean University in New Jersey. Kean University is a public comprehensive university that claims to be "the only American public university to offer a full campus in China" via WKU (Kean University, n.d.).
WKU's website defines the university as "a Chinese-American jointly established higher education institution with independent legal person status and limited liabilities" and "a province-state friendship project between the Zhejiang Province and New Jersey in the United States" (Wenzhou-Kean University, n.d.). Unlike other IBCs in China, which are substantially supported by funds from private Chinese enterprises, WKU was initiated and supported by local and provincial governments. The current president of China, Xi Jinping, visited Kean University in New Jersey in 2006, while serving as the Secretary of the Chinese Communist Party in Zhejiang Province, to deliver a keynote speech at WKU's Signing Ceremony (Wenzhou-Kean University, n.d.). WKU was approved by the Chinese Ministry of Education in 2011 and officially established in 2014 (Wenzhou-Kean University, n.d.). In addition, WKU gained support from local and provincial governments to become an internationalized and world-class university (Wenzhou-Kean University, n.d.). Today, WKU comprises four colleges: the College of Business and Public Management, the College of Architecture and Design, the College of Liberal Arts, and the College of Science and Technology. It offers 17 undergraduate programs, 8 master's programs, and 3 doctoral programs. The university imports educational resources from the USA and recruits faculty globally (Wenzhou-Kean University, n.d.). Most of the courses taught at WKU are provided by its home institution, Kean University, and English is the medium of instruction (Wenzhou-Kean University, n.d.). The WKU website highlights that 68% of graduates of the Class of 2019 chose to attend graduate schools and that 43% were admitted to top-50 universities according to the QS World University Rankings (Wenzhou-Kean University, n.d.).
Unlike other IBCs in China, such as New York University Shanghai, Duke Kunshan University, and the University of Nottingham Ningbo China, WKU is unique in that it is likely the only IBC with independent legal person status in China whose home institution is not perceived as a well-known, prestigious university in the West in terms of its GURs. According to US News & World Report (2022), Kean University tied for number 126 in Regional Universities North. Unlike research-oriented universities that are highly positioned in GURs, "Regional Universities focus on providing undergraduate education and only offer a limited number of graduate programs" (US News & World Report, n.d.). I chose to examine a branch campus in China whose home institution is not highly ranked in the West to demonstrate the power of Whiteness as futurity in colonizing the international HE landscape in China. --- Methodology Based on studies that have argued Western IBCs in the Global South reproduce neocolonial attitudes and practices (Siltaoja et al., 2019; Xu, 2021), the current study uses colonial discourse analysis to analyze how colonialism and Whiteness drive the discourses of internationalization and world-class education in WKU's online branding materials. I draw on Said's (1979) argument, itself inspired by Foucault's (1975) assertion that history, knowledge, and power are intertwined, that "the Orient" is a myth created by the West to espouse and evidence its own superiority. Young (2004) has similarly suggested "colonial discourse analysis... forms the point of questioning of Western knowledge's categories and assumptions" (p. 43). I employ this methodology to locate common features of colonial discourse on WKU's website, such as rhetoric that: (1) "[operates] as a productive force"; (2) "[reproduces] sedimented social relations and practices"; and (3) "provid[es] opportunities for...disruption and resignification" (Stein, 2018, p. 466).
My analysis is specifically centered on WKU's online branding materials for admission and graduates. In the "About us" section of the website, there is a "Publish list" through which branding brochures are available for download. I selected five documents to review in total: the admission brochure "University Brochure" and the graduate brochures "Proud 2018," "Proud 2019," "Proud 2020," and "Proud 2021." These brochures were selected because they are publicly available online and have English versions. More importantly, the documents' range (i.e., from admission to graduation) gives a general indication of how WKU represents itself using the colonial discourse of world-class education. The admissions brochure provides particular insight into the types of students WKU aspires to recruit and the kind of education it promotes itself as providing, while the "proud graduates" brochures serve as products that illustrate the success of Western IBCs and the types of students the Western knowledge economy, via WKU, values and considers "excellent." Taken together, these five documents discursively illustrate how colonial discourse undergirds WKU's perceptions of what counts as a "world-class education," its values as a Sino-US joint institution, and internationalization. To code the data, I first looked at how the WKU branding materials represent and define "world-class" education and how they articulate Western education as something Chinese students should aspire to over Chinese HE. In reviewing the University Brochure, I paid particularly close attention to what WKU highlights as features of American-style education. For the Proud Graduates brochures, I looked at how colonial criteria are used to define "proud graduates" by examining who WKU presented as "proud graduates" and what work these students did during college that WKU considered international. I then drew on colonial discourse analysis to examine what was absent in these brochures.
To do so, I looked at what and who was not selected for inclusion in the brochures and what types of education and experience were invalidated in WKU's version of an international setting. I particularly focused on whether aspects related to Chinese/local knowledge, curriculum, experiences, and faculty were discussed in the promotion of WKU and its graduates. Seeing what was absent allowed me to unpack what WKU excludes from its definitions of "world-class education," "internationalization," and "proud graduates," as well as question the hegemonic and colonial assumptions underlying Western supremacy. Finally, I used the three pathways of Whiteness as futurity as a guide to discuss how notions of colonialism and orientalism are embedded in the discourses of internationalization and world-class education in these five documents. I specifically looked at how the discourses of internationalization and world-class education in these documents aligned with the tenets of Whiteness as futurity and positioned Western HE, students at IBCs, and international experiences as superior to Chinese HE, students in non-Western universities, and local experiences. This analysis focused on the following three research questions: (1) How do the branding materials of WKU define internationalization and world-class education? (2) How does WKU's representation of "proud graduates" in its branding materials implicate or not the three pathways of Whiteness as futurity and colonialism? (3) To what extent is Chinese education or local knowledge and experience acknowledged or ignored in WKU's branding materials? --- Findings --- University Brochure The University Brochure (2017) is one of the university's most important branding publications because it concisely represents the core values WKU presents to the public, especially prospective students and their parents.
Drawing on colonial discourse analysis, I found WKU primarily defines and advertises "world-class education" and "international education" in terms of Western educational resources, Western-style teaching and learning (including faculty and textbooks), English learning environments, and White credentials (University Brochure, 2017). Importantly, I also found the courses WKU offers at its Chinese Curricula Center, such as Chinese culture and history (i.e., required courses for Mainland Chinese students), are missing from its primary branding materials (Chinese Curricula Center, n.d.). As for visual representations, although WKU suggests "100% of the faculty are recruited globally" (Wenzhou-Kean University, n.d.), most of the images depict White faculty teaching Chinese students, while representations of non-White faculty remained notably absent (University Brochure, 2017). "World-class education" is the main theme of the University Brochure. On the first page, the term is used to describe WKU's educational offerings. The sentence "a city of the world, a university of the future" appears on the second page, representing the city of Wenzhou as an international city and WKU as an international university leading Chinese and global HE into the future (University Brochure, 2017). The brochure suggests that the main reason WKU brands itself as a provider of world-class education is that it offers Chinese students the opportunity to access US HE without having to leave China: "[WKU] brings advanced educational resources from the U.S. and implements American-style educational methodology in an all-English teaching environment to provide students with access to world-class education right here in China" (University Brochure, 2017, p. 3). WKU additionally positions itself as a bridge for Chinese students to get US credentials and study in the West.
For instance, the brochure indicates students can obtain bachelor's degrees from both WKU and Kean University and that they can attend exchange and graduate programs at Kean University. --- Proud graduates My analysis of the proud graduates brochures centered on examining who WKU defined as "proud graduates" and what work they had done during college that the university considered international, as well as who and what types of experiences were excluded. I found these brochures highlighted certain criteria of proud graduates: graduate school application results, overseas experiences, internship and research experiences, and English skills. In the following sections, I show how these criteria align with the school's definition of world-class education in the University Brochure. --- Graduate school admissions First and foremost, graduates admitted to top-ranked universities are who WKU primarily represents as "proud graduates." According to the descriptions in the brochures, only eight of the 43 "proud graduates" featured decided not to pursue a graduate degree right after graduation, but some did note they planned to apply to graduate schools in the future (Proud, 2018; Proud, 2019; Proud, 2020; Proud, 2021). The other 35 "proud graduates" were admitted to top-ranked universities in the USA, the U.K., Australia, Hong Kong, and other IBCs in China (Proud, 2018; Proud, 2019; Proud, 2020; Proud, 2021). Analysis revealed an extensive use of GURs to describe the universities and programs the proud graduates were admitted to. See the list below for some examples: Although WKU's home institution, Kean University, is not a top-ranked university in the USA, having graduates admitted to highly ranked universities, particularly in the West, was an important, recurring indicator WKU used to evaluate the success of its curriculum.
As the list above demonstrates, the university relies heavily on describing "proud graduates" in terms of university rankings, such as GURs and subject rankings. Indeed, all the universities these graduates were admitted to are top ranked, but it is worth noting that the brochures did not use a consistent global university ranking. Rather, rankings in which the universities performed well were selectively chosen to depict the school's graduates' achievements according to Western standards of education. Moreover, given that the brochures adopted GURs to evaluate the graduates' accomplishments and that WKU itself is a Sino-US cooperative university, students intending to study at US institutions were predominantly featured among the university's proud graduates. --- Overseas experiences Overseas experiences, including exchange semesters at Kean University in the USA, volunteering abroad, and attending international conferences and activities, were highly valued in the proud graduates brochures (Proud, 2018; Proud, 2019; Proud, 2020; Proud, 2021). Among the graduates from the featured classes (i.e., the Classes of 2018 to 2021), many participated in non-academic international conferences to broaden their horizons and gain leadership skills. These activities predominantly took place in foreign countries and were fee-paying programs. For instance, one student participated as a representative in the "6th University Scholars Leadership Symposium in Hong Kong" (Proud, 2018, p. 4); another student had "a practice opportunity in the United Nations International Maritime Organization in London" (Proud, 2019, p. 19); another student had an "APEC experience [which] allow[ed] her not only to make many new friends, but also to increase her knowledge and broaden her horizons" (Proud, 2019, p.
19); and one student "participated in the 24th United Nations Climate Change Conference as an NGO observer, and took part in the press conference of her NGO as a Chinese youth representative" (Proud, 2020, p. 8). Volunteer experiences outside China were also a key feature of the proud graduates. Some went to economically developed nations to experience cultural exchange, like the student who "spent two months in South Korea volunteer teaching" (Proud, 2018, p. 4) and another who "spent 48 hours on a work exchange" in Australia (Proud, 2018, p. 12). Others went to less economically developed countries to spread "world-class education" in the form of English language and Western teaching. For instance, one student "went to Thailand to support the local education, and her job was to teach local children English, and help these children broaden their horizons of the world" (Proud, 2019, p. 8), while others went to Indonesia and Sri Lanka to provide other types of "educational aid" (Proud, 2020, pp. 5-6). Participation in an exchange semester at Kean University in the USA was another major feature of WKU's proud graduates. WKU represented these exchange experiences as highly appreciated by graduates, who noted studying at Kean provided them with opportunities to "[meet] enthusiastic and friendly Americans, enjoyed a comfortable life, and experienced the world's top education resources" (Proud, 2018, p. 5). WKU also included that some students liked the experience so much that they took courses that would not satisfy WKU's minor requirements (Proud, 2018, p. 7). WKU's representations of its proud graduates also emphasized overseas experiences as significant assets for applying to graduate schools. The university implied these experiences would enrich students' resumes and facilitate their successful admission into top-ranked graduate schools. Nevertheless, the costliness of these overseas programs marginalized students without the funds to participate.
Those who only volunteered in China and attended local activities were thus likely to be excluded from the proud graduates designation. --- Internship and research Professionally, WKU's proud graduates actively participated in internships, and some had a variety of internship experiences and publications. For instance, one student "started doing internships during the winter and summer holidays of her first year, and her resume includes 4 separate internships" (Proud, 2018, p. 9). Similarly, a business graduate "worked as an intern in the loan departments of both ICBC (Industrial and Commercial Bank of China) and BEA (Bank of East Asia)." Academically, many of the graduates participated in research activities with faculty members. Some presented their work at academic conferences and published papers in international journals. For instance, one student "has three publications... and she is determined to be a Ph.D. in the future" (Proud, 2018, p. 6). In addition, some students worked with faculty on research projects and "brought their research achievement to Kansas City to participate in the IEEE (Institute of Electrical and Electronics Engineers) conference" (Proud, 2018, p. 6). Another student "actively participated in scientific research and academic exchanges. His research result has not only been displayed on the exhibition platform of WKU Student Research Day but also in the IBSS conference held at Waseda University" (Proud, 2019, p. 6). One of the graduates even "published four papers at international conferences" and believed his research output to be "the key to his final success" (Proud, 2020, p. 7). Like the overseas experiences in the previous section, most of the graduates' internship and research experiences in the brochures were selected because they are, to some extent, considered "international."
One student explained that engaging in research activities, such as presenting at international conferences and publishing in English-language journals, was key to his final success, and that a main reason students participated in research and internships was to apply to top-ranked graduate schools. Featured experiences like these show how WKU sees itself as preparing students with the capacities required to attend top-ranked graduate schools in the West. In WKU's branding materials, this heavy focus on graduate schools' admission criteria means that interning at foreign companies, attending academic conferences abroad, and publishing in English-language journals are presented as valid professional and academic experiences, while local internships and research activities are excluded from qualifying as "proud." --- English skills English skills are especially important in IBCs, where English is the medium of instruction. WKU's brochures presented many proud graduates' experiences learning English and becoming fluent second-language speakers. For instance, one student was described as successful in terms of the English skills he gained: "immersing himself in this English environment every day, his English has been greatly improved to the point that he scored 7.5 on IELTS" (Proud, 2019, p. 8). Similarly, to improve her English, another student "chose one of the [toughest] English teachers in her freshman year to force herself to strengthen her English" (Proud, 2019, p. 8). WKU's representations of the importance of English in its brochures align with a larger trend in IBCs that equates speaking fluent English with more opportunities. This sentiment holds that English proficiency helps students get good grades in class, obtain career opportunities, and succeed in graduate school applications. For example, a student who became an IELTS teacher was chosen to speak at WKU's commencement as a prime example of an outstanding graduate (Proud, 2018, p. 12).
In sum, although the proud graduates featured in these brochures graduated in different years, and although WKU states the institution is dedicated to the principle of "providing students with different ways of development" (Wenzhou-Kean University, n.d.), the graduates and WKU's "different" ways of development are actually quite similar. For instance, the school presents such methods of development as admission to top-ranked graduate schools, active participation in internships and research, overseas experiences (e.g., exchange and volunteer programs), and speaking fluent English. In the following section, I engage in further analysis of how these merits, alongside the features of WKU's definition of a "world-class education," adhere to the three pathways of Whiteness as futurity. --- Discussion --- Whiteness as aspiration Recall that Whiteness as aspiration has been defined as the manipulation of global imaginaries regarding what counts as the future of HE and what others (i.e., non-White, non-Westerners) should aspire to (Shahjahan & Edwards, 2021). I argue that WKU exhibits this form of colonial discourse in its definition of "world-class education," which primarily refers to Western education models and excludes the university's course offerings at its Chinese Curricula Center. WKU's branding materials thereby elevate Western knowledge as "advanced" in relation to other knowledges and as something others (i.e., non-White, non-Westerners) should aspire to if they want to be successful. By positioning Western knowledge and Western IBCs as world-class and advanced, these materials simultaneously suggest other universities, knowledges, and languages are not sources of world-class education and are therefore inferior. Equating world-class education with Western education is a manifestation of Whiteness as aspiration.
This is apparent in how IBC educations are usually dominated by Western epistemologies taught in English by international faculty, as well as in how the language in IBCs' branding materials often describes Western knowledge, foreign faculty, and the English language as things students in non-White, non-Western contexts should aspire to. Whiteness as aspiration is also apparent in the depictions of speaking fluent English, acquiring Western knowledge, and holding White credentials as the universal qualifications for success in the global labor market. Yet, this phenomenon is not limited to IBCs; to promote "internationalization," many non-IBC Chinese universities have begun offering bilingual courses in English (Zha et al., 2019), importing Western curricula, and recruiting international faculty (Lin, 2019). Not only does Whiteness as aspiration invalidate Chinese knowledge, it also invalidates Chinese people and culture. At WKU's signing ceremony, a leader of the Zhejiang Province claimed, "Wenzhou people are wealthy in material but in need of educational opportunities; especially higher education is less developed in Wenzhou compared to other cities in Zhejiang Province. But you [Kean University] just come in time and provide the education that Wenzhou people have been longing for" (University Brochure, 2017, p. 4). This quote suggests that, although the Wenzhou people are wealthy, they are undereducated by the Western-driven standards of internationalization. Positioning Wenzhou people as such, regardless of whether there is truth in it, justifies the establishment of WKU and cements its necessity in providing Wenzhou people with advanced educational aid from the West. This claim indicates a "hierarchy of knowledge" (Jain, 2013), as it purports that those who seek formal, colonial educations are considered educated, while those who seek non-Western educations are not as educated.
Interestingly, Wenzhou is well-known for being "a regional center of global capitalism" because of "the rapid growth of many small and medium-sized family-owned manufacturing enterprises" (Cao, 2008, p. 63). The region's success has been encapsulated in what is known as the Wenzhou model of economic development (Parris, 2017). Yet, by Western standards, Wenzhou's economic success is not valid because its methods are not taught at formal, Westernized institutions. In this way, Western education can be seen as colonizing what counts as quality education as well as who is considered well-educated and why. WKU also deploys Whiteness as aspiration by defining internationalization in terms of language proficiency and foregrounding students who strive to speak fluent English and enter top-ranked graduate schools in Western nations. Yet, many WKU graduates are regularly admitted to Chinese graduate schools and choose to work for local companies or government.
" (Jain, 2013), as it purports that those who seek formal, colonial educations are considered educated, while those who seek non-Western educations are not as educated. Interestingly, Wenzhou is well-known for being "a regional center of global capitalism" because of "the rapid growth of many small and medium-sized family-owned manufacturing enterprises" (Cao, 2008, p. 63). The region's success has been encapsulated in what is known as the Wenzhou model of economic development (Parris, 2017). Yet, by Western standards, Wenzhou's economic success is not valid because its methods are not taught at formal, Westernized institutions. In this way, Western education can be seen as colonizing what counts as quality education as well as who is considered well-educated and why. WKU also deploys Whiteness as aspiration by defining internationalization in terms of language proficiency and foregrounding students who strive to speak fluent English and enter top-ranked graduate schools in Western nations. For instance, WKU graduates are regularly admitted to Chinese graduate schools and choose to work for local companies or government. Excluding them from the university's definition of proud graduates seems to suggest they are less impressive, less successful, and less educated because they have not prioritized the speaking of fluent English and have not been admitted to a top-ranked university in the West. --- Whiteness as investment Recall that Whiteness as investment is a result of Whiteness as aspiration in that it compels non-White nations to invest in Whiteness to gain social and material benefits (Shahjahan & Edwards, 2021). Thus, by virtue of the existence of Whiteness as aspiration in WKU's branding materials, Whiteness as investment is also present. This is especially apparent in the cost of IBCs, which are much higher than local Chinese universities. 
According to the latest Wenzhou-Kean University Undergraduate Recruitment Information (2021), WKU's tuition fee is 65,000 Chinese yuan yearly, which is about 10 times higher than that of other Chinese universities. However, WKU's tuition is lower compared to the tuitions of other IBCs; for instance, the University of Nottingham Ningbo China charges 100,000 Chinese yuan yearly (Tuition fees and finance, n.d.), NYU Shanghai charges 200,000 yuan for first- and second-year students (Cost of Attendance, n.d.), and Duke Kunshan University charges 200,000 Chinese yuan yearly (Tuition and Cost of Attendance, n.d.). In addition, other fees at IBCs (e.g., foreign textbooks and housing) are also much higher than at other Chinese universities. Such high costs reinforce the belief that attending an IBC is an educational investment for some Chinese families, which in turn reinforces investment in Whiteness to achieve success in the global labor market. As China has the largest population in the world, the massification of HE in China has made it the largest HE system in the world. Importantly, however, this system does not serve the masses; this is especially true of IBCs, which are known for being exclusive (Shan & Guo, 2014). The small scope of IBCs in China, the requirement of English language proficiency, and high tuition fees ensure these institutions can only serve students from the upper-middle class and beyond. They are extremely exclusive and thereby considered an "elite" education that only upper-middle- and upper-class families can afford. Being taught Western curricula in English by foreign faculty at Western IBCs can be a great investment for many Chinese students.
Through Whiteness as aspiration, Western institutions have colonized the global market and education, meaning those who possess Western credentials often have an easier time entering international companies and top-ranked graduate schools in the West because they are likely fluent in English, come with recommendations from foreign faculty, and have US transcripts that do not need to be translated and coursework that complies with Western standards. These factors help them stand out among their Chinese peers in Chinese universities who have not made such investments in Whiteness. Given this, I contend that the proud graduates in WKU's brochures also see their attendance of WKU as an investment that makes Whiteness and its privileges reachable. Many acknowledged WKU's role in helping them successfully submit graduate school applications and pursue competitive careers. For instance, a proud graduate from the Class of 2019 said, "I benefited enormously from the American-style interactive environment and active classroom participation, which forced me to step out of my comfort zone and enhanced my English communication skills" (Proud, 2019, p. 19). Another from the Class of 2020 mentioned, "faculty here at WKU was international, and the curriculum was international, as well as the instruction. At WKU, the small-size classroom, all-English teaching environment, group cooperation, and other teaching methods are a great benefit to his study" (Proud, 2020, p. 4). Coupled with WKU's definition of a world-class education, these testimonies show that, although students at WKU pay much higher tuition fees compared to those in local Chinese universities, many see this investment (i.e., an investment in Whiteness) as worthwhile because they equate it with a greater chance of finding success in a global society colonized by Whiteness as futurity.
--- Whiteness as malleability Finally, recall that Whiteness as malleability has been defined as a mode of thought that holds Whiteness and its privileges are reachable. It manifests in Chinese students' assumption that, by attending IBCs, they do not need to attend Western institutions in person to obtain White credentials. My analysis of WKU's branding materials shows the school sells students a degree they can obtain in the comfort of their home country that they feel equates to a Kean University degree in the USA. In other words, they feel a degree like this from a US institution of HE can facilitate their graduate school applications and privilege their educational background. In this way, IBCs in China are seen as vehicles for people from upper-middle- to upper-class families to join the game of Whiteness as futurity, which further perpetuates the devaluation of Chinese educations while elevating the value of Western educations in Chinese and global HE landscapes. It is important to note here that the WKU branding materials portrayed its graduates as recognizing the significant role of the university in helping them find success in graduate school applications and job hunting. What is absent from these accounts, however, is an acknowledgement of the privileged backgrounds that enable them and their families to support their investments in Whiteness and its privileges. The resources provided at WKU (e.g., English teaching environments, Western curricula, global faculty, and White credentials) that have helped them get into Western nations to study are one factor, but another is their families' capital. This capital is what allows them to pursue expensive master's degrees in the West and participate in international activities outside of China. It is no mistake that these are the kinds of families and students WKU values and targets for recruitment, while students who do not possess such capital are relatively marginalized by the institution.
The lack of such capital hinders these students' abilities to achieve Whiteness and its attendant privileges, and leaves them underrepresented in their institution and after graduation. Overall, even though IBCs in China grant degrees that are ostensibly equivalent to those of the IBCs' home institutions, there is still a colonial hierarchy at work in the global education market that ranks Western degrees as "superior" to all others, even those from Western IBCs. Thus, without converting White credentials from IBCs into credentials from Western institutions in Western nations, degrees from IBCs are not necessarily as competitive as credentials obtained directly from the West. It is a combination of credentials from Western IBCs and family capital that ultimately makes Whiteness reachable for certain IBC graduates. --- Conclusions --- Summary of findings In this paper, I analyzed Western IBCs in China via an investigation into how the Wenzhou-Kean University (WKU) international branch campus defines its provision of "world-class education" and "international education" in terms of Whiteness as futurity. Using an anticolonial lens, I examined WKU's online branding materials and found WKU defines a "world-class education" as the importing of educational resources from the West for teaching Western curricula and knowledge in non-Western nations. Such teaching employs English as the medium of instruction, relies largely on foreign faculty, and grants Western credentials. WKU thus defines academic excellence by those who were admitted to top-ranked graduate schools, those who actively participated in overseas programs, and those who speak fluent English. Overall, the present study illuminates how Whiteness is reproduced by a particular IBC in China whose home institution is in the West and is not top ranked. By depicting Western education as world-class, WKU suggests Western universities are, by nature, superior to local Chinese universities, regardless of GURs.
Obtaining Western credentials through IBCs can thus "Whiten" Chinese students by giving them access to more privileges in a global knowledge economy dominated by Western ideals. However, the recent COVID-19 pandemic, travel restrictions, and mounting tensions between the USA and China have posed significant challenges to IBCs in China, particularly cooperative Sino-US institutions. Nevertheless, given the durability of the Whiteness as futurity imaginary and the ongoing assumption that IBCs serve as agencies that make Whiteness reachable for non-White non-Westerners, it is likely IBCs will continue to be welcomed by students and parents in non-White, non-Western nations. --- Critiques and implications In this section, I engage critiques of internationalization, global citizenship, and GURs to explore implications for future practice and ways of reimagining Western IBCs in China and beyond. First, many characteristics of world-class education and internationalization depicted by IBCs have already been criticized. Knight (2014), for instance, has argued internationalization should build on and respect local contexts. However, in many IBCs including WKU, local contexts are often subjugated to the dominance of Western modes of thought. An example of this is the exclusive use of Western textbooks and the prioritization of hiring foreign faculty. De Wit (2011, 2014) has also pointed out that "Internationalisation is teaching in the English language" and "Internationalisation is studying abroad" are two of the most common misconceptions of internationalization. Yet, as evidenced in this study, these misconceptions continue to be extensively represented features of IBCs. WKU has further identified overseas experiences, especially overseas volunteer programs through which students teach English in economically underdeveloped countries, as examples of WKU graduates acting as global citizens.
However, these programs operate on the assumption that China and other Global South nations require benevolent educational aid from the West and that Western-trained "global citizens" are needed to deliver this aid to nations in the Global South (Jefferess, 2008). Programs aimed at helping "unfortunate Others" do not necessarily help the people of these regions and in many cases inflict and perpetuate harm. WKU's representation of ideal graduates is based on student participation in such activities, which reproduce problematic colonial imaginaries and often inflict harm on non-White, non-Western subjects. Moreover, my analysis in this paper demonstrates that IBCs use GURs as measures to evaluate their graduates, but GURs are not neutral and are arguably problematic. Stack (2016, 2020), for instance, has suggested improving GURs does not improve equity and inclusion. Rather, GURs are seen as a way of incentivizing competition among institutions via the "geopolitics of knowledge" that "naturalize inequality as necessary for the development of society and human knowledge" (Stack & Mazawi, 2021, p. 226). Shahjahan and Edwards (2021) have additionally argued that GURs privilege White Western institutions and orient universities around the globe to conform to the norms of predominantly White institutions. Although IBCs do not participate in GURs themselves, they compete with one another to admit the most graduates to top-ranked graduate schools. The more graduates are admitted to top-ranked universities, especially those in the West, the more successful the IBC. In this sense, IBCs have become agencies and gatekeepers for reproducing top-ranked universities and the imaginaries that center GURs as a primary way of measuring academic excellence in non-Western contexts.
Based on the results of this study, it is highly recommended that IBCs reconsider their role in perpetuating colonial conceptions of internationalization and move beyond mimicking and elevating the supremacy of Western institutions. IBCs might instead propose curricula that equitably fuse Chinese and Western epistemologies without elevating one over the other. This study also illuminated how Western educational practices can potentially harm non-White subjects. To denaturalize the colonial assumptions embedded in global HE, it is therefore necessary for IBCs to value and equitably incorporate Indigenous and local knowledges if they are to truly engage students in reimagining what world-class education and global talent can look like. --- Limitations and future research This study has certain limitations in its design that should be carefully considered in future scholarship. As a case study that exclusively examined web-based branding materials, some findings of this study might not be generalizable to other IBCs, particularly those outside of China. If possible, future studies exploring IBCs and internationalization should consider interviewing stakeholders at IBCs in addition to discursively analyzing university webpages, social media pages, and visual elements on campuses. Since neocolonialism is embedded in Western HE and in the discourses of internationalization that uphold its supremacy, researchers studying IBCs might follow Suspitsyna's (2021) suggestion of applying approaches that decenter Whiteness and promote more equitable, inclusive futures for global HE, and might further examine how IBCs can contribute to this process. --- Declarations Conflict of interest The author declares there is no conflict of interest. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
--- Introduction Extreme weather events cause damage, disruption and loss of life across the world. As the climate changes, extreme events like floods and heatwaves are likely to become more frequent and intense (IPCC 2014). To adapt, society needs to better understand how the climate might change in the future, together with the associated risks. By showing how temperatures may change or rainfall patterns shift over the next century, climate information can help inform adaptation planning and decision-making. National climate scenarios have taken up this challenge. They paint a picture of how the future climate may change for a country, on the basis of a set of greenhouse gas emission pathways. As Hulme and Dessai (2008) explain, scenarios have a long and varied history, originating in military strategy and planning in the 1950s and expanded by the energy industry in the 1970s before becoming a common tool for decision-making in government. National climate scenarios, as a result, have become influential decision support tools for adaptation in the UK (Jenkins et al. 2009), Switzerland (CH2011 2011), Germany (DWD 2012), South Africa (DEA 2013), Ireland (Gleeson et al. 2013), the Netherlands (KNMI 2014a), the USA (Melillo et al. 2014) and Australia (CSIRO and Bureau of Meteorology 2015), amongst others. Yet, climate information often remains unused because it is seen to be too complex, not sufficiently relevant or unusable. To narrow this 'usability gap' (Lemos et al. 2012), scholars have focused their attention on how to bring scientists and users together to deliberately co-produce climate information (Meadow et al. 2015; Dilling and Lemos 2011). If scientists understand what climate information is needed and, in turn, users understand what scientists can provide, delivering relevant and usable science could face fewer barriers, it is argued (Lemos and Rood 2010). How this should be done is unclear, however.
Tangney and Howes (2016) have shown that the credibility, legitimacy and saliency of climate information are viewed differently from one country to the next. Different political cultures and scientific values affect how climate information is produced and the extent to which users are involved (Hanger et al. 2013; Beck 2012; Jasanoff 2005; Shackley 2001). This is because, in part, the way science is publicly acknowledged, circulated and legitimised in each country reflects its own 'civic epistemology' (Jasanoff 2005), that is, the process by which countries 'assess the rationality and robustness of claims that seek to order their lives' (ibid). While greater scientist-user interactions should be encouraged, those advocating co-production need to be aware of the existing social and political cultures they are intervening in. If not handled carefully, efforts to co-produce climate knowledge may amplify the voice of some at the expense of others with different needs (Klenk and Meehan 2015). The relationship between the state and science differs from country to country. In the UK, scientific expertise and political authority are separated to deliver objective and rational knowledge to support pragmatic empiricist policy-making (Mahony and Hulme 2016; Tangney 2016; Rothstein et al. 2013; Jasanoff 2005). Yet, this same expertise is often funded by UK government departments with their own agendas (Tangney 2016; Steynor et al. 2012). Other countries have very different set-ups. Neither Switzerland nor the Netherlands has a majority government. Decisions have to be consensual; otherwise, nothing proceeds. Inclusion of the political, scientific, public and private minorities is common. It has been argued that compromises can be found more easily through a closed nature of inclusion and a lack of transparency in how decisions are made, as actors are able to negotiate (and concede) without public scrutiny (Hermann et al. 2016; Andeweg and Irwin 2005).
Differences between Dutch and Swiss political cultures do exist, though. In the Netherlands, the policy-making process is more participatory in that it includes political elites, interest groups and individual citizens (Andeweg and Irwin 2005; van der Brugge et al. 2005). In Switzerland, by contrast, different representatives from politics, public administrations and interest groups mediate policies between themselves, with the Swiss electorate called on to decide issues in referendums if a consensus cannot be reached (Hermann et al. 2016). In this paper, we seek to understand why climate scenarios are produced differently from country to country by examining the social and scientific values that shape them. To do this, we focus on the experiences of suppliers of climate information, namely the scientists and advisors responsible for delivering climate scenarios, whose voices are critical yet too often silent in co-productionist studies (Cvitanovic et al. 2015). We performed a comparative analysis of three countries (the Netherlands, Switzerland and the UK) which share a number of similarities in modelling capacities yet chose to design their climate scenarios in very different ways. After explaining our methods and data, we compare the modelling approaches, institutional arrangements and climate information provided in each country. We then investigate the different motivations for producing climate scenarios, before we turn to the different scientist-user interactions. To close, we develop a typology to explain the differences in how and why the climate scenarios took the particular shape they did. --- Data and methods To understand how climate scenarios are produced and, importantly, why they differ from one country to another, we adopted a case study approach to examine the recent efforts of climate scientists in the Netherlands, Switzerland and the UK. We chose these case studies because they share a number of similarities and differences.
Each country has a history of developing climate scenarios, enjoys well-funded climate programmes and makes use of state-of-the-art computing facilities and expertise, yet each differs in the modelling approaches taken and the degree to which users were involved. To examine these case studies in greater depth, we brought together the findings from two methods. First, we conducted a desk-based search to identify documents (e.g. briefing reports, technical summaries, guidance notes) relating to the release of each set of climate scenarios. These documents provide a public record as to why modelling decisions were taken, how users participated in the process and the reasoning behind different presentational styles in each country. A total of 37 documents were imported into MAXQDA (a qualitative coding software) and analysed (n = 12, KNMI'14; n = 13, CH2011; n = 12, UKCP09). We then manually coded the documents to identify emergent themes on a range of topics, from the treatment of uncertainty to the involvement of users and lessons learnt. Second, we conducted semi-structured interviews (n = 10) with climate scientists and advisors responsible for delivering the Dutch and Swiss climate scenarios during 2015/2016. We supplemented this data with five interviews performed with actors involved in the UK's climate scenarios in mid-2013 (Porter and Dessai 2017). Whenever possible, interviews were held face-to-face in participants' offices or via Skype. We adopted a conversational approach, which allowed people to express their views and experiences on aspects of the production process not covered in the official documentation we analysed. To that end, we asked: Why are climate scenarios needed? Who was involved in the production process, and what role did they play? And, to what extent were users involved, and what did they contribute?
All the interviews were digitally recorded (with consent) and transcribed using an intelligent verbatim transcription approach, omitting filler words and hesitations (Hadley 2015). Once the transcripts were imported into MAXQDA, we manually coded the responses to identify emergent themes, including modelling decisions, user engagement and institutional relationships. To introduce greater rigour to our findings, we triangulated the codes from both datasets to understand where the greatest agreements, or disagreements, existed. --- Context: how do the British, Dutch and Swiss climate scenarios compare? Despite only a few years separating the release of the British, Dutch and Swiss climate scenarios, they differ in a number of ways (see Table 1). Briefly introducing each of the climate scenarios below, we highlight how these differences concern not only the way climate change was assessed, or the actors involved, but also how each country presents climate information. --- UK's climate scenarios: the UKCP09 land scenarios After 7 years of work, the UK Met Office Hadley Centre released the world's first set of probabilistic climate scenarios, the UKCP09 land scenarios, in 2009. This modelling endeavour was largely driven by the Met Office Hadley Centre, whilst the UK Climate Impacts Programme (UKCIP) managed the user engagement. Funded by the UK Government, the climate scenarios serve as an 'input to the difficult choices that planners and other decision-makers will need to make, in sectors such as transport, healthcare, water resources, and coastal defences' by giving users the freedom to choose the scale, time period and thresholds corresponding to their risk tolerance and appetite (Jenkins et al. 2009). A major focus for UKCP09's climate scenarios was its effort to account for the inevitable uncertainty around future climate change.
Probability distribution functions (PDFs) are provided to indicate the plausible range of climate change under a particular emission scenario, with an expression of how strongly different outcomes are supported by different lines of evidence (e.g. climate science, observations and expert judgement) (see Fig. 1; Jenkins et al. 2009). For instance, users can assess the likelihood that temperatures will increase by more than 3 °C in London in the 2080s relative to the 1961-1990 base period. A large number of climate simulations were run to capture structural model uncertainties, accounting for different climate models' ability to replicate key aspects of current and future climate change. To do this, a perturbed physics ensemble with the Met Office Hadley Centre's own climate model was combined with a multimodel ensemble from other modelling centres through a novel and complex (yet, as a consequence, somewhat contentious) Bayesian approach that used a statistical climate model emulator (see Frigg et al. 2015; Parker 2010). The climate scenarios are given at a resolution of 25 km² over land or as averages for administrative regions and river basins. Confidence varies within the data, however: it is highest at the continental scale and lowest at the local scale, which interests users most (Porter and Dessai 2016). Users can choose from seven time periods, with overlapping 30-year windows spanning 2010 to 2099. Users, in turn, are also encouraged to work with all three emission scenarios (high, medium and low) to learn the full extent of possible changes (Jenkins et al. 2009). The climate scenarios are available free of charge via three formats: (1) key findings (headline messages, maps and graphs), (2) published materials (reports, guidance and case studies for various sectors) and (3) customisable outputs (raw data via the user interface website) (Steynor et al. 2012).
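The kind of probabilistic query described above, such as asking how likely a warming of more than 3 °C is under a given emission scenario, can be sketched with a toy example. The sketch below assumes a normal distribution with invented parameters purely for illustration; these are not actual UKCP09 outputs, which derive from the Bayesian ensemble approach rather than a simple parametric fit.

```python
import math

def normal_cdf(x, mean, sd):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def prob_exceeding(threshold_c, mean_change_c, sd_c):
    """Probability that the projected change exceeds a user-chosen threshold."""
    return 1.0 - normal_cdf(threshold_c, mean_change_c, sd_c)

# Hypothetical PDF for 2080s summer temperature change under a medium
# emission scenario: mean +3.5 C, standard deviation 1.2 C (assumed values,
# NOT taken from UKCP09).
p = prob_exceeding(3.0, 3.5, 1.2)
print(f"P(warming > 3 C) = {p:.2f}")
```

The point of such a query is that the user, not the provider, picks the threshold and reads off a likelihood matched to their own risk tolerance, which is precisely the freedom UKCP09 aimed to give decision-makers.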
After the launch, there were updates to the climate scenarios; for instance, spatially coherent projections were provided so that users could combine results from grid boxes to create spatial information. --- Switzerland's climate scenarios: CH2011 Switzerland does not have its own global climate model, but ETH contributed to the regional climate modelling COSMO-CLM community project. This means CH2011's 'model data have been provided by several international projects' instead (CH2011 2011). Climate simulations from the ENSEMBLES project (van der Linden and Mitchell 2009), as well as studies and assessments from the Intergovernmental Panel on Climate Change (IPCC), were used. New, but importantly peer-reviewed, statistical methods were used to generate multi-model ensemble estimates of changes and associated uncertainties. Probability statements as in the IPCC (i.e. likely indicating at least two in three chances of the value falling in the given range), but no PDFs, are assigned to temperature and precipitation only, under three emission scenarios (two non-intervention and one climate stabilisation) to give users an indication of the likely direction of change (e.g. summer rainfall likely to decrease by 6-23% for 2060 in the western part of Switzerland in the A2 scenario) (CH2011 2011). The climate scenarios were aggregated spatially into three broad regions with much of the Alps excluded, as its topographical complexity raised concerns over how to reliably interpret the model results (CH2011 2011). Projected changes over the twenty-first century are broken into three time periods (2020-2049, 2045-2074 and 2070-2099) and are available as seasonal and daily ranges. The CH2011 climate scenarios can be accessed freely for research, education and commercial purposes, by visiting the website and downloading the individual datasets (e.g. regional scenarios at daily resolution) or by requesting the published reports for the main findings.
Following the release of CH2011, two extensions were published, providing annual averages, climate scenarios for the Alpine region and station-scale daily data for all three emission scenarios. --- Netherlands' climate scenarios: KNMI'14 The Royal Netherlands Meteorological Institute (KNMI) issued the country's most recent climate scenarios in 2014: KNMI'14. Funded by the government, the climate scenarios 'will be used [by decision makers] to map the impacts of climate change... [and] evaluate the importance and the urgency of climate adaptation measures' for building coastal defences, healthcare, city planning and nature conservation (KNMI 2014b). A defining feature of KNMI'14 is the use of four scenarios to visualise how the future climate may change around 2050 and 2085 (see Fig. 1). Each scenario differs in terms of the amount of global warming (moderate or warm) and possible changes in air circulation (low or high). Around 2085 (2071-2100), under the GL scenario (low air circulation change, low global temperature rise), annual mean temperature is projected to be 1.3 °C warmer than the reference period whereas, under the WH scenario (high air circulation change, high global temperature rise), it could be 3.7 °C warmer. To obtain a range (e.g. for summer daily maximum extremes), KNMI'14 provides the currently observed natural variability onto which users can superimpose the future climate change signal to derive future upper and lower bounds. These scenarios show a single spatial scale: the whole of the Netherlands. This is because 'any attempt to make climate predictions at a relatively small spatial scale such as the Netherlands or even Western Europe for multiple decades ahead cannot be expected to lead to skilful results' (KNMI 2014b). Eight initial-state perturbed climate simulations with the community global climate model EC-Earth (co-supported by the Dutch) and their own regional climate model RACMO2 were performed.
These were then supplemented with a multimodel ensemble from the Coupled Model Intercomparison Project Phase 5 (CMIP5) (WCRP 2010). Users are able to access the KNMI'14 climate scenarios free of charge by downloading the published reports or requesting the dataset directly from KNMI. After KNMI'14 was published, an inconsistency in the WL scenario for 2085 was found, which prompted KNMI to issue a rectified version in late 2015. --- Key differences between the British, Swiss and Dutch climate scenarios We found four key differences in how the British, Dutch and Swiss scientists approached the production and dissemination of their climate scenarios. Simply put, these differences include (i) modelling capacities, (ii) treatment and communication of uncertainty, (iii) the actors involved and (iv) access to the data. First, whereas the British and Dutch have their own climate models, the Swiss rely on utilising the modelling efforts of others. In turn, the British climate scenarios took a more computationally demanding and complex modelling approach than their counterparts. Second, this gave rise to the British incorporating the structural model error explicitly, with the help of a Bayesian statistical model. This inclusion of model uncertainties broadens the spread of model simulations, which they communicated as PDFs for each emission scenario. In theory, the PDFs incorporate the expert judgment needed to interpret the information correctly. The Swiss and Dutch followed the IPCC approach, whereby interpretation and use of model results need expert judgement. But they did so differently. The Swiss used Bayesian statistics to estimate PDFs but communicated only a lower, medium and upper value as representative plausible outcomes for each emission scenario. Because of the Netherlands' high vulnerability to coastal flooding and the profound implications for most national activities, changes in wind direction were judged to be an additional key uncertainty.
Coastal defences, among other adaptation options, need to incorporate both increased storm surges due to wind as well as sea-level rise due to emission scenarios. To incorporate this, the Dutch assessed and communicated their uncertainties along these two dimensions, providing single figures for each of the four storylines. Third, the Dutch kept the entire modelling and user engagement within a single organisation, KNMI, whilst the British and Swiss included various, institutionally distinct and physically distant, actors for these tasks. For instance, the CH2011 community comprised multiple institutions, with some scientists asked to represent the views of multiple actors (and users) simultaneously. Lastly, although the British provide users with all the output data and guidance on potential limitations, the Dutch and Swiss restricted what information users received. The Swiss withheld parts of the data relating to the Alps due to their topographical complexity and the Dutch aggregated the data into two driving variables, air circulation change and temperature. These different epistemological preferences affect the reasoning behind how climate scenarios are done in the first place.
--- Informing climate adaptation and mitigation decision-making All interviewed climate scientists agreed that their country needed its own set of climate scenarios because decision-makers are primarily 'interested in their local patch' (UKCP09 scientist 5) and because weather patterns differ from one place to another (KNMI'14 scientist 1). The IPCC assessment reports and their regional climate scenario chapter (Christensen et al. 2007) are simply 'too coarse' to inform local or sector-based adaptation decision-making (CH2011 scientist 2). A growing user base, with evolving requirements, has also led to 'many requests for additional information and guidance' such as the inclusion of more climate variables, extreme weather events and regional details that larger-scale climate scenarios cannot provide (KNMI 2014b). Servicing the informational needs of these users is a major purpose of climate scenarios. All the scientists shared this conviction and went to great lengths to stress how they wanted their work not only to be 'useful' to decision-makers but also, importantly, 'used' by them (CH2011 scientist 4). National policies added further support for use-inspired science. The UK and the Netherlands have enacted legislation requiring climate scenarios to inform national-scale policy-making as well as local-scale decision-making in public and private organisations. Only in Switzerland have climate scenarios emerged without a governmental mandate (only to be officially approved prior to publication) (CH2011 scientist 2). Yet, in each case, efforts to co-produce climate scenarios have been skewed in favour of scientists who retained power over 'what these scenarios look like' or 'when to provide these scenarios' (KNMI'14 advisor 1). Another key purpose of climate scenarios for KNMI scientists was to initiate a 'paradigm shift' in how users think (KNMI 2014b).
Moving away from responses based on experiences of 'past climatic events', users should instead anticipate 'possible future conditions' for decisions today (KNMI 2014b). UKCP09 scientists also felt that climate scenarios helped reaffirm the different roles and responsibilities of those involved in adaptation decision-making: It's not the climate scientist's responsibility to provide a golden number [for users] and accept that risk [for it]. Because [scientists] can only provide what is the best science at the time, and make all the uncertainties available before saying 'Okay, this is our best estimate, so take from that what you can'. And then it's over to users as to how they use it (UKCP09 advisor 1). Some users may, however, struggle with this epistemological position. Users may become frustrated or confused if they identify and manage their risks differently to how the climate scenarios have prescribed them, especially if they prefer to work with single figures rather than a range (or PDFs). As Porter and Dessai (2017) argue, UKCP09 scientists often see users as miniature versions of themselves ('mini-mes') who struggle to understand why anyone would not want to use probabilistic information, which, for them, represents the best science available. This can lead to tensions when users who 'rely on a definitive answer being provided for them' fail to receive one (UKCP09 advisor 1). By contrast, KNMI'14 scientists felt one of the main purposes of climate scenarios was to engage as many people, from different backgrounds with different interests, as possible, and to actively avoid giving users multiple, perhaps conflicting, outputs (KNMI'14 scientist 2). For each variable, users were given only a single figure (average) for each of the four scenarios. That is, for a variable of interest, users must compare four averages (one for each scenario) in order to see whether there are differences or trends between the four scenarios, and how large they are.
This, it is argued, was less likely to be misinterpreted or to cause confusion (KNMI 2014b). --- Advancing scientific knowledge One, if not the main, driver for developing each set of climate scenarios was the opportunity to advance scientific knowledge. However, the three groups of scientists interpreted their intellectual contribution differently. For instance, KNMI'14 and CH2011 aimed to improve and consolidate the range of scientific information used in decision-making for their respective countries (CH2011 scientist 4), whereas the UKCP09 climate scenarios sought to develop a 'new method for quantifying uncertainty' with international reach too (UKCP09 scientist 2). Newly developed methods, improved computing power and recently released model runs (e.g. CMIP5), alongside the availability of new observation datasets, were all cited as reasons for producing climate scenarios. For KNMI'14 scientists, advances in climate modelling opened up a new dialogue with users, including water managers and health specialists, over 'what could or couldn't be done', so that users helped prioritise the scientific work (KNMI'14 scientist 2). It also allowed KNMI'14 scientists to test whether the predecessor, KNMI'06, underestimated the impact of air circulation patterns on temperature rise (KNMI 2014b). Interestingly, KNMI'14 scientists were 'a little disappointed with the final result [due to] the similarity of the outcomes' between KNMI'06 and KNMI'14 (KNMI'14 scientist 1). Whilst KNMI'14 scientists reiterated their primary goal to improve the usability and use of the climate scenarios, the satisfaction derived from being the first to discover some scientific novelty is still important. Researchers' desire to advance scientific knowledge about climate and explore new ways of thinking about climate decisions (probabilities), it seems, can conflict with the more pragmatic needs of users (i.e. highly robust information presented in familiar ways) to enable effective adaptation planning.
It is therefore unclear whether co-production will help to resolve these tensions or exacerbate them further as those involved in supplying and demanding climate information become more frustrated with each other. For CH2011 scientists, the need to advance scientific understanding via a new set of climate scenarios was expressed differently. Already serving as IPCC lead authors but lacking the modelling resources enjoyed by other countries (Brönnimann et al. 2014), the CH2011 climate scenarios strengthened old and encouraged new collaborations between Swiss research institutions (CH2011 advisor 1). It brought researchers and (scientific) users 'to one table' where everyone could discuss how the modelling should be done (CH2011 scientist 4). 'There wasn't always a consensus within the group' because the complex topography of the Swiss Alps presents challenges for modelling. But, by 'bringing together the different institutions', the Swiss climate science community was able to speak with 'one voice' for the first time and created the momentum to fund future climate scenarios, as well as the political support to establish the Swiss National Centre for Climate Services (CH2011 scientist 4). UKCP09 scientists differ from their KNMI'14 and CH2011 counterparts in how they understood and, in turn, acted upon the need to both advance scientific knowledge and inform adaptation decision-making. For KNMI'14 and CH2011 scientists, the two objectives can sometimes be incompatible, whereas UKCP09 scientists felt that they went hand-in-hand. UKCP09 scientists assumed that if users want to make 'reliable, robust, and relevant' decisions, 'they need the best science' available (UKCP09 scientist 3). Better science, it seems, equals better decisions (see Porter and Dessai 2017). What constitutes good science for decision-making is, however, understood differently by the British and Dutch scientists.
In contrast to the single figures provided in KNMI'14, UKCP09 quantifies ranges for climate variables so that users can decide on the level of risk they want to manage. Where multi-model ensembles have conventionally been used to assess uncertainty, UKCP09 scientists felt this method failed to capture the full range of uncertainties (Porter and Dessai 2016). By developing their own method, not only would they make a significant intellectual contribution to quantifying model uncertainties but they could also meet the institutional-political goals set by the Met Office, the Department for Environment, Food and Rural Affairs (Defra) and the now disbanded Department of Energy and Climate Change (DECC) to produce world-leading science, with the potential to influence the IPCC process (UKCP09 scientist 2). --- Different understandings, different priorities All three sets of scientists were fully committed to informing adaptation decisions and advancing scientific understandings, yet interpreted these commitments differently. For CH2011 scientists, priority was given to assembling a consistent evidence base that spoke with one voice. To do this, effort was focused on improving working relationships and intellectual exchanges to advance scientific capacities. For KNMI'14 scientists, a major driver was the need to change how people think and act in relation to climate change. Advances in climate modelling certainly aided this process but were not the sole catalyst. For UKCP09 scientists, efforts to quantify uncertainty were underpinned by the assumption that users need the best science possible. Practical or application-based considerations inevitably took a backseat to intellectual contributions and the pursuit of curiosity-driven science. These different understandings of the purpose of climate scenarios affect the way users are involved in the process and the extent to which they are listened to.
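The contrast between the two presentation styles discussed above can be illustrated with a short sketch: a UKCP09-style probability distribution from which users must choose the percentile matching their risk appetite, versus KNMI'14-style single figures per storyline. The distribution and its parameters are hypothetical, invented purely for illustration; only the G_L and W_H values are those quoted earlier in the text.

```python
from statistics import NormalDist

# Hypothetical UKCP09-style output: a probability distribution for summer
# warming under one emission scenario (parameters invented for illustration).
warming_pdf = NormalDist(mu=3.0, sigma=0.8)  # °C

# The user must choose the level of risk to manage:
plan_median = warming_pdf.inv_cdf(0.50)  # a risk-tolerant user's planning value
plan_p90 = warming_pdf.inv_cdf(0.90)     # a risk-averse user's planning value

# KNMI'14-style output: one representative figure per storyline, leaving no
# percentile choice for the user to make (values quoted earlier in the text).
knmi14_style = {"G_L": 1.3, "W_H": 3.7}

print(f"median plan: {plan_median:.1f} °C, 90th-percentile plan: {plan_p90:.1f} °C")
```

The epistemological difference is visible in the interface itself: the probabilistic product demands that users understand distributions and pick a percentile, whereas the storyline product hands over a single number per scenario.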
--- How involved did scientists think users were in producing the climate scenarios? Our research suggests that all three sets of climate scenarios differed considerably in the extent to which they involved users, what they expected them to contribute and even whom they thought the user was in the first place. Together, these differences have had a marked effect on the particular form taken by the British, Dutch and Swiss climate scenarios. For instance, how model uncertainty was quantified (cf. UKCP09 vs. KNMI'14) is based on a series of assumptions about the capacity of users to work through and make sense of complex information. However, narrowly defined perceptions of users and their needs have seriously diluted the stated commitment to co-produce national climate scenarios. --- Scientists' perceptions of users Without exception, the official documents issued for all three sets of climate scenarios paint a very broad picture of potential users. From actors interested in digging down and exploring the data to those interested only in the headline messages, the scientists hoped that their climate scenarios would be used by the widest audience possible. In other words, the climate scenarios should not become the exclusive preserve of a small group of actors. This manifests itself differently in each country. Where the KNMI'14 and CH2011 climate scenarios aimed to inform decisions in sectors from water, healthcare, agriculture and transport to infrastructure, UKCP09 went even further by subdividing the users within these sectors into three categories: researchers, decision-makers and communicators (Steynor et al. 2012). Simply put, all three climate scenarios should officially cater to different users, all with different needs. Few of the scientists interviewed shared that view, however. CH2011 scientists, for instance, felt the end users would be either impact modellers or government officials (CH2011 scientist 1).
Previous experiences from the last climate scenarios, CH2007, and the government agenda to develop a national adaptation strategy informed this view. Yet, misunderstandings over what users need and what scientists think is useful (see Lemos et al. 2012) soon developed. CH2011 scientists realised they had 'produced far more information than [government officials] could use' or make sense of (CH2011 scientist 1). Lacking the time and resources to work through the probability statements provided, government officials were forced to simplify the climate information they used. A 'user bubble' of like-minded individuals (impact modellers) consulted by the CH2011 scientists meant they had, unintentionally, overestimated the capacity of non-quantitative users (Liniger 2015). Upon reflection, CH2011 scientists told us that while it was fairly intuitive to identify which sectors might be interested in using climate scenarios, it remained a mystery how the climate scenarios would actually be used or what users needed from them (CH2011 scientist 3). UKCP09 scientists, similarly, were confident that they 'knew what users needed' (UKCP09 scientist 1). With over 25 years of experience developing climate scenarios (e.g. LINK project, CCIRG, UKCIP), scientists had formed close working relationships with several users: impact modellers, water managers and consultants (Porter and Dessai 2017, 2016; Hulme and Dessai 2008). All of these users share certain characteristics. They are highly numerate, motivated and knowledgeable actors. These characteristics were woven into the fabric of the new climate scenarios. That is, UKCP09 requires users to have already assessed their vulnerability to climate change themselves to be able to use PDFs (Jenkins et al. 2009). A persistent criticism, though, is that potential users without the time, resources or capacity to make sense of their vulnerabilities can find themselves excluded (Frigg et al. 2015; Tang and Dessai 2012).
Indeed, UKCP09 scientists were warned against defining the user too narrowly (Steynor et al. 2012). Very late in the process, the government funder, Defra, pushed for the climate scenarios to be opened up to 'as many people as possible' to avoid satisfying only a single type of user (UKCP09 scientist 2). KNMI'14 scientists did things differently. They already knew water managers were the primary users of the previous climate scenarios, KNMI'06 (KNMI'14 scientist 1). Unlike their CH2011 or UKCP09 counterparts, 'the first meeting of the [KNMI'14] project team was on user requirements' (KNMI'14 advisor 1). Put differently, KNMI'14 scientists believe that limiting the volume of (undigested) information given to users, and the choices they have to make, improves the accessibility and understanding of the climate scenarios. Asking users to focus on four storylines places fewer demands on their time and requires only a basic level of understanding, initially at least. KNMI'14 scientists, therefore, imagined different users with different needs and capacities (KNMI'14 scientist 2). --- Scientists' perceptions of user interactions Despite the initial reluctance of some scientists to involve the intended and favoured users, by the end, a closer working relationship between the two became highly valued. Scientists concerned over a lack of time or the right skills to engage with favoured users soon realised that, with a better understanding of how climate information is used, and therein what users need, they could make a 'few small changes with immediate impact' (UKCP09 scientist 1). The only way to do this was for scientists and users to meet face-to-face, something the UK has been doing since the early 1990s (see Hulme and Dessai 2008). Yet, all three sets of climate scientists held very different views on the format of these interactions and the extent to which users were listened to.
CH2011 scientists told us that users 'weren't involved as much as they would have liked' (CH2011 scientist 1). Both a lack of 'funding' and an official 'mandate' were cited as major barriers (CH2011 scientist 2). Efforts were nonetheless made to ensure the voice of users was heard, although 'we didn't do a full user survey... [canvassing only impact modellers] we still had a good impression [of]... what users needed' (CH2011 scientist 4). Moreover, when a coordination group was set up to oversee the production of the climate scenarios, two of the six seats were filled by user representatives. Mirroring the political culture of Swiss collegiality, the coordination group required members to reach decisions collectively. Yet, it was not always easy for user representatives to relay the 'heterogeneous needs' of users (CH2011 advisor 1). As a consequence, this institutionalised the user bubble rather than challenged it (Liniger 2015). Users were only introduced en masse just 'before the report was released', when 'talks and events' were held so that everyone 'who should know about [the climate scenarios] did know about them in advance' (CH2011 scientist 4). However, not only is awareness different from engagement, but the introduction of users at such a late stage restricts what they can, and are willing to, contribute and articulate. KNMI'14 and UKCP09 scientists both conducted surveys with users of previous versions of their climate scenarios and ran workshops to understand how user needs had changed. A long 'shopping list' of requirements was identified but was interpreted and acted upon differently. For instance, the 'explicit presentation of [model] uncertainties and assumptions behind [them], easier access [to the data], and higher temporal and spatial resolution [data]' was flagged by both projects (Steynor et al. 2012; see also Bessembinder et al. 2011).
Whereas this confirmed UKCP09 scientists' need to advance science linearly (UKCP09 scientist 1), KNMI'14 scientists felt a closer dialogue was needed to dispel the 'you ask, we deliver' paradigm in the hope that users would reconsider their requests (KNMI'14 scientist 3). Indeed, KNMI'14 scientists raised concerns about the methods used to elicit user needs. For them, surveys risk closing down fruitful conversations about user needs and, therein, fail to understand how, or why, users actually use climate information: You cannot just go to users once and ask them for feedback. You need to have regular contact, continuous contact, over a long time to get really useful feedback. It's not just asking 'what do you want?' and then giving it to them... many users want to do something with climate adaptation but don't know exactly what that is or how to do it... so it's important to know how they use climate data (KNMI'14 advisor 2). To encourage as much interaction as possible, many face-to-face meetings between scientists and users were organised (KNMI'14 advisor 2). Two communication experts were hired to get users more involved instead of 'just listening to talks' (KNMI'14 scientist 2). 'Light workshops with standing tables' mixing scientists and users with 'only six people around each table... to make it easy to ask questions' were used (KNMI'14 advisor 2). This set-up helped scientists to better understand how climate information is used and, in turn, what users need. It also opened up conversations over 'the advantages and disadvantages of probability distributions and the way uncertainties are presented' and the differences between what is doable and what is desirable by getting users to think more reflexively about 'their list of requests' (Bessembinder et al. 2011). 'That discussion and dialogue between users and KNMI staff really was the main contribution of the three years of work.
Much more so than the analysis of the data and the climate scenarios' (KNMI'14 scientist 2). UKCP09 scientists, by contrast, were less enthusiastic about interacting with users than their KNMI'14 counterparts. That reluctance was due, in part, to different ideas about the roles and responsibilities of scientists (Porter and Dessai 2017). As Mahony and Hulme (2016) observe, UKCP09 scientists saw their job as pushing the boundaries of climate modelling and solving practical problems to inform governmental policy and decision-making, while organisations like UKCIP should engage users because they possess the 'right skills and time' to do so (UKCP09 scientist 2).
The British political culture of evidence-based decision-making serves to reinforce this separation of scientists and users: it preserves the integrity and authority of expert knowledge, on the one hand, and maintains a top-down hierarchy between the two, on the other (Tangney and Howes 2016). That said, 3 years after the modelling began, the UKCP09 project was reorganised, and UKCIP's idea of bringing users and scientists together via a user panel was achieved with the support of the funder, Defra (UKCIP 2006). Practical concerns were raised, such as the number of users involved, how regularly (or when) to consult them and how to weigh their contributions equally. For instance, there is the risk that 'users who [are] able to eloquently express their needs or regularly attended meetings' gain greater attention or have 'undue influence' on the output of the user panel (Steynor et al. 2012). Yet, user input for the climate scenarios was highly constrained. Modelling decisions had gone beyond the point of being reversed (cf. Corner et al. 2012). Users were left to comment on 'presentation issues' over the spatial aggregation of the outputs (e.g. 25-km² grid cells vs. river basins) rather than discussing how to model uncertainty differently (UKCP09 advisor 2). The lecture-like set-up with 'talk after talk' focused on selling the climate scenarios to users (UKCP09 scientist 2). --- Doing things together The motivation, intensity and format of the scientist-user interactions were different across the three countries. The 'you ask, we deliver' paradigm was used strategically in UKCP09 to support the scientific work but dispelled by KNMI, who felt that a discussion on how climate data is used was more fruitful. In addition, the timing was problematic for both the British and Swiss climate scenarios.
Users engaged with UKCP09 only after the major decisions had already been taken (and the funder Defra stepped in), and in CH2011, the interaction was confined to awareness-raising. At best, this limits what contributions users can make, and at worst, it can lead to frustration and disengagement. This limited interaction was partly accepted because British and Swiss scientists felt they knew who the user was. In the Swiss case, this happened through official channels between federal offices or past research collaborations. In the UK, the Met Office had been working with users alongside UKCIP since 1997, so UKCP09 scientists felt that they had already developed considerable (tacit and explicit) knowledge of users. Yet, the users that UKCIP formally introduced to the Met Office often asked highly technical questions that UKCIP could not answer itself. That filtering process (unintentionally) skewed how Met Office scientists saw users (Porter and Dessai 2017). This only confirmed what UKCP09 scientists thought users wanted. In both the Swiss and British cases, an earlier and broader user engagement might have flagged up some warning signs over the gap between what scientists thought users needed and what users wanted. For KNMI'14 scientists, the shift in water management practices was only the starting point. It served to question preconceptions of users in other sectors too and to avoid falling prey to confirmation bias. --- Discussion Our comparative analysis reveals that climate scenarios are strongly influenced by the civic epistemology of each country, which defines who has a say, what roles scientists and users should play and how the two interact. Internal disagreements on methodological aspects, communication and target users exist but are often masked by the prevailing science-society relations.
As shown in Table 2, what constitutes good science for decision-making is understood differently from one country to the next: consolidator (CH2011), innovator (UKCP09) and collaborator (KNMI'14). Simply put, the Swiss are more conservative. They emphasise the need for tried-and-tested methods that have been peer-reviewed (e.g. scientific consensus) whereas the British were more adventurous. They applied a new, largely untested, method for quantifying model uncertainties on the assumption that users need this information to adapt effectively (Porter and Dessai 2017). The Dutch have mixed established methods with novel ones when culturally acceptable (Enserink et al. 2013;van der Brugge et al. 2005; see also Dilling and Berggren 2015). A major concern here is when a mismatch develops between what makes science good for decision-making in the eyes of scientists compared to what makes science good for decision-making for the more pragmatic needs of users. For instance, UKCP09 was too complex for some users (Tang and Dessai 2012) and too bold for some scientists (Frigg et al. 2015), which has impeded its uptake and use. Our 'typology of use-inspired research', shown in Table 2, also develops other social science work on the values and assumptions that shape atmospheric science. For Shackley (2001), climate modelling centres judge good scientific practice differently in response to different institutional-political priorities. A modelling hierarchy can emerge where greater modelling complexity is assumed to provide greater realism and better decision-making (Mahony and Hulme 2016;Shackley et al. 1998;Shackley and Wynne 1995). While UKCP09 has gone down the modelling complexity route, CH2011 and KNMI'14 question what value is added by this. All three climate scenarios differ considerably in how users were engaged, which speaks to different types of user-scientist interaction (Table 2): participation (KNMI'14), elicitation (UKCP09) and representation (CH2011). 
While the Dutch KNMI involved a large number of users in the production process, the British and Swiss limited interactions to retain power over production. Knowingly or not, science is socially responsive. Different funding mechanisms, institutional arrangements, epistemic cultures and risk preferences affect what knowledge is produced (by whom, and how it is used). This develops Jasanoff's (2005) work on civic epistemology, which holds that climate science comes to reflect wider societal concerns expressed through national politics (e.g. Swiss consensus building, Dutch inclusiveness and UK expert authority; see also Beck 2012). Our two proposed typologies bring a much needed sociopolitical context into the 'knowledge systems' framework of Cash et al. (2003). Where the 'typology of scientific enterprise' characterises how judgements of good science give rise to credible information, the 'typology of user interaction' explains what is involved in producing legitimate knowledge for decision-making. Through the culturally situated production of climate information, the scientific output is expected to be salient (i.e. relevant) for governmental decision-making, a key argument of the civic epistemologies literature (Jasanoff 2005). Relevance and usability of scientific information are not synonyms, however. Lemos et al. (2012) argue that usability is high when information is tailored to the needs and capacities of users, a quality achieved through co-production where scientists listen to users and respond to their needs. Our results support this proposition: UKCP09 only included sophisticated and numerate members in their user panel while KNMI'14 included a broad user base. The climate scenarios from both countries essentially served only the users involved in their (co-)production. We conclude, therefore, that several future discussions are needed to better understand the different cultures for producing climate information.
First, funders and scholars who advocate for scientists to co-produce climate information with users need to be sensitive to, and reflect upon, the existing social and political cultures that shape climate information. Generalising case studies into best practices or one-size-fits-all lists disregards the cultural sensitivities that influence the successful uptake of climate information (Webber 2015). Second, further research is needed on the role government-approved climate information plays in narrowing the usability gap. Civic epistemologies profoundly influence how usable climate information is constructed by both scientists and users. Can political cultures similar to the UK's produce knowledge that serves a larger user base with different capacities, yet still be salient for government policy-making? What challenges does this present? And how do users with simpler needs judge the credibility and legitimacy of salient knowledge in the absence of governmental approval? Third, the growing number of climate knowledge providers, brokers and specialists has led to calls for increased harmonisation of modelling methods, climate variables and climate service institutions across Europe. Although this promises greater consistency and comparability, as well as lower financial costs, many national governments are 'keen on exercising and strengthening their own epistemic sovereignty' rather than offloading power to supra-national climate service institutions (Mahony and Hulme 2016). It is unclear how well European climate knowledge practices would travel, particularly if they ignore the national civic epistemologies governing the interactions between science and society. Considerable institutional inertia exists to keep producing climate scenarios in the same way.
Only the British radically changed the way they produced and communicated their climate scenarios between the last and most recent sets, as Met Office Hadley Centre scientists pushed for greater innovation in climate modelling. Whether the 'Europeanisation' of climate knowledge is possible, or even desirable, remains open to debate (see Demeritt et al. 2013). Lastly, more research is needed to reconcile the contrasting experiences of scientists and users to better understand why good science is constructed differently and the implications this has. For instance, after consulting seemingly the same water users, why did UKCP09 and KNMI'14 scientists take radically different approaches to their climate scenarios? Different epistemic cultures alone cannot fully explain this. Indeed, user preferences over risk, politics and decision-making are powerful catalysts as well. Only by tracing the experiences of scientists and users together will we be able to fully understand what shapes climate information. --- Conclusion Our research maps how different social and scientific values, and different institutional arrangements, shaped three sets of national climate scenarios. What knowledge is produced, how scientists and users interact and how users were expected to apply the climate scenarios are strongly influenced by the political culture of each country and the respective roles played by science, government and non-state organisations in each. Efforts to co-produce climate knowledge are restricted, possibly even counter-productive, if scientists are unwilling to listen to users in the first place. And while new actors may join or user needs develop, producers and brokers of climate information need to be aware of, and responsive to, the political culture that incentivises such changes.
While government-approved science may help improve the legitimacy and credibility of climate information, the same is not necessarily true for its saliency and usability. This insight has important implications for how societies will adapt to climate change and the extent to which their decisions will be effective. --- Summary This paper seeks to understand why climate information is produced differently from country to country. To do this, we critically examined and compared the social and scientific values that shaped the production of three national climate scenarios in the Netherlands, Switzerland and the UK. A comparative analysis of documentary materials and expert interviews linked to the climate scenarios was performed. Our findings reveal a new typology of use-inspired research in climate science for decision-making: (i) innovators, where the advancement of science is the main objective; (ii) consolidators, where knowledge exchanges and networks are prioritised; and (iii) collaborators, where the needs of users are put first and foremost. These different values over what constitutes 'good' science for decision-making are mirrored in the way users were involved in the production process: (i) elicitation, where scientists have privileged decision-making power; (ii) representation, where multiple organisations mediate on behalf of individual users; and (iii) participation, where a multitude of users interact with scientists in an equal partnership. These differences help explain why climate knowledge gains its credibility and legitimacy differently even when the information itself might not be judged as salient and usable. If the push to deliberately co-produce climate knowledge is not sensitive to the national civic epistemology at play in each country, scientist-user interactions may fail to deliver more 'usable' climate information.
--- Introduction The relationship between ecological (area-based) measures of deprivation and measures of health status is often used to determine the presence and scale of health inequality within national populations. These findings are used to assess different health needs and inform the targeting of health resources to reduce health inequalities. The decennial census of the UK population provides a robust data source with which to explore health inequalities across a number of factors, including area-based deprivation. However, such analyses are only possible at ten-year intervals, reducing the scope to monitor progress during the inter-censal period. To assess change in health inequalities at more frequent intervals, alternative sources must be explored. Ideally, a source should align closely with the census and have a sample large enough to enable accurate estimates of populations of interest previously computed using census data. This report explores the potential of the General Household Survey (GHS) to provide an accurate inter-censal measure of inequality in health expectancies across groups of small areas that experience differing levels of deprivation. The Department of Health (DH) funded this project as part of a wider programme of work focusing on the measurement of inequalities in health. --- Background There is a clear relationship between composite measures of health status, such as health expectancies (HE), and measures of socio-economic position (White et al. 1999, Melzer et al. 2000, Mackenbach et al. 2008). However, the incomplete assignment of socio-economic position at an individual level in death registrations, and the absence of inter-censal population estimates disaggregated by socio-economic position, restrict analyses of HE by the National Statistics Socio-economic Classification (NS-SEC), for example, mainly to longitudinal data sources.
To overcome this limitation, measures of deprivation assigned to small areas have often been used as alternative indicators of socio-economic position, and several studies report a clear, linear association between health and level of deprivation, however each is defined (Bajekal 2005, O'Reilly, Rosato and Patterson 2005, Wood et al. 2006, Morgan and Baker 2006, Rasulo, Bajekal and Yar 2007). Measures of disadvantage based on area deprivation combine individual and environmental characteristics at a given point in time and provide a greater depth of analysis than measures based on occupation and employment status alone (MacIntyre, MacIver and Sooman 1993, Bajekal 2005). The decennial census provides a wealth of data to explore the relationship between health and area deprivation; however, its use to measure change over time is restricted to ten-year intervals. Inter-censal analyses provide the opportunity to monitor progress in reducing inequalities in health at more frequent intervals. Identifying a consistent and continual annual data source of sufficient size and complexity that is coherent with the decennial census is key to producing an inter-censal measure of inequalities in health expectancy. For such a measure to be worthwhile for informing policy, it must be: temporally distinct from the census year; deliverable at least once between census years; and able to clearly and precisely distinguish between area deprivation clusters. One likely source is the GHS, which is now the General Lifestyle module (GLF) of the Integrated Household Survey (IHS). This survey carries a general health question consistent with the 2001 Census and is currently used to inform national estimates of Healthy Life Expectancy (HLE). With an annual sample of approximately 20,000 people in England, this survey is small compared with the census, but data collected over several years can be combined to produce a larger aggregated dataset.
In national estimates of HE, for example, current practice is to combine three years of GHS/GLF survey data (Smith, Olatunde and White 2010). A further concern surrounds the measure of deprivation used in assessing health inequality. Previous studies have used the Carstairs index (Carstairs and Morris 1991) to define distinct geographical areas of deprivation, both at census and inter-censally using the Health Survey for England (HSE) (Bajekal 2005, Rasulo, Bajekal and Yar 2007). However, it is not possible to update the Carstairs index after 2001 because an integral component, the Registrar General's Social Class (RGSC), is no longer collected in national surveys. Moreover, there is a lack of comparability between the Census 2001 and the HSE owing to differences in the question used to capture general health prevalence in the population. The Index of Multiple Deprivation (IMD), first introduced in 1999 for electoral wards, is a viable alternative to the Carstairs index, providing a numeric indicator of ecological deprivation based on relative scores across a number of distinct domains such as income, employment and health. In 2004 the IMD was updated to allow analysis at Lower Super Output Area (LSOA) geographies (Noble et al. 2004); see Box 1. GHS data can be readily assigned to LSOA-level deprivation groupings according to IMD 2004 through postcode matching. Restricting the analysis to quintiles of deprivation and combining five years of GHS data provides a sample of approximately 20,000 people for each quintile, which is sufficient for calculating an inter-censal estimate of health expectancy. Moreover, after the initial five-year aggregated period, it is feasible to update the measure prior to the Census 2011, using subsequent years of GHS/GLF data to track change in the gap in health expectancies.
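The postcode-matching step described above amounts to a simple table join: survey records carry a postcode, a lookup assigns each postcode to an LSOA, and the LSOA carries its IMD 2004 quintile. A minimal sketch with pandas; all column names, postcodes and codes below are hypothetical placeholders, not the actual GHS or IMD data:

```python
import pandas as pd

# Hypothetical postcode-to-LSOA lookup (in practice, a national postcode directory).
lookup = pd.DataFrame({
    "postcode": ["AB1 2CD", "EF3 4GH", "IJ5 6KL"],
    "lsoa": ["E01000001", "E01000002", "E01000003"],
})

# Hypothetical IMD 2004 deprivation quintile per LSOA (1 = least, 5 = most deprived).
imd = pd.DataFrame({
    "lsoa": ["E01000001", "E01000002", "E01000003"],
    "imd_quintile": [1, 3, 5],
})

# Hypothetical survey respondent records keyed by postcode.
ghs = pd.DataFrame({
    "person_id": [101, 102, 103],
    "postcode": ["EF3 4GH", "AB1 2CD", "IJ5 6KL"],
    "good_health": [True, True, False],
})

# Assign each respondent to an LSOA, then to that LSOA's deprivation quintile.
matched = ghs.merge(lookup, on="postcode").merge(imd, on="lsoa")
print(matched[["person_id", "imd_quintile"]])
```

Once each respondent carries a quintile label, pooled years of survey data can simply be grouped by `imd_quintile` to form the five analysis populations.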
This study assesses the potential of using the GHS as a data source for the inter-censal measurement of inequalities in HE across quintiles of ecological deprivation as defined by IMD 2004. The initial focus compares health status prevalence and HLE by age and gender for each quintile of deprivation calculated from Census 2001 data and GHS 2001-05 (centred on 2003) data. The similarity of quintile-specific estimates, and therefore of the inequality measured using each data source, will indicate the usefulness of the GHS to provide an inter-censal measure of the inequality in HE by area deprivation. --- Methods The analyses in this report present the prevalence of self-reported health status among the private household population of England; residents of communal establishments are excluded because the GHS does not survey the institutional population. The suitability of the five-year aggregated GHS data to provide an inter-censal measure of HE between areas experiencing different degrees of deprivation is assessed by comparing the conformity of its estimates of health status prevalence and health expectancy with those based on the Census 2001 data. Boxes 2, 3 and 4 provide brief descriptions of the survey data and methods used during this study. --- Box 1 Area deprivation IMD 2004 combines seven distinct domains of data to produce a single measure of relative deprivation for each LSOA in England; similar measures have also been constructed for Wales, Northern Ireland and Scotland (Noble et al. 2001; 2003, National Assembly for Wales 2005). LSOAs are relatively homogeneous in terms of population size and structure; each has approximately 1,500 residents. In this study, the 32,482 LSOAs in England are ranked into quintiles in order to achieve a sufficiently large sample size for subsequent analyses of survey data.
Although these quintiles represent a continuum of relative deprivation, there is likely to be a significant degree of heterogeneity within each, such that (for example) those at the bottom of quintile 1 are more closely related to those at the top of quintile 2 than to those at the top of quintile 1. The IMD has been criticised as conceptually difficult when used in health-related studies since it includes a 'health' domain in calculating relative levels of area deprivation (Morgan and Baker 2006). Therefore, measurements of health using the IMD as a geographical 'anchor' may potentially suffer from 'mathematical coupling', where the integral health domain of the IMD influences the relationship with the health outcome under investigation. Recent studies, however, have found little evidence to support this effect, concluding that the presence or absence of the health domain in the IMD 2004 has little or no effect on observed health inequalities, particularly when using general health, limiting chronic illness and/or mortality as outcome measures (Adams and White 2006, Gartner et al. 2008). --- Box 2 Survey data Data relating to residents of private households in England were collected from Census 2001 and the GHS 2001-05. An aggregation of five years of GHS data achieves a sufficiently large sample for meaningful analysis across quintiles of deprivation. A similar approach is used in the annual ONS estimates of health expectancies for England. Census and GHS records were mapped to LSOA geographical boundaries using a postcode identifier, and assigned to the relevant quintile of the IMD 2004 for that area. Census and GHS populations were evenly distributed across deprivation quintiles, each quintile contributing around one-fifth of the population/survey sample (see Table 1). Residents of communal establishments were excluded from the census data to allow better comparison with the GHS, which does not collect these data.
It should be noted, however, that the mortality data used to calculate HE include deaths in both private household and communal establishment populations. --- Box 3 Health status prevalence The prevalence of health status by sex and five-year age band was derived from responses to the following general health question, asked in both Census 2001 and GHS 2001-05: 'Over the last 12 months would you say your health has on the whole been: good, fairly good, or not good?' In this analysis, a binary measure of general health is used to distinguish states of 'good' and 'poor' health; specifically, responses to the general health question were dichotomised by collapsing those reporting 'good' or 'fairly good' health into a single state of 'good' health. The remainder were classified as being in 'poor' health. In comparisons of health status prevalence between census and GHS, data were age standardised to the European standard population to control for possible differences in age structure between the 2001 Census and GHS samples. --- Box 4 Health expectancies (HE): healthy life expectancy (HLE) and disability-free life expectancy (DFLE) HLE is partly derived from health status prevalence (see Box 3) and partitions life expectancy (LE) into periods of 'good' and 'not good' health. DFLE is partly derived from reports of limiting long-standing/long-term illness. HE were calculated using the Sullivan method, combining prevalence and mortality data and mid-year population estimates (MYPE) (Sullivan 1971, Jagger 1996). LSOA-level MYPE and mortality data are not available prior to 2001; therefore, estimates of HLE derived from Census 2001 data use mortality data only from 2001, and the census population was used as a proxy measure of the MYPE. For estimates of HLE and DFLE based on the GHS, all data (survey, mortality and MYPE) were aggregated over the period 2001-05.
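The Sullivan method described in Box 4 weights the person-years lived in each age interval of a period life table by the proportion of that age group reporting good health, then cumulates the healthy person-years from a given age. A minimal sketch with a coarse, purely illustrative abridged life table (these are not the study's data, and a full implementation would use five-year bands and age-specific survivorship):

```python
# Sullivan method sketch with illustrative numbers (not the study's data).
# ages: start of each age interval; L: person-years lived in the interval from a
# period life table (per 100,000 births); prev_good: proportion of the age group
# reporting 'good' or 'fairly good' health.
ages = [0, 15, 45, 65, 85]
L = [1_480_000, 2_950_000, 1_900_000, 1_400_000, 300_000]
prev_good = [0.97, 0.93, 0.85, 0.70, 0.50]
radix = 100_000  # life-table survivors at age 0

def sullivan_hle(start_index):
    """HLE from the start of interval `start_index`: healthy person-years lived
    from that age onward, divided by survivors reaching that age (simplified
    here to the radix, so only valid for HLE at birth)."""
    healthy_py = sum(l * p for l, p in zip(L[start_index:], prev_good[start_index:]))
    return healthy_py / radix

hle_at_birth = sullivan_hle(0)
le_at_birth = sum(L) / radix  # ordinary life expectancy from the same table
print(f"LE at birth ~ {le_at_birth:.1f} years, HLE ~ {hle_at_birth:.1f} years")
```

The difference between the two printed figures is the expected years of life in 'not good' health, which is the quantity whose gap across deprivation quintiles the report examines.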
Comparisons were made between census- and GHS-based estimates of HLE for males and females at birth and at age 65 across deprivation quintiles. --- Results --- Comparison of health status prevalence and HLE by area deprivation quintile according to Census 2001 and GHS 2001-05 --- Health status prevalence Both Census and GHS data showed a similar, consistent pattern of increasing prevalence of 'poor' health with rising levels of deprivation, and a greater degree of inequality between extremes of deprivation for males compared to females (see Table 2). At national level, the prevalence of 'poor' health was somewhat higher according to the GHS compared with the census, and the gender gap was also more pronounced. Approximately 8 per cent of males and females were in 'poor' health according to the census, and around 10 and 11 per cent of males and females, respectively, were in 'poor' health according to the GHS. Compared with the census, the prevalence of 'poor' health was higher for both males and females in the GHS in each quintile of deprivation, and this difference was greatest among those living in the most deprived areas. As with national figures, the gender gap was also more pronounced in the GHS compared with the census at each quintile of deprivation. In the 2001 Census, the prevalence of 'poor' health for males living in the most deprived fifth of LSOAs was three times higher than for males living in the least deprived areas. For females the equivalent inequality was narrower; the prevalence of 'poor' health in the most deprived areas was 2.7 times higher than in the least deprived areas. Similarly, in the GHS the prevalence of 'poor' health for males in the most deprived areas was 2.8 times higher than in the least deprived areas. The equivalent inequality was again less pronounced for females, the prevalence of 'poor' health being just 2.3 times higher in the most compared with the least deprived areas.
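The age-standardised comparison used for these prevalence figures (described in Box 3) is a direct standardisation: each source's age-specific rates are weighted by a common reference population, so differences in sample age structure cancel out. A sketch with hypothetical age bands, weights and rates (the study itself uses the European standard population and five-year bands):

```python
# Direct age standardisation sketch (hypothetical weights and rates).
# Standard population share of each age band (weights sum to 1).
std_weights = {"0-14": 0.19, "15-44": 0.42, "45-64": 0.25, "65+": 0.14}

# Hypothetical crude 'poor health' prevalence by age band for the two sources.
census_prev = {"0-14": 0.02, "15-44": 0.05, "45-64": 0.12, "65+": 0.30}
ghs_prev = {"0-14": 0.03, "15-44": 0.06, "45-64": 0.15, "65+": 0.33}

def standardised_rate(prev, weights):
    # Weight each age-specific rate by the standard population share of its band.
    return sum(prev[band] * weights[band] for band in weights)

census_std = standardised_rate(census_prev, std_weights)
ghs_std = standardised_rate(ghs_prev, std_weights)
print(f"Standardised 'poor health' prevalence: census {census_std:.3f}, GHS {ghs_std:.3f}")
```

Because both sources are standardised to the same weights, any remaining gap (here the GHS figure is higher, echoing the pattern reported above) reflects differences in reporting rather than in age composition.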
--- Healthy life expectancy As with health prevalence, census and GHS estimates of HLE showed similar and consistent patterns across the deprivation quintiles and between the sexes. For both sources, each quintile of deprivation was significantly different within the cohorts of males and females at birth and at age 65. Estimates of HLE declined significantly with increasing levels of deprivation and were lower at birth and at age 65 for males compared to females. In addition, the difference between the extremes of deprivation was greater for males than for females (see Table 3). --- Significant differences in HLE between census and GHS HLE was lower for males and females at birth in the GHS compared with the census, but estimates at age 65 were similar in both data sources. At national level, HLE for males at birth according to the census was around 69 years, significantly higher than in the GHS, where HLE was approximately 68 years. Similarly, HLE at census was significantly higher for females at birth: 72.8 years compared with 70.7 years in the GHS. By deprivation quintile, estimates of HLE at birth for males and females were also significantly greater in the census compared with the GHS. Additionally, the inequality in HLE between the least and most deprived quintiles was greater in the GHS than in the census: 14.3 vs. 13.2 years for males and 12.2 vs. 11.2 years for females in the GHS and census respectively. The difference in the scale of inequality between genders, however, was similar at around 2 years in each data source. At age 65, estimates of HLE for males and females according to census and GHS data were largely equivalent. Nationally at this age, HLE was 12.8 and 12.7 years for males and 15.0 and 14.9 years for females according to census- and GHS-based data respectively.
For each quintile at age 65, estimates of HLE for males and females were comparable across sources with one exception: among females in quintile 2, HLE was significantly higher at 16.2 years according to the census compared with only 15.7 years according to the GHS. Confidence intervals (CI), signifying the precision of estimates of HLE, were substantially narrower for census-based estimates than for those derived from the GHS. As with HLE, LE declined with increasing levels of deprivation; however, the difference between the least and most deprived quintiles was much narrower. The range in LE at birth between the least and most deprived areas was around half that of HLE at birth (range in LE at birth: 7.7 years for males and 5.4 years for females) and two-thirds that of HLE at age 65 for both sexes (range in LE at age 65: 3.6 years for males and 3 years for females). The proportion of life spent in good or fairly good health, that is, HLE divided by LE, was broadly similar for males and females in each quintile of deprivation, but between quintiles this proportion varied notably. At birth, males and females in the least deprived quintiles could expect to spend approximately 91 to 92 per cent of their lives in good or fairly good health, but for the most deprived quintiles this fell to just 81 to 82 per cent: a difference of around 10 percentage points between the extremes of deprivation. For males in particular, the greatest difference existed between the most (quintile 5) and next most (quintile 4) deprived areas, where the proportional difference was almost as great as that between quintiles 1 to 4 combined. At age 65, differences between quintiles in the estimated proportion of remaining life spent in good or fairly good health were more extreme than at birth. At this age, the gap between the least and most deprived areas was around 17 percentage points for males and 13 percentage points for females; however, the incremental change between quintiles was on the whole smoother than at birth.
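The proportion-of-life measure used above is simply HLE divided by LE for each quintile. A minimal sketch with illustrative quintile values chosen to resemble, but not reproduce, the pattern reported (these are not the study's exact figures):

```python
# Proportion of life spent in good or fairly good health, by deprivation quintile.
# Illustrative HLE and LE values at birth (years); not the study's exact figures.
hle = {1: 74.0, 2: 72.0, 3: 70.0, 4: 68.0, 5: 61.0}  # healthy life expectancy
le = {1: 80.5, 2: 79.5, 3: 78.5, 4: 77.0, 5: 74.5}   # total life expectancy

# Percentage of total life expectancy expected to be spent in good health.
prop = {q: 100 * hle[q] / le[q] for q in hle}
for q in sorted(prop):
    print(f"Quintile {q}: {prop[q]:.1f}% of life in good/fairly good health")

# Gap between least (1) and most (5) deprived, in percentage points.
gap = prop[1] - prop[5]
print(f"Gap between least and most deprived: {gap:.1f} percentage points")
```

With these illustrative inputs the least deprived quintile comes out near 92 per cent and the most deprived near 82 per cent, a gap of roughly 10 percentage points, mirroring the pattern described in the text.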
--- Disability-free life expectancy As with HLE, there were clear and significant differences between estimates of DFLE in each quintile of deprivation within the cohorts of males and females at birth and at age 65. DFLE was observed to decrease with increasing level of deprivation. Males at birth and at age 65 had significantly lower estimates than females in each quintile, and the inequality in estimated DFLE between the least and most deprived quintiles was narrower for females than for males (see Table 4). At birth, males and females living in the least deprived areas could expect some 13.5 (males) or 11.4 (females) more years of life free from a limiting long-standing illness or disability than their counterparts in the most deprived areas. At age 65, the inequality in DFLE between the least and most deprived quintiles was approximately 4.5 years for males and 4.0 years for females. This difference was of a similar magnitude to the inequality between quintiles seen with HLE, although the 95 per cent CIs were a little wider, at around 1.1-1.2 years at birth and 0.8-0.9 years at age 65. At birth, males and females in the least deprived areas could expect to spend around 9-10 percentage points more of their lives without a disability than those in the most deprived areas. At age 65, these differences were larger: 14 percentage points for males and 12 percentage points for females (see Table 4). --- Discussion This report explores the potential of the GHS to provide an adequate inter-censal measure of health inequality between advantaged and disadvantaged populations, defined using the IMD 2004 measure of deprivation at small area level. Initially, comparisons of health status prevalence and HLE for area-based deprivation quintiles in each data source were undertaken to assess the level of conformity.
These represent the first use of LSOA-level geographical groupings in health expectancy reporting by ONS and provide further supporting evidence of the relationship between deprivation and health found in previous investigations. The strong association between deprivation and both health status and health expectancy is consistent with previous research: increasing levels of deprivation equate to shorter lives and longer periods of life in states of poor health and disability, in both absolute and relative terms. Census 2001 data clearly distinguish between levels of health status and health expectancy by quintile of deprivation. Significantly fewer people residing in the least deprived areas reported poor health than their counterparts experiencing greater deprivation. The reporting of poor health increased in a predominantly linear pattern with increasing deprivation, producing a substantial gap between the least and most deprived quintiles. In fact, these data show that in 2001 there were three times as many people reporting poor health in the most compared with the least deprived areas. Similar and consistent differentials were found using the GHS in 2001-05, although the prevalence of poor health was greater in each quintile and the inequality between the least and most deprived areas was slightly narrower, significantly so for females. Survey data were age-standardised, so differences in the ages of respondents between the GHS and census would not account for the differences observed. Differences in the design of the census and GHS, however, in addition to the wider time period applying to GHS data, may contribute to the observed differences in the prevalence of poor health between sources. There is evidence to suggest that respondents completing self-administered questionnaires (such as the census) are subject to 'primacy effects', whereby the uppermost choices in a list are more likely to be selected.
In contrast, respondents in face-to-face interviews (such as the GHS) are more likely to be influenced by 'recency effects', where the answers at the bottom of a list are more likely to be selected (Bowling 2005). Such effects could go some way to explaining the differences between the census and GHS in this study. Other likely contributors to the observed differences include interviewer prompting in the GHS and proxy effects in the census data, whereby forms may be completed by one household member on behalf of another. It is also noteworthy that studies have shown that face-to-face interviews result in more positive and socially desirable responses, particularly for health status and behaviour, compared with self-administered questionnaires (Bowling 2005). In the GHS, responses to the general health question may vary with other forms of bias such as interviewer characteristics and the social setting in which questions are asked. In contrast, the self-completion nature of the census may place a cognitive burden on respondents, as it assumes a certain level of literacy, understanding of the question and ability to recall events without probing. Given the complex interaction of mode effects and responses to the general health question, it is difficult to disentangle their impact on the reported prevalence of poor health in this study. The patterns in health status prevalence rates were also observed in estimates of HLE. For the census, there was again a clear linear relationship between deprivation and estimates of life spent in good or fairly good health. HLE decreased significantly with each declining quintile, leading to a substantial gap in HLE between those in the least compared with the most deprived areas. Female HLE was significantly higher than male HLE at birth and at age 65 in each deprivation quintile, although the inequality in estimates between the least and most deprived areas was narrower.
For the reasons noted above, and because of differences in the mortality and mid-year population estimate data used in their construction, estimates of HLE derived from GHS 2001-05 and Census 2001 cannot be directly compared; however, the relationships between HLE and deprivation, between males and females, and between areas of deprivation within each cohort at birth and at age 65 are consistent between the GHS and census. In the GHS 2001-05, the scale of inequality in HE (HLE and DFLE) between the least and most deprived quintiles was substantial. Some 11 to 14 years of HLE separated people residing in the least and most deprived quintiles. Males and females at birth living in the least deprived areas between 2001 and 2005 could expect to spend approximately 91 to 94 per cent of their lives in good or fairly good health compared with only 82 to 86 per cent in the most deprived areas. At age 65, these differences were more pronounced: those in the least deprived areas could expect to spend 82 to 84 per cent of their remaining lives in good or fairly good health compared with just 65 to 70 per cent for those in the most deprived areas. Similar patterns were observed for DFLE. The scale of inequality was greater for men than for women at each point in life examined. This concurs with previous evidence on inequalities in LE and HE by socio-economic position. However, the pattern of inequality across social classes or NS-SEC classes in women is more irregular than the predominantly linear pattern in men (Langford and Johnson 2009, White, Van Galen and Chow 2003). By area deprivation, in contrast, the pattern is predominantly linear for both sexes and therefore provides a better indication of graduated need. The estimates reported here are broadly consistent with those found in a study using Carstairs deprivation twentieths to identify health inequalities between electoral ward groupings (Rasulo, Bajekal and Yar 2007, Morris and Carstairs 1991).
In that study, differences in HLE between the least and most deprived twentieth of wards were 13.4 years for males and 11.8 years for females at birth, and 5.2 and 4.7 years respectively at age 65. The finer gradation used in that study did not lead to an undue difference in the scale of inequality, suggesting that breakdowns of areas into fifths on the basis of level of deprivation are adequate for determining the presence of inequality and its scale. The similar findings serve to verify the approach taken here. As with other studies, the results here also show that measures of longevity alone underestimate the magnitude of inequality between areas or extremes of deprivation when compared with measures that combine mortality and morbidity data into a summary index of quality and quantity of life. The gaps in inequality found in HLE and DFLE were much wider than those found in LE. The gaps in HLE and DFLE at birth between the least and the most deprived areas were approximately twice as great as those observed for LE. We now intend to extend this analysis to cover more recent years of the GHS/GLF in an attempt to monitor changes in health inequalities over time. This planned work will focus on DFLE as the measure of inequality, as the general health question used to inform estimates of HLE in this study was discontinued in the GHS in 2007, replaced by an EU-harmonised question (Smith and White 2009). --- Limitations of GHS data Of primary concern is the precision of estimates of HLE computed by pooling five years of survey data to form deprivation quintile populations. This precision is determined by the width of the 95 per cent CIs surrounding estimates of HLE. Ideally, the 95 per cent CI should be less than +/-1 year at birth and less than +/-0.5 years at age 65 in order to detect real changes over time.
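The precision argument above follows from the usual square-root scaling of a standard error: pooling k years of survey data multiplies the effective sample size by roughly k, so the CI half-width shrinks by a factor of sqrt(k). A hedged sketch with an illustrative single-year half-width (not a figure from the report), showing why five pooled years still sit just outside the +/-1 year target:

```python
import math

# Illustrative half-width of a 95% CI for HLE at birth from ONE year of survey
# data (hypothetical value, not taken from the report).
one_year_halfwidth = 2.4  # years

def pooled_halfwidth(k_years):
    # Standard errors scale as 1/sqrt(n); pooling k years multiplies the
    # effective sample by k, so the CI half-width shrinks by sqrt(k).
    return one_year_halfwidth / math.sqrt(k_years)

for k in (1, 3, 5, 9):
    print(f"{k} year(s) pooled: +/- {pooled_halfwidth(k):.2f} years")
```

Under this illustrative starting value, five pooled years give roughly +/-1.07 years at birth: narrower than the single-year figure but still slightly wider than the +/-1 year target, which is the trade-off the text describes between precision and keeping the pooled period short enough to be a useful inter-censal measure.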
The CIs surrounding GHS-based estimates of HLE presented here are a little wider than this target, but broadly equivalent to national estimates of HLE for England and considerably narrower than national estimates for Wales and Scotland. The CIs would become narrower with each additional year of survey data, but this would make the time period of the estimate much less desirable as an inter-censal measure. Despite the fact that the CIs are a little larger than desired, the similarities in the differentials and relationships by deprivation quintile, gender and age between the data sources used in this study indicate that the GHS is a suitable source for an inter-censal measure of health expectancy by quintile of area deprivation. The precision of inter-censal estimates in the near future will improve as data from the Integrated Household Survey core module become available for use. This source has a considerably larger sample compared with the GHS/GLF used in this analysis. --- Conclusions The GHS is a useful data source to inform inter-censal estimates of HLE across quintiles of ecological deprivation as defined by IMD 2004, as the pattern observed by level of deprivation concurs with that reported using the Census 2001. This report provides estimates of LE, HLE and DFLE at birth and age 65 by quintile of deprivation across England for the period 2001-05. As such it provides further evidence of the importance of material deprivation for health outcomes; the clustering of deprivation found in very small population units such as LSOAs serves to guide the targeting of interventions to mitigate differences and set benchmarks to monitor change. --- Summary Deprivation and ill health are intimately linked. Monitoring this relationship in detail and with sufficient frequency is central to attempts to reduce health inequalities through more efficient targeting of healthcare resources.
This study explores the potential of the General Household Survey (GHS) to provide an inter-censal measure of health expectancies in small areas experiencing differing degrees of deprivation. The prevalence of health status and the health expectancy of males and females at birth and at age 65 by quintiles of small area deprivation are estimated. Comparisons are made between Census 2001 and GHS 2001-05 to inform the suitability of the latter as an inter-censal measure of health expectancy across small areas. Comparisons are also made between the health expectancies of people living in more and less deprived areas. Reports of 'good' and 'fairly good' health fell and health expectancies declined as deprivation increased. Consistency between census and GHS data indicates that the latter is a suitable source for the inter-censal measurement of health expectancies across quintiles of deprivation. At birth, people living in the least deprived areas can expect more than 12 additional years of life in good or fairly good health than those in the most deprived areas; at age 65 the difference was more than four years. In terms of the proportion of life spent in favourable health states: at birth, those living in the least deprived areas could expect to spend around 91 per cent or more of their lives in good or fairly good health compared to 82 per cent for those in the most deprived areas. At age 65, people in the least deprived areas could expect to spend around 82 per cent of their remaining life in good or fairly good health compared to 69 per cent or less for those in the most deprived areas. This study represents the first use of the Index of Multiple Deprivation (IMD) 2004 in the measurement of health expectancy across small areas. Both the census and GHS highlighted substantial differences in the health status and health expectancies of people experiencing differing degrees of ecological deprivation.
These findings serve as a useful measure and benchmark in the targeting and assessment of interventions designed to ameliorate health inequalities.
Introduction According to the International Labour Organization (ILO), children below the age of 18 years who are made to work are considered child labourers. They work in various industries and households for meagre wages, a practice prevalent all over the world. Many studies find that poverty, illiteracy and lack of education are the major drivers of child labour, which remains a significant problem in many developing countries, including India. A study by the ILO reveals that more than 152 million children are currently involved in child labour worldwide. In India, 3.9 per cent of children aged between 5 and 14 years are found in child labour. Such work violates constitutional provisions, including the fundamental rights of children, and harms children physically, mentally and emotionally through the unwanted burden imposed upon them. --- 1. The Child Labour Prohibition and Regulation Act, 1986: This Act prohibits the labour of children below 14 years of age in hazardous activities such as mining, explosives and brick-kilns. The Act also regulates the working conditions of children in non-hazardous occupations such as agriculture and manufacturing services, with some limitations, e.g., not more than six hours a day (for children 14-18 years of age). It provides for penalising whoever breaks the rule with a fine of Rs. 50,000 and imprisonment of two years. --- 2. The Juvenile Justice (Care and Protection of Children) Act 2015: The provisions of this Act apply to all matters concerning children in need of care and protection, and children in conflict with law, such as apprehension, detention, prosecution, penalty or imprisonment, and rehabilitation and social re-integration of children in conflict with law, along with related procedures, decisions and orders. --- 3. Adoption Regulation, 2017: It came into force on 16 January 2017.
As per this regulation, any orphaned, abandoned or surrendered child declared legally free for adoption by the Child Welfare Committee is eligible to be adopted by a parent or parents, whether married or unmarried (subject to gender regulations), provided they are physically, mentally and emotionally stable and financially capable. --- 4. The Juvenile Justice (Care and Protection of Children) Act 2021: This Act mandates equal rights for children and their protection. It also fulfils India's commitment as a signatory to the United Nations Convention on the Rights of the Child. Under this Act, the District Magistrate is empowered to deal with child protection and the adoption process. (Source: The Juvenile Justice Act, 2015 along with rules, 2022) --- An Outlook on Child Labour and Covid-19 in India 'Child labour' became a frequently used term around the world once the Covid-19 epidemic slowed its devastating spread, which had caused crores of deaths and unexpected cases. People became jobless and homeless due to continuous lockdowns, and no means of livelihood was arranged for many. On 24 March 2020, the Prime Minister of India, Mr Narendra Modi, announced a lockdown for 21 days, which followed a 14-hour curfew two days earlier, and the lockdown was then repeatedly extended until the spread of the virus diminished (Covid-19 Report, 2020). This alone brought a societal change, degrading the standard of living not only of the poor but also of the higher, educated and middle classes, without distinction. Many people lost their jobs and migrated from their places of work to their native places during that period. The mass movement of people at that time was described as the largest since the partition of India in 1947 (The Guardian, 30 March 2020).
Many children lost their parents during this travel, in the mass rush at stations or on the roads, and were later bound to work for low wages to keep themselves alive (Covid-19 Report, 2020). Many children who became orphans had to be self-sufficient to earn money and keep living. Parents allowed their children to work and support the family because of the socio-economic conditions and poverty they faced. A study by Reddy (June 2020) reveals that the children of daily-wage earners were worst affected by Covid-19: their families earn very little, and their children often go hungry for the next meal. UNICEF has warned the world of a child-rights crisis arising from Covid-19, which also threatens India, home to 472 million children, the largest child population in the world. The press has repeatedly reported that despite the many existing child labour laws in India, children work tirelessly in the agriculture sector, in paddy, vegetable and other farms, to sustain their livelihoods. In Assam, the press has reported that a large number of children (both male and female) have left their school education and joined tea gardens, brickyards, masonry, paddy fields, fishing and similar work in order to earn a living (The Assam Tribune, 5 December 2021). In Morigaon District of Assam, population plays an important role in the socio-economic scenario. The district had a population of 776,256 in 2001, which increased to 957,423 in 2011, within a total area of 1,551 sq. km (Census of India, 2001; Handbook of Assam, 2018; Nath, 12 December 2019). Population growth here is associated with the lack of education. The literacy rate of Morigaon District was 68.03 per cent in the 2011 census, which marks it as underdeveloped in business, trade, industry, culture, education and so on in comparison with many other districts of Assam.
The children of illiterate parents are unsafe: they may become child labourers or victims of child abuse, and they may also develop harmful habits such as stealing, smuggling of drugs and narcotics, trafficking of women, spreading HIV infection, and so on. --- The Relevance of Covid-19 to Child Labour in India The pandemic has caused a massive increase in child labour in the world, including in countries like India. It has pushed many children into the worst forms of work, alarming child rights organisations such as Educo and other stakeholders in India. The Country Director of Educo, Mr Guruprasad, says: "Among the various groups affected by the global pandemic, children remain one of the worst-hit across the globe. The pandemic has triggered a massive increase in the cases of abuse and violation faced by children in India as well. The condition of working children and children in forced or bonded labour in the country has only worsened in the light of Covid-19 pandemic" (News and Press Release, 5 July 2021). The main causes of child labour in India arising from Covid-19 were identified as the economic crisis following lockdowns, job losses, unsafe migration, closure of schools and withdrawal of various existing facilities. The Country Director of Educo (Mr Guruprasad) also added that the child labour problem was not new in India, but after the crisis of the pandemic it became prominent. We therefore need to understand that unless opportunities are created for marginalised children to engage in meaningful developmental activities, the problem cannot be reduced significantly. --- Objectives of the Study The objectives of the paper are: • To analyse the causes and consequences of child labour in the post-Covid-19 situation. • To identify the elements that contribute to child labour in Morigaon Town. • To find out the recovery measures taken by stakeholders to restore child labourers.
• To forward some suggestions for improving the present status of child labour, if needed. --- Research Design for the Study As this work is analytical in nature, the researcher adopted the following methodology for choosing the sample and collecting and analysing the relevant data: a) Sample: data were collected from four sources: • The District Child Protection Unit, Morigaon, Assam; • The Sarva Shiksha Abhiyan, Morigaon, Assam; • The secondary schools; and • The advocates dealing with child labour cases of Morigaon District. From the District Child Protection Unit, Morigaon, Assam, data were collected from the In-Charge of the office with the help of a questionnaire containing 15 open-ended questions. From the Sarva Shiksha Abhiyan, Morigaon, Assam, data were collected from the In-Charge of the office through interaction. From the secondary schools (3 government schools randomly selected) of Morigaon Town, data were collected from the restored children learning there, through interaction and observation; some interactions were also held with their parents, using a pre-fixed pro-forma. From the Morigaon court, data were collected from the advocates dealing with child labour cases of Morigaon District in the Nagaon District Court, as no dedicated child labour court is available in Morigaon at present. For this, a questionnaire was used. --- b) Tools used: For conducting the field study and collecting relevant data, the following tools and techniques were used: questionnaire, observation and interaction. --- Delimitation of the Study Child labour is a worldwide problem. Every society has a great responsibility to eradicate it, as children are the future of society. A country develops when its children are properly cared for and nurtured. This type of study can help society, stakeholders, and govt. and non-govt.
organizations and research scholars. However, due to time and cost constraints, the current study is delimited to Morigaon Town only, though it could be extended more widely, for example to Morigaon District, Assam, the North East, India or the world. The above table shows that in the year 2022, with the help of the District Child Protection Unit, a total of 38 child labourers were apprehended, of whom 35 were boys and 3 were girls. They were registered under the Child Protection Act 2021 and restored with the permission of the District Magistrate after proper counselling, as declared by the unit. --- Data Interpretation and Analysis The analysis shows that in the year 2022, when the Covid-19 pandemic had largely eased and people had resumed their regular lives, child labour cases came to notice. Among them, cases involving boys were far more numerous than those involving girls, i.e., 92.11% of the registered child labour cases. From the analysis, it can be assumed that these children come from lower socio-economic groups and engage in low-paid work that may not adequately support their families; as a result, they derive no satisfaction from the work they do so rigorously. Table 3 shows that during 2022-23, a total of 3,092 students in the 6-14 years age group left school; of them, 2,122 were boys and 970 were girls. The study finds that in Morigaon District, drop-out is greater among boys than among girls, i.e., 68.63% of out-of-school children (OOSC). Table 1 also supports this pattern: among the apprehended children, girls account for fewer cases than boys, i.e., 7.89% of registered child labour cases. From the analysis, we can assume that boys take on more family responsibility than girls, and in doing so forgo the educational and child rights provided by the government and the Constitution of India.
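As a quick arithmetic check, the percentages quoted above follow directly from the counts reported in the tables:

```python
# Recomputing the shares reported in the text from the table counts.
apprehended_total, apprehended_male, apprehended_female = 38, 35, 3
oosc_total, oosc_male = 3092, 2122  # out-of-school children aged 6-14, 2022-23

male_share_cases = 100 * apprehended_male / apprehended_total      # ~92.11%
female_share_cases = 100 * apprehended_female / apprehended_total  # ~7.89%
male_share_oosc = 100 * oosc_male / oosc_total                     # ~68.63%
```

Each figure matches the corresponding percentage given in the analysis when rounded to two decimal places.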
The data indicate that among teenage drop-outs, boys are more likely to continue working than to return to education. Some of the registered cases in this age group (15-17 years) were also found to be involved in anti-social activities such as drugs and sexual abuse (details are not disclosed for security reasons). Some of the girl students are also victims of child marriage. The above table shows that in 2018 only one child labour case was recorded, whereas in 2022 the number had risen to 38, a very significant increase. From the analysis, it is clear that Covid-19 has affected the people living in Morigaon and has directly pushed many children into work for their living. Child labour cases existed before the pandemic, but they were very few: only one case was found in 2018, whereas in 2022, just after Covid-19 had subsided, the number was significantly high. The problem therefore needs to be discussed and solutions found to eradicate it. --- Data Regarding Remedials/Initiatives taken by the DCPU a. A leaflet containing information regarding child labour is published yearly. b. Meetings are arranged between headmasters of schools and drop-out students. c. Awareness programmes are arranged. d. Girl child labourers who have been out of school for two years or more are given admission to the Kasturba Gandhi School in Morigaon District, Assam. The study finds that Covid-19 affected the socio-economic condition of the people of Morigaon District, Assam. Due to the lockdown, many people lost their source of income, and most cattle and poultry farms were closed owing to the unavailability of feed and declining sales. Other natural calamities, such as heavy rainfall, flood, erosion, and destruction of crops in paddy, jute and vegetable fields, also affected the people of Morigaon in the year 2020.
Within the next year, i.e., in 2022-23, 3,552 students were found to be out of school, engaging themselves in various activities so that they could earn money and support their families. --- Data Regarding Remedials/Initiatives taken by the Sarva Shiksha Abhiyan --- 2. It is significant that, of the total drop-out cases (3,552) in Morigaon District, only 38 child labourers were apprehended or recovered by the District Child Protection Unit (DCPU) in 2022. That is only about 1.07% of the drop-out population; the remaining roughly 98.9% are untraced. --- 3. Although the Govt. of Assam, in collaboration with the Sarva Shiksha Abhiyan (SSA), has launched a mission to trace drop-out children, progress so far is not satisfactory. --- 4. The study finds that the number of boys dropping out was larger than that of girls in 2022. In the 6-14 years age group, boys account for 68.63% of drop-outs, and in the 15-17 years age group, 59.56%. From the interactions, it was found that boys have to take more responsibility for their families, for reasons such as family size, poverty, poor livelihood, prolonged health issues of elder family members, death of a parent or parents from Covid-19, and the rise in prices of essential commodities. It is also notable that some of the drug and child marriage cases come from these teenage drop-out groups. --- 5. The study also finds that many students left school during Covid-19 because of the shift to online education. Many poor students were unable to buy an Android mobile phone to continue their studies, and many found the technology difficult to operate for lack of technical skills. They therefore left school and did not return, owing to fear, shame, anxiety and the loss of an academic year. --- 6.
The study finds that some initiatives have been taken by the District Child Protection Unit (DCPU) and the Sarva Shiksha Abhiyan to restore child labourers, namely: • publishing leaflets; • arranging meetings with headmasters; • conducting occasional awareness programmes; • arranging special training, etc. In practice, however, these have proved of little use to many. --- 7. The consequences of child labour are also found to be significant: i. 47.36% of the child labourers who worked in factories or garages were injured; some were left disabled, losing an eye, a hand or a finger during machinery or other hazardous work. ii. All (100%) of the child labourers wanted to live a free life without such responsibility; they suffer from mental distress such as fear, anxiety and depression, and lead unsatisfactory lives. iii. 50% of the child labourers have suffered from serious health issues such as respiratory diseases, ear, eye and stomach problems, and fever. iv. All (100%) of the child labourers are deprived of child rights such as a labour-free life, free and compulsory education, the mid-day meal, and other facilities provided by the Govt. of India. --- Some Suggestions and Recommendations --- 1. Laws against child labour should be strictly enforced. --- 2. Access to education should be made easy for victims, without regard to their age or years lost. --- 3. More awareness camps should be organised in society to publicise the impact of child labour. 4. Easy income opportunities should be created for the elderly, the infirm, child-bearing mothers, and so on. 5. Free time should be used for vocational or skill-based work such as soft-toy making, craft work, tailoring and weaving. 6. Financial assistance should be provided to needy persons to help them overcome their situation. 7. Social protection, such as health care, nutrition and sanitation, should be improved. 8. Emotional attachment and societal support should be given to the weaker sections of society.
--- Conclusion Child labour in India is a significant problem, and eradicating it requires a comprehensive approach involving stakeholders including the government, NGOs, communities, civil society and other sectors. By working together, it is possible to reduce the prevalence of child labour; only then will children's futures be safe and secure. --- Data Analysis from the Interaction and Observation During data collection, the researchers interacted with the stakeholders (In-Charges, officials, advocates, etc.), the victim children and their parents, and found different causes and consequences of child labour relating to Covid-19. They are discussed below: --- Main Causes found a. Besides Covid-19, other natural calamities such as rain, hailstorm, flood, erosion and destruction of crops during 2021-22. b. Parental death, disability and ailment. c. Poverty: insufficient resources for living. d. Poor income: low wages after day-long service. e. Large families: uncontrolled population, with many members living under a single roof in suffocating conditions. f. Lack of education: no understanding of quality of life, no proper family planning, no proper investment, illiteracy, little numeracy. g. Online education: inability to purchase an Android mobile, lack of technical skills, network issues, and so on. h. Rise in essential commodity prices: unaffordable rates for daily necessities. i. Joblessness: widespread during Covid-19 owing to the lockdown and safety measures. j. Unemployment: no job guarantee after investing in education, no suitable jobs available in society, constant struggle. --- Main Consequences found a. Physical: frequently injured or suffering while working. b. Emotional: lack of love and care from family members, the workplace and society. --- Summary Child labour, a risk factor all over the world nowadays, is considered an urgent matter of discussion.
The various causes and consequences of child exploitation in any form need to be sorted out, and immediate solutions found, to create a serene environment in which budding children can move freely, enjoy their childhood without fear, and grow mentally strong to become good human beings in society. Child labour is seen to be increasing day by day at present, owing to the after-effects of Covid-19 and other causes such as poverty, lack of education, societal negligence, addiction, illness, and so on. Thus, this paper aims to find out the various causes and factors responsible for child labour in Morigaon Town, Assam, at present. For this, data will be collected with the help of questionnaires from the In-Charge of the District Child Protection Unit (DCPU), Morigaon, Assam, the Sarva Shiksha Abhiyan, Morigaon, Assam, and the secondary schools of Morigaon Town, Assam, and through observation and interaction with the child labourers and their parents belonging to this central area, drawn from different villages and small towns. Moreover, many case reports of child labour after Covid-19, collected from the advocates of the Morigaon court, will be analysed.
Introduction Breast cancer is the leading cancer diagnosed in African American women and is the second leading cause of cancer death [1]. Furthermore, African American women have the highest age-adjusted rates of breast cancer mortality [1,2]. A diagnosis of breast cancer can cause varying degrees of psychological distress among women, and oftentimes there is the potential for future mental health issues and reduced quality of life if it is unresolved [3,4]. Furthermore, depression in breast cancer patients has been related to lower medication treatment adherence and higher mortality rates [5][6][7]. Despite depression's detrimental impact on breast cancer prognosis, this condition is rarely recognized and treated [7]. Thus, identification of factors that are related to depression among women with breast cancer is important to help clinicians address and integrate psychosocial needs into routine cancer care, as recommended by the New Quality Standard [8]. This endeavor is especially important for African American breast cancer patients, who face a worse prognosis after diagnosis than other racial and ethnic groups and who are understudied compared to their white counterparts [9]. There is some evidence to suggest that depression prevalence may vary by race and ethnicity, though data are equivocal and research in this area has been scarce [10][11][12]. A woman's response to her diagnosis is complex and may be the result of the interaction of several factors, including her internal capacities as well as her interactions with others. Therefore, consequences of negative life events such as breast cancer may differ between African American and White women. A woman's psychosocial response to breast cancer diagnosis has been examined, for the most part, through administration of personality inventories and structured clinical interviews.
However, limited empirical data exist that assess the level of depression symptoms in African American women with breast cancer or whether these rates are similar to African American women in the general community without breast cancer. The need for attention to mental health concerns of African American women has been noted in qualitative studies [13] but specific aspects are lacking such as examination of ego strength and the role of social support in mediating depression. Therefore, it is important to investigate African American women's psychological response to breast cancer status. This will provide a better understanding of the correlates of depressive symptoms in this group, which may help to reduce disparities in cancer outcomes. Existing studies have not examined the role that specific personality traits such as ego strength play in the manifestation of depressive symptomatology in response to breast cancer diagnosis in African American women. Ego strength, a concept widely examined in the field of psychology, has been defined as a measure of the "internal psychological equipment or capacities that an individual brings to his or her interactions with others and with the social environment." [14] (p.70). Because therapists and researchers have utilized ego strength to predict psychological adjustment and the success of patients in psychotherapy [15][16][17], it seems prudent to examine whether different levels of ego strength can assist in predicting the development of depressive symptoms in African American breast cancer patients. Another important element related to the psychological response to breast cancer diagnosis is social support. Social support often functions as a buffer from negative psychological reactions to both mental and physical illness [18]. 
Several investigators have examined the valuable role that social support plays in assisting breast cancer patients' adjustment to diagnosis [19][20][21][22][23][24][25][26][27][28][29] and have demonstrated an association between social support and depression among patients with the disease. Researchers have also found that perceived adequacy of support is a positive predictor of psychological outcome and response to breast cancer diagnosis [21,[30][31][32]. Some studies suggest that the dynamics of social support may vary by race. One study [21] found that the relationship between perceived social support and adjustment differed by race, and another study [33] showed that African American and White breast cancer patients tended to seek different sources of support. The National Comprehensive Cancer Network (NCCN) [34] recommends that oncologists routinely assess distress in cancer patients, yet there are scientific gaps in knowledge about the level of depressive symptomology in African American women with breast cancer, particularly in comparison to women without breast cancer. Although Black women tend to have an earlier age of onset of breast cancer compared to White women [35], there is a paucity of research with younger women (< 50 years old). The current study begins to address some of these gaps by addressing the following: (1) what are the levels of depressive symptomatology in young African American women with breast cancer? (2) does the level of depressive symptoms vary according to selected demographic factors (age, marital status, income level, occupation, education)? and (3) how much variance in depressive symptomatology is explained by ego strength, stage of breast cancer, and social support? --- Methods --- Design and Study Participants Approval for this study was obtained from the Institutional Review Board of the NIH, Georgetown University Medical Center, and Howard University.
This study compared depression in women diagnosed with breast cancer to those in the general community. The study focused on women between the ages of 40 and 50 to capture women who were old enough for mammography to be recommended and to account for the fact that African American women tend to have an earlier onset of breast cancer, yet younger breast cancer patients are underrepresented in the literature. Breast cancer cases were eligible if they were: African American, between 40-50 years old, diagnosed with breast cancer within 12 months of data collection, not currently being treated for depression, and not currently engaged in abuse of illicit drugs. Breast cancer cases (stage I-IV) were identified from pathology reports in a local hospital registry; 100 patients were mailed invitation letters from their physician and contacted for participation, of whom 76% agreed to join the study. A comparison group of women (n=76) was recruited from health fairs geared to provide cancer screening services to residents in the Washington, DC Metropolitan Area. Women were eligible for the comparison group if they were African American, between 40-50 years old, reported having had a mammogram within the last year with benign results, and were not being treated for depression or a mental illness. Women recruited from health fairs scheduled an in-person interview with the research assistant. All recruited participants were consented by a trained research assistant at university study offices, where participants completed a self-administered survey which took approximately 90 minutes to complete [36]. No monetary incentives were provided. --- Instruments The outcome variable was depressive symptomatology and the major predictor variables were ego strength, social support, stage of breast cancer, and demographic factors. Outcome measure-The Beck Depression Inventory-Short Form was used to assess depression symptomatology.
This tool is widely used and includes 13 items that assess the severity of current affective, motivational, and behavioral symptoms of depression in psychiatrically diagnosed patients and in normal populations (alpha range 0.74 to 0.95) [37,38]. In the current study, alpha=0.89. Each item consists of a list of four statements organized in increasing severity about a particular symptom of depression, and a 9-10 cut-off point is suggested for medical patients [39]. Predictors-Barron's Ego Strength Scale (MMPI-2) includes 52 items that measure aspects of effective functioning, adaptability, and personal resourcefulness. The scale has demonstrated acceptable reliability previously (alpha=0.66) [40] and in this study (alpha=0.70). Emotional and tangible support from networks was measured using the Norbeck Social Support Questionnaire (NSSQ). The NSSQ assesses structural properties (e.g., size of the network) and functional properties (emotional and tangible support). Respondents answer questions regarding: (1) a list of significant people in one's life; (2) length of association and frequency of contact with these individuals; (3) the degree to which each person provides emotional and tangible support; and (4) recent losses of supportive relationships. At least three scores are yielded from the NSSQ: a total functional score, a total network score, and a total loss score. Reported internal consistency Cronbach alpha coefficients for the NSSQ range from 0.89 to 0.98 [41,42]. In this study, the Cronbach alpha coefficient was 0.93. Other variables included on the survey were age, marital status, income level, occupation, education, and family history of breast cancer (yes vs. no). Breast cancer stage (I to IV) was captured for breast cancer patients. --- Statistical Analysis Descriptive statistics were used to describe sample characteristics of the study participants.
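The BDI-SF scoring logic described above (13 items, each rated 0-3 by severity, with a 9-10 cut-off suggested for medical patients) can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual analysis code; the function names and example responses are invented, and the choice of ≥10 as the operational cut-off is one reading of the suggested 9-10 range.

```python
# Illustrative sketch of BDI-Short Form scoring: 13 items, each rated 0-3
# (four statements of increasing severity), giving totals of 0-39.
# Reading the suggested "9-10 cut-off" as flagging totals >= 10 is an
# assumption; function names and example responses are hypothetical.

MEDICAL_CUTOFF = 10  # totals at or above this flag probable depression

def score_bdi_sf(item_responses):
    """Sum the 13 item ratings (each 0-3) into a total severity score."""
    if len(item_responses) != 13:
        raise ValueError("the BDI-Short Form has exactly 13 items")
    if any(not 0 <= r <= 3 for r in item_responses):
        raise ValueError("each item is rated on a 0-3 scale")
    return sum(item_responses)

def exceeds_medical_cutoff(total):
    """Apply the suggested cut-off for medical patients."""
    return total >= MEDICAL_CUTOFF

# A hypothetical respondent endorsing mostly mild statements:
total = score_bdi_sf([1, 0, 2, 1, 0, 1, 1, 0, 2, 1, 0, 1, 1])
print(total, exceeds_medical_cutoff(total))  # 11 True
```

The validation guards matter in practice: missing or out-of-range item responses would otherwise silently distort totals near the clinical cut-off.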
Independent sample t-tests, chi-square tests, and analyses of variance (ANOVA) were used to examine whether depression symptoms varied across the various groups. Post-hoc procedures were performed for pairwise comparisons. Two multiple regression models (with a stepwise selection method) were fit to the data to identify correlates of depression symptoms among women with breast cancer and among disease-free women. The coefficient of determination (R²) was reported to estimate the amount of variation in the depression symptoms scale explained by the explanatory variables in the model. All data analysis was conducted using SAS. --- Results --- Sample Characteristics The sample consisted of 152 African American women between the ages of 40 and 50 years, with a mean age of 44.0 (SD=3.11). Both women with breast cancer and those in the comparison group had a fairly high level of education, with 55.3% and 72.6%, respectively, having some college education or higher. Similarly, 48.7% and 53.9%, respectively, were employed in professional positions. Table 1 provides additional demographic information. No significant differences (p > 0.05) between the case and comparison groups were found in demographic variables (marital status, income, occupation, and education). As expected, depression was significantly higher in cases (mean=11.5, SD=5.0) than in the comparison group (mean=3.9, SD=3.8). Additionally, total functioning was significantly lower among cases compared to the comparison group (t(150) = 4.04, p < 0.001) (Table 2). Depressive symptoms varied according to the woman's stage of breast cancer, which emerged as the only significant main effect, F(1, 72) = 66.5, p < 0.0001. Mean levels of depression increased significantly as stage of breast cancer advanced, beginning at Stage I (mean=5.53). Tables 3 and 4 display results from the stepwise multiple regression analyses. Among breast cancer cases, stage of disease and age were positively related to depression.
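The analytic sequence just described, group comparisons followed by stepwise regression with R² reported, can be illustrated on toy data. The authors used SAS; this Python sketch of greedy forward stepwise selection is an assumption-laden stand-in, and the variable names (`stage`, `age`, `junk`) and simulated data are invented for illustration only.

```python
# Illustrative sketch: an OLS R^2 helper plus greedy forward stepwise
# selection, run on simulated data. Not the authors' SAS code; the toy
# data and variable names are assumptions made for illustration.
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (intercept added automatically)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def forward_stepwise(predictors, y, min_gain=0.01):
    """At each step, add the predictor with the largest R^2 gain; stop
    when no candidate improves R^2 by at least min_gain."""
    selected, best_r2 = [], 0.0
    remaining = set(predictors)
    while remaining:
        gains = {name: r_squared(
                     np.column_stack([predictors[p] for p in selected + [name]]), y)
                 for name in remaining}
        name, r2 = max(gains.items(), key=lambda kv: kv[1])
        if r2 - best_r2 < min_gain:
            break
        selected.append(name)
        best_r2 = r2
        remaining.discard(name)
    return selected, best_r2

# Toy data loosely mirroring the study's setup (n=76 cases):
rng = np.random.default_rng(0)
stage = rng.integers(1, 5, 76).astype(float)   # stage I-IV
age = rng.uniform(40, 50, 76)                  # ages 40-50
junk = rng.normal(size=76)                     # an irrelevant predictor
depression = 3 * stage + 0.3 * age + rng.normal(size=76)
selected, r2 = forward_stepwise({"stage": stage, "age": age, "junk": junk},
                                depression)
print(selected, round(r2, 2))
```

In the toy data the simulated stage variable dominates selection for the same reason stage emerged first in the paper's models: it explains the largest share of outcome variance, while the irrelevant predictor fails the minimum-gain threshold.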
Both factors were the only independent predictors of depression and explained 84% of the variance. Among women in the comparison group, ego strength and tangible support were independent negative predictors of depressive symptoms, explaining 32% of the variance. --- Discussion To our knowledge, this is among the first studies to compare levels of depressive symptomatology in African American women with and without breast cancer while examining the impact of internal characteristics and social support. We found that African American women with breast cancer reported greater levels of depression than women without cancer from community settings, and rates in this group were higher than those found in a recent study of mostly White breast cancer patients using the same screening tool (i.e., the BDI) [43]. We also found that women with breast cancer reported lower levels of functioning compared to women without cancer. These findings underscore the importance of recent guidelines to screen routinely for psychological morbidity in breast cancer patients. Relative to what is known about their White counterparts, little is known about adaptation in African American survivors. While previous research demonstrated that individuals with cancer have greater levels of psychiatric illness, especially depression, when compared to the general population [44][45][46][47][48][49], we have now expanded this knowledge to African American breast cancer survivors. We found that breast cancer stage and age were independently associated with depressive symptoms in African American women with breast cancer, accounting for a significant amount of the explained variance. Our finding that older age was associated with higher levels of depression is contrary to some reports that examined anxiety and depression and found that both were higher in younger African American women [50].
One explanation for this difference may be that the current study had a narrower age range and thus did not allow for comparisons between very young and very old African American women [51,52]. Overall, the sample could be regarded as "younger" in general, since participants were ≤ 50 years of age, and data suggest that compared to older breast cancer survivors, younger survivors report more psychological problems and adjustment difficulties [53]. Our findings point to important implications for long-term well-being in African American survivors because they are more likely to have an earlier age of onset of breast cancer compared to White women [35]. While there is limited empirical data regarding reasons for the depressive symptoms in African American women, younger women may have more concerns about taking care of their children, future child-bearing, and sexuality than their older counterparts [53][54][55]. In addition to age, certain contextual factors, such as financial barriers, lower socioeconomic status, and limited access to mental health services, may exacerbate these issues and increase vulnerability to depressive symptomatology in African American women compared to their White counterparts [56]. More information is needed about the particular problems and/or concerns of younger African American breast cancer patients, as well as interventions to address these issues. A higher stage of breast cancer was associated with higher levels of depressive symptoms, a relationship that has been reported in other populations [57,58]. These findings are consistent with previous research demonstrating relatively high levels of psychiatric distress (depression, anxiety) in patients with advanced stages of breast cancer [59,60]. Fulton's study [59] examined 80 women diagnosed with advanced-stage breast cancer from initial diagnosis through a 16-month period in an effort to monitor levels of depression and anxiety along with identifying mood disturbance.
Using cut-off scores on the Hospital Anxiety and Depression Scale (HADS), Fulton found that a relatively large proportion of the sample fell into the borderline and case ranges for both depression (31%) and anxiety (39%). In contrast, a cross-sectional study conducted by Kissane and colleagues [60] found high rates of psychiatric distress (depression) and disturbance in a sample of 303 women with early-stage breast cancer. These studies, however, did not compare depressive symptoms across breast cancer stages. A plausible explanation for the relationship between clinical stage and depression is that the clinical aspects of the disease (i.e., stage of cancer) may be more robust predictors of depression than the psychosocial variables included in this study, which would explain why the latter did not account for significant amounts of the explained variance. Although previous research has not specifically examined predictors of depression in African American women with breast cancer, ego strength and multiple dimensions of social support have been related to psychological adjustment to breast cancer [15,16,19,61,62]. The consistency of these findings across research designs and with different samples strengthens the conclusion that associations exist between ego strength, social support, and depression. However, the fact that stage of breast cancer emerged as the best predictor of depression in the present study underscores the important role that stage of disease plays in the psychological functioning of women within the sample. While ego strength and the various social support variables were non-significant predictors of depressive symptoms for women with breast cancer, ego strength and tangible support accounted for 32% of the total explained variance in depressive symptoms in the comparison group.
One possible explanation for the difference between the two groups (breast cancer and disease-free) is simply that the clinical condition of breast cancer is a more stable and robust predictor of depressive symptoms than intra-psychic characteristics of the individual (e.g., level of adaptability) and socio-cultural factors (e.g., social support). However, an alternative explanation may be that other personality factors not explored in this investigation (e.g., the distressed Type D personality) could possibly predict depression. For example, research has shown that Type D personality, also called distressed personality, has been linked with depression in other clinical populations (i.e., cardiac patients) [63]. Future studies in African American patients that include measures of personality traits such as Type D may be useful. Another explanation for the lack of association between social support and depression in the breast cancer group may lie in the inadequacy of the instrument to capture all the relevant types of social support for this group. Current measures of social support have often been developed and validated in White middle-class populations and might not include some types of social support deemed to be important for ethnic minorities or specific subpopulations, such as breast cancer patients [64]. For instance, in a qualitative study that explored perceptions and experiences of social support in African American women with breast cancer, Hamilton and colleagues [64] found various types of emotional support (e.g., presence of others, engaging in distracting activities) and tangible support (e.g., offers of prayers, assistance to continue religious practices) not included in Norbeck's emotional and tangible support subscales. Furthermore, informational support (e.g.
getting information about what to expect, validating the information received from the doctors) was found to be very relevant among African American women with breast cancer, but it is a dimension of support not covered by Norbeck's social support questionnaire [41,42]. Additionally, structural properties of the network included in the survey, such as the size of the network, might have a different impact on mental health in breast cancer patients than in the comparison group. For instance, Ashida and colleagues [20] found that for younger women with breast cancer, a reduction in network size was associated with better psychological adjustment. Thus, further research to understand the impact of various types of social support on mental health in specific subpopulations is warranted. It is nevertheless important to emphasize that one should not discount the valuable role that personality characteristics and available social support can play in the psychological functioning of women with breast cancer, given the findings regarding the benefits of social support and support groups. In addition to support groups, cancer providers have an important role in providing emotional and social support to women diagnosed with breast cancer [13]. Thus, future studies should examine healthcare interactions (e.g., patient-provider communication) and access barriers as other potential predictors of psychological morbidity in recently diagnosed patients [65,66]. While it is well noted that many African American women have strong spiritual coping [10,67], this does not preclude the need for psychosocial support during cancer treatment. The stress and fear associated with the diagnosis of breast cancer may indeed trigger a depressive response or reaction.
Additionally, it is likely that African American breast cancer patients, especially those of lower socioeconomic status, encounter economic as well as other barriers to cancer care [68], barriers that have been shown to be associated with depressive symptomatology in other minority groups [58,69]. Currently, little is known about the mental health referral process for African American women with breast cancer. However, limited data suggest that African American women are less likely to seek and/or receive necessary mental health services than their White counterparts [65,66,70,71]. Thus, receipt of appropriate psychosocial assessments and mental health referrals warrants attention for this group. Based on our data, greater efforts should be made to offer psychosocial support services, especially to younger African American women with breast cancer. Because comprehensive supportive care may not be readily available to all women with breast cancer, ensuring interdisciplinary collaborations between oncologists and mental health professionals is one practical step in this direction. The study had certain limitations. Due to the cross-sectional nature of the study, we cannot determine the causal direction of the association between depression symptoms and the predictors. Women were recruited in urban areas and eligible participants were between 40 and 50 years old; thus, results may not generalize to younger or older populations or to women who live in rural areas. We did not capture information about stage of treatment, which prevented us from analyzing the impact that stage of treatment may have had on depression levels. Nevertheless, this study is among the first to compare cases and controls among African American women, adding to the limited empirical data on correlates of depression and on the internal consistency and reliability of the selected measures for the African American female population.
As such, this research begins to close the gaps in knowledge about the general psychological presentation of African American women with breast cancer. Note: Scores on the ego strength scale range from 0 to 52; the depression scale ranges from 0 to 39. The ranges of scores on the other variables vary, since respondents were able to cite as many individuals as they wanted in their list of networks. | Purpose-This study assessed the levels of depressive symptomatology in African American women with breast cancer compared to those of women without breast cancer and examined which demographic, psychosocial, and clinical factors were correlated with depression. Methods-A total of 152 African American women were recruited from Washington, DC and surrounding suburbs. Breast cancer patients (n=76 cases) were recruited from a healthcare center, and women without cancer were recruited from health fairs (n=76 comparison). We assessed depression, psychosocial variables (ego strength and social support), and socio-demographic factors through in-person interviews. Stage and clinical factors were abstracted from medical records. Independent sample t-tests, chi-square tests, ANOVA, and multiple regression models were used to identify differences in depression and correlates of depression among the case and comparison groups. Results-Women with breast cancer reported significantly greater levels of depression (m=11.5, SD=5.0) than women without breast cancer (m=3.9, SD=3.8) (p<.001). Higher cancer stage (beta=.91) and higher age (beta=.11) were associated with depression in the breast cancer patients, explaining 84% of the variance. In the comparison group, ego strength and tangible support were inversely associated with depressive symptoms, accounting for 32% of the variance. Conclusions-Women with more advanced disease may require interdisciplinary approaches to cancer care (i.e., caring for the whole person).
Implications for cancer survivors-Depression is often under-recognized and under-treated in African American breast cancer patients. Understanding the factors related to depression is necessary to integrate psychosocial needs into routine cancer care and to improve survivors' quality of life. |
INTRODUCTION --- Disentangling the Relationship Between Social Protection and Social Cohesion: Introduction to the Special Issue --- Abstract (translated from the French résumé) There is substantial evidence on the effect of social protection on poverty and vulnerability. However, few studies examine its effects at the level of society. This article serves as the introduction to a special issue that investigates the relationship between social protection and social cohesion in low- and middle-income countries. In recent years, social cohesion has become a central goal of development policy. The introduction and the articles in this special issue use a common definition of social cohesion as a multi-faceted phenomenon comprising three attributes: cooperation, trust and inclusive identity. This introductory article provides a conceptual framework linking social protection to social cohesion, reviews the current empirical evidence on the bidirectional links, and highlights how the articles in this special issue help to fill gaps in existing research. In addition to this introduction, the special issue includes seven articles that cover different world regions and various social protection schemes, and that use a range of quantitative and qualitative methods. --- JEL classification D63 • H41 • H53 • H55 • I38 --- Setting the Scene The development community has shown an increasing interest in social protection since the start of the new millennium. This trend is due to the increasing evidence that sustainable poverty reduction is difficult to achieve without investment in social protection, because economic growth does not usually trickle down to reach an entire population.
Moreover, social protection is increasingly recognised as a key driver of economic growth, especially if it incentivises productive investments by low-income households. And, finally, an increasing number of publications (e.g. Babajanian 2012;Evans et al. 2019;Loewe et al. 2020;Molyneux et al. 2016) show that social protection can also contribute to political and broader societal developments, even if such effects are often not its primary intended goals. This special issue contributes to this wide-ranging and multi-faceted debate, focusing specifically on social cohesion. Over recent years, social cohesion has emerged as a central goal of development policy, as demonstrated by numerous publications by international organisations and bilateral donors such as UNDP (2016), the World Bank (Marc et al. 2012), and the OECD (2011). The reasons for this are three-fold. First, societies that are more cohesive are believed to be more resilient, in particular with respect to natural disasters and public health crises such as the ongoing global Covid-19 pandemic (Abrams et al. 2020;Townshend et al. 2015). Second, social cohesion fosters societal peace (Fearon et al. 2009;Gilligan 2014;UNDP 2020). Third, social cohesion contributes to local community development (e.g. Wilkinson et al. 2017), which often depends on a community's ability to agree on common goods to be created for the benefit of all community members. Identifying policies that foster social cohesion is therefore crucial, not least because political and social polarisation is currently rising in many countries worldwide (Carothers and O'Donohue 2019). Social protection is potentially one of these policies. Examining its effects on social cohesion is particularly important in the context of the Covid-19 pandemic's impact on health, societies and economic development globally. People around the world feel more vulnerable, which can undermine resilience and thereby bring about societal and political instability. 
However, the relationship between social protection and social cohesion is unlikely to be a one-way street: socially cohesive societies are deemed, in their turn, to provide better and more all-encompassing and acceptable social protection systems because their members share similar values; a shared understanding of the common good helps to identify generally acceptable compromises for the design of social protection systems. This bi-directional relationship, even though it is quite intuitive, has received only limited attention so far. One major reason is that there is no universally agreed concept of social cohesion and no established set of indicators to measure it (see Babajanian 2012;Bastagli et al. 2016). In this special issue (SI), we address the problem by relying on a clear definition of social cohesion, which is quite similar to many definitions already suggested by the existing literature but is sufficiently narrow for straightforward operationalisation. According to this definition, social cohesion is composed of three main attributes: cooperation for the common good, trust and inclusive identity (Leininger et al. 2021a). The SI examines the possible effects of social protection on social cohesion and-though less so-those of social cohesion on social protection. It aims to address three sets of interrelated guiding research questions. The first set is whether different social protection schemes generate effects on social cohesion, and which ones have the strongest effects and on whom. Only the direct beneficiaries or the entire population? Mainly people in poverty? Mainly people working in the formal sector? Mainly women or men? The second set concerns the conditions under which these effects materialise. When exactly do they arise? In which contexts? Does it matter if a social protection scheme has been set up by the state or by other actors? Does the quality of targeting or quality of benefit delivery play a role? 
How important is the reliability and institutional durability of the schemes? And the third set of questions is whether social protection influences all aspects of social cohesion in the same way. Or does it perhaps affect mostly inclusive identity because beneficiaries (and possibly others) all feel better integrated into society-or horizontal trust because social benefits can bridge gaps and overcome hostility between different socio-economic classes? Likewise, we have to ask if all of these components are equally important for the existence and functionality of social protection schemes. And at the same time, we ask what role social cohesion plays for the planning, design, setup and operation of social protection programmes. The remainder of this introductory article is structured as follows. The next two sections present the concepts of social protection and social cohesion endorsed in this SI. The fourth section introduces the conceptual framework linking social protection to social cohesion, while the fifth reviews the existing empirical literature on the causal effects going either way. The last section presents the key findings of the papers in this SI as well as the gaps remaining in research so far. --- Social Protection The notion of social protection in international development is still quite ambiguous. Most people would probably agree with the definition of social protection as:... the entirety of policies and programmes that protect people against poverty and risks to their livelihoods and well-being. (Loewe and Schüring 2021, p. 1) This means that social protection includes all measures that help people in their efforts to (i) prevent risks (e.g. by healthy diet, cautious behaviour in street traffic, safety at work, vaccinations, social distancing during pandemics), (ii) mitigate risks (e.g. by crop or income diversification, the accumulation of savings or insurance or risk hedging) and (iii) cope with the effects of risks (e.g. 
by credit or income-support to people in need). At the same time, there is still disagreement on some features of social protection, such as the following: (i) who can provide social protection (just the state, or also private actors such as private health or life insurance companies, social welfare organisations, informal self-help groups or society-based mutual support networks) and (ii) which risks should be covered (just health and life-cycle risks such as longevity and work disability, or also political, macroeconomic, natural and environmental risks such as theft, terrorism, drought, soil degradation or business failure). At the very minimum, though, social protection includes (i) non-contributory transfers (direct and indirect, in cash or kind), (ii) social insurance (which is contributory and activated in case of contingencies only), (iii) micro-insurance, (iv) labour market policies (passive and active) and (v) social services (such as therapy, training, or rehabilitation) (Loewe and Schüring 2021). The core purpose of social protection is to reduce vulnerability and poverty in a country by preventing people from falling into poverty (preventive function), providing support to those who are living in poverty (protective function) and enabling low-income earners to escape from poverty (promotive function) (Loewe 2009a). Thereby, it contributes to nutrition, education and health because it allows even low-income people to buy food, consult a physician when they are sick and send their children to school rather than to work (Burchi et al. 2018;Kabeer 2014;Strupat 2021). And social protection is also one of the most powerful tools to reduce income inequality between social classes (Inchauste and Lustig 2017) and genders (Holmes and Jones 2013). However, social protection can also have a transformational function by addressing the root causes of poverty and vulnerabilities, such as unequal power relations or the unjust distribution of public resources.
This function has been less explored so far, given the indirect link between many social protection programmes and transformational outcomes of interest, such as social equity and inclusion, empowerment and rights (especially labour laws). At the same time, social protection also matters for economic development (Barrientos and Malerba 2020;Loewe 2009a): On the one hand, it enables even low-income households to address risks, smooth income volatility and improve the inter-temporal allocation of income. Thereby, it improves the lifetime utility of households and reduces the pressure put on networks and society as a whole to provide support for people in need who have failed or omitted to make provision for themselves. On the other hand, social protection encourages low-income earners to make investments and thereby improve their future income expectations. This effect is due to the fact that people with low income and insufficient social protection tend to deposit any possible small savings in a safe place, from which they can easily withdraw the savings without penalty whenever they suffer a loss caused by bad harvest, illness, unemployment or any other risk. This preference changes only once people enjoy reliable and sufficient social protection against at least their most fundamental risks. Some empirical evidence shows that from then on, people start investing at least some of their savings in machines, new modes of production, training or better education for their children (Gehrke 2019). Investments like these bring about new risks (investment failure), but they raise future income expectations. Likewise, Borga and d'Ambrosio (2019) find that beneficiaries and non-beneficiaries alike increased their investment in asset formation and livestock holding in response to the launch of cash-for-work programmes in India and Ethiopia. And Bastagli et al.
(2016) confirm a clear relationship between cash transfer receipt and increased school attendance, the use of health services and investment in livestock and agricultural assets. If well designed, social protection can thus be a key driver of pro-poor growth: growth that benefits predominantly low-income people (Alderman and Yemtsov 2014;Bhalla et al. 2021;Ravallion et al. 2018;Sabates-Wheeler and Devereux 2011). --- Social Cohesion Social cohesion refers to the ties or the "glue" that hold societies together (Durkheim 1999). Overall, there is broad agreement that social cohesion is a complex, multi-faceted phenomenon encompassing a horizontal and a vertical dimension (Jenson 2010). While early studies equated social cohesion only with the relationships among individuals and groups in a society (horizontal dimension), over recent years equal emphasis has been placed on the relationship between individuals and state institutions (vertical dimension) (Chan et al. 2006;OECD 2011;Langer et al. 2017;Lefko-Everett 2016). In the search for an operational definition of social cohesion, we adapt the minimalist approach suggested by Chan et al. (2006), according to whom the concept should be "thin", including only the core attributes and excluding the determinants (e.g. inequality) and the outcomes (e.g. peace) of social cohesion. As often stated in academic and policy debates, inequality is likely to play a key role in determining social cohesion in a society (Leininger et al. 2021a). However, verifying the relationship between social cohesion and inequality analytically is not possible if they are part of the same concept. Against this background, in this SI we endorse the definition provided by Leininger et al. (2021a): Social cohesion refers to both the vertical and the horizontal relations among members of society and the state as characterised by a set of attitudes and norms that includes trust, an inclusive identity and cooperation for the common good.
(Leininger et al. 2021a, p. 3). Based on this definition, social cohesion has three attributes: cooperation for the common good, trust and inclusive identity. All three attributes have a horizontal and a vertical dimension. The first attribute is cooperation for the common good. When many people or groups cooperate for interests that go beyond, and sometimes even conflict with, those of the individuals involved, it is a clear sign of high social cohesion, because people who cooperate for the common good care about society. Cooperation among individuals and groups represents the horizontal dimension, while cooperation between individuals/groups and state institutions represents the vertical dimension (Chan et al. 2006). For instance, the maed magarat ("dish sharing") in Ethiopia-a food-sharing initiative between neighbours to counter the effects of the first wave of the Covid-19 pandemic (Leininger et al. 2021b, box 4)-is a form of horizontal cooperation. In turn, investing time to take part in participatory budget processes to define the purposes of public spending is an example of vertical cooperation for the common good. The second attribute is trust (Chan et al. 2006;Dragolov et al. 2013;Langer et al. 2017;Schiefer and van der Noll 2017). Social cohesion includes two types of trust: generalised trust and institutional trust (Fukuyama 2001;Zerfu et al. 2009;Langer et al. 2017). Generalised trust is the "ability to trust people outside one's familiar or kinship circles" (Mattes and Moreno 2018, p. 1), and it captures the horizontal dimension of social cohesion. Institutional trust, instead, refers to trust towards the core, structural public institutions of a country (Mattes and Moreno 2018), and thus covers the vertical dimension. The third attribute of social cohesion is inclusive identity. Most people feel they belong to different groups, and thus have several identities (such as religion, ethnicity, gender, village, family, class).
A socially cohesive society is one in which individuals can have different identities and yet live together peacefully, and where a minority with a shared identity does not dominate the majority with a collective identity. In other words, different group identities tolerate, recognise and protect each other, while state institutions support such tolerance for different identities. In cohesive societies, individuals can still have different group identities, but they should also have a feeling of mutual belonging to a broader unity (the nation) that is more than the sum of its members and can bridge identities. There are still diverging views in the literature, especially regarding some potential ingredients of social cohesion. As stressed in a comprehensive review article by Schiefer and van der Noll (2017), two such candidate ingredients are "quality of life/well-being" and "inequality". We do not integrate well-being and inequality into our concept of social cohesion for three reasons. First, in line with other scholars (Dragolov et al. 2013; Schiefer and van der Noll 2017; Burchi et al. forthcoming), we argue that social cohesion is a "macro-level" or "meso-level" phenomenon. It is thus a specific trait of a community, a country, a region or the world as a whole. The literature on well-being, instead, focuses on individuals or households as units of analysis and refers to their living conditions in different life domains (Sen 1985). Second, it is problematic to include inequality as one of the constitutive elements of social cohesion, as a notable number of studies do (Langer et al. 2017; Canadian Council on Social Development 2000; Berger-Schmitt 2000). It would imply that, by construction, societies that are more unequal are less socially cohesive. While it is plausible to expect a (negative) relationship between inequality and social cohesion, incorporating the former in the definition of the latter does not allow for testing it empirically.
Third, in view of the objectives of this SI, having well-being or inequality as integral parts of the concept of social cohesion would generate particular problems. The expansion of well-being and the reduction of inequality are often considered two direct objectives of social protection. If they were included in the concept of social cohesion, social cohesion would be identified as a primary goal of social protection as well. In other words, any policy that enlarges well-being or reduces disparities would automatically increase social cohesion; instead, it is important to verify whether such a policy also contributes to social cohesion through either of these two channels (or others). --- Conceptual Framework: Relationship Between Social Protection and Social Cohesion There are good conceptual arguments for the assumption that social protection and social cohesion affect each other (see Fig. 1). As highlighted in the second section of this article, the goal of social protection is to reduce poverty and vulnerability and to contribute to pro-poor growth (Molyneux et al. 2016). In this SI, we argue that social protection can contribute to societal and political development as well. More concretely, we state that social protection can have positive effects on all elements of social cohesion: inclusive identity, trust and cooperation for the common good. Households that are well protected against the most serious of their individual risks can be assumed to have more confidence in themselves, feel better included in society because they have more opportunities than other households do and, hence, feel less alienated from other groups of society (Babajanian et al. 2014). This includes the positive impact of effective measures to mitigate climate change. In addition, social protection is an important tool to reduce inequality, i.e. disparities between different parts of the population.
It would thus contribute not only to the inclusive identity (feeling of belonging) of beneficiaries but also to their trust in other members of society (horizontal trust), even if these belong to other segments/groups of society (upper arrow in Fig. 1). These effects can be particularly strong if social protection schemes incentivise interactions between members of different societal groups (such as in the case of CfW schemes, where people with different origins, females and males, work side by side for the common good). Their cooperation in the creation of a common good can foster the acceptance into a group of individuals from outside that group, and the acceptance and legitimacy of social protection schemes in a society. In addition, we assume social protection schemes strengthen vertical trust, at least if they are implemented or financed by the state (Burchi et al. 2020; Haider and Mcloughlin 2016; Loewe and Zintl 2021). Their beneficiaries are likely to be grateful to the actors that support them financially or by providing efficient instruments to deal with risk and poverty.1 Consequently, the overall trust of beneficiary households in public institutions is likely to increase, at least if social protection schemes are universal or well targeted at those in need (see below). Social protection thereby establishes a stronger relation between citizens and the government (Babajanian 2012).

[Fig. 1 Main mechanisms between social protection and social cohesion. Source: Authors. Annotations: countries with high social cohesion supposedly face less resistance against the implementation of social protection schemes; where social cohesion is strong, government officials are more likely to engage in social protection; where vertical trust is strong, people rely on the continuity of social protection and hence are more likely to change their behaviour (moderating, non-mediating effects).]
As a result, citizens tend to be more willing to accept their current government and the given political order, and to invest in public goods such as public order, the tidiness of streets or communal action (Burchi et al. 2020; Loewe et al. 2020). This effect on vertical trust is found to be especially relevant for climate change mitigation policies, where both cash transfers and trust in government play a key role (Klenert et al. 2018). The intensity of all these effects depends on the design and implementation of social protection schemes (middle arrow in Fig. 1). For example, trust in the government is likely to increase mainly if the social protection schemes are set up or effectively financed by the government and if the population is aware of this fact. If, however, social protection schemes are run and financed by non-governmental organisations or foreign donors, they might even have negative effects on citizens' trust in their state. Good communication can also be helpful. Vertical trust and cooperation are likely to increase more if the government gives a clear explanation of the rationale for the existence and design of a social protection scheme and makes clear that it is financed by scarce public resources. The most effective strategy to foster trust in the government is to establish social protection as a citizens' right rather than as poverty relief (Evans et al. 2019; Vidican Auktor and Loewe 2021). In addition, social protection's effect on vertical trust is likely to be stronger if membership and targeting criteria are reasonable and transparent, and if citizens have reason to believe that the targeting is rule-based and fair in practice. In contrast, we can assume that high errors of inclusion and exclusion (and even rumours about them) have negative effects on citizens' trust in the government. A lack of transparency can create feelings of unfairness and resentment as well, thereby worsening horizontal trust (Molyneux et al. 2016).
In particular, it can create conflicts between direct programme beneficiaries and those excluded but perceived to be in similar conditions (Adato 2000; Adato and Roopnaraine 2004; Loewe et al. 2020). Even worse could be a situation whereby these programmes are targeted based on political considerations, or are at least perceived as such by the population. For example, social protection programmes often benefit mainly the middle class rather than the poor (Loewe 2009a), which can be intentional or not but in any case intensifies existing inequalities and hence weakens both horizontal and vertical trust (Köhler 2021). Moreover, some schemes, such as cash transfers targeted at the poor, can increase stigma and thus reduce social inclusion and social cohesion when not adequately designed (Li and Walker 2017; Loewe et al. 2020; Roelen 2017). Finally, if these programmes are not endorsed by the sections of society not directly addressed by these interventions, the net effect may be negative. In the best case, the target population itself participates in the design and implementation of social protection programmes, which adds to the positive effects on horizontal and vertical trust (Sabates-Wheeler et al. 2020; Loewe et al. 2020). Likewise, the effect of social protection on people's inclusive identity rises with the level and reliability of benefits. And the way beneficiaries are treated by government officials certainly plays a role as well (middle arrow in Fig. 1). Also, social protection supposedly improves the vertical dimension of all attributes, assuming schemes are universal rather than differentiated according to social groups (such as employment-related social insurance schemes, programmes of professional or trade unions or geographically targeted social transfer schemes). Poverty-targeted programmes may also be acceptable to large parts of the population if their targeting criteria are just, transparent and easy to understand.
In any case, social transfer schemes are likely to generate stronger effects on social cohesion than social insurance schemes, in which members finance their own benefits. Though this SI mostly focuses on the effect of social protection on social cohesion, it also touches on the reverse relationship. In addition to being a goal in itself, social cohesion is also crucial for the implementation, design and effectiveness of social protection schemes (lower arrow in Fig. 1). First, policy-making depends highly on the readiness of policy-makers to set up social protection schemes benefitting not just their clientele or peer group (ethnicity) but the entire population, or poor and other vulnerable groups in particular. Supposedly, this readiness is higher in countries with strong horizontal and vertical trust and cooperation for the common good. In addition, governments of countries with high social cohesion are less likely to face resistance against the implementation of social protection schemes for the poor and vulnerable. Recent studies in the context of Covid-19 have highlighted that trust in government is crucial for the selection of, and compliance with, containment policies (Bargain and Aminjonov 2020; Devine et al. 2020). This shows that high social cohesion endows governments with public confidence. When social cohesion is weak, however, it may be much more difficult for governments to set up social protection schemes successfully, as citizens may feel resentful about social protection programmes they do not like, which may ultimately foster grievances in society (Abrams et al. 2020; Wilkinson et al. 2017). Second, policy implementation benefits from social cohesion, too.
Where social cohesion is strong, government officials are more likely to take actions that enhance the welfare of the population as a whole, and less likely to neglect their duties, misappropriate public funds and give preferential treatment to their peer group (family, home province, friends etc.). As a result, social protection schemes are more efficient and functional. Third, policy reception is possibly even more important. Where vertical trust in the government is strong, people rely on the continuity of social protection policies and schemes and hence are more likely to change their behaviour (e.g. invest savings rather than hoarding them because they feel safe). Where sense of belonging and horizontal trust are strong, beneficiaries are more likely to share their benefits with other households and invest in social capital. As a result, the effectiveness of social protection schemes increases, including their multiplier effects. For traditional social protection schemes (solidarity networks among neighbours and friends) based on societal structures, social cohesion is even more important. They cannot work without horizontal trust, and if horizontal trust and sense of belonging are particularly strong, traditional social protection schemes are even based on generalised, rather than balanced, reciprocity, i.e. they function like an insurance rather than a mutual credit club. Where people have lived together for most of their lives and hence can trust each other, they are ready to help relatives, neighbours and friends in need without an expectation that their support will ever be paid back. Everybody receives support from those who are able to provide it, and gives support whenever they can to whatever person is in need-but this other person is not necessarily the same person that has provided help before. The reciprocity is thus between individuals and the community rather than between individuals, such as in the case of balanced reciprocity (Cronk et al. 
2019; Loewe 2009b). In this SI, we would have liked to discuss both directions of the relationship between social protection and social cohesion equally. Much of the discussion in subsequent sections focuses, however, on the effects of social protection on social cohesion. The reasons are two-fold: the increasing importance of social cohesion as a policy outcome and goal, which also feeds into empirical research agendas, and the fact that there is even less empirical literature on the effects of social cohesion on social protection than on the effects of social protection on social cohesion. This is possibly for two reasons. First, testing the effects of social cohesion on social protection requires variance in social cohesion, which exists mainly across countries; cross-country comparisons, however, are often impossible because of the lack of comparable data on social protection systems. Second, given the presence of several confounding factors, it is difficult to attribute differences in social protection programmes clearly to differences in the level of social cohesion. --- Empirical Evidence Unfortunately, empirical evidence for the assumed bi-directional relationship between social protection and social cohesion is limited and scattered, because social cohesion is hardly ever an explicit goal of social protection programmes and hence rarely considered in monitoring and evaluation reports. The few existing empirical studies define social cohesion in quite different ways, but most of them operate with attributes that are no different from, or very similar to, the ones we use: cooperation for the common good, trust and inclusive identity. Unfortunately, the bulk of the studies focus on the horizontal dimension of these three attributes. In addition, the studies in this SI applied different research methods, which do justice to their specific research subjects and questions.
A number of studies provide empirical evidence of positive effects on the horizontal dimension of social cohesion. Most of them look at cash transfer schemes in sub-Saharan Africa or Latin America. Adato (2000), for example, conducted focus group discussions with different actors in 70 communities across six states of Mexico and finds that cash transfers have positive effects on horizontal trust. Two studies using both survey and experimental data document that a conditional cash transfer in Colombia has increased beneficiaries' willingness to cooperate with each other (Attanasio et al. 2009, 2015). Relying on existing secondary data, primary data and the implementation of different qualitative and participatory methods in Yemen, West Bank and Gaza, Kenya, Uganda and Mozambique, Pavanello et al. (2016) confirm that both social insurance and assistance schemes can contribute to horizontal trust and inclusive identity by promoting local economic development. FAO (2014) provides evidence from several cash transfer schemes in sub-Saharan Africa indicating their positive impacts on social relations and participation in community events. ORIMA and the Asia Foundation (2020) argue that Timor-Leste's Covid-19 cash transfer programme has a positive effect on horizontal trust: 82% of the population stated that Covid-19 had brought their community together, in contrast to 70% immediately before the pandemic. Also in this SI, Beierl and Dodlova (2022) find that CfW activities in Malawi increase the readiness of people to invest in public goods as well as to interact with others from the same or a different societal group. Andrews and Kryeziu (2013) provide evidence that CfW programmes in Ethiopia and Yemen have improved social cohesion through citizen participation in programme design. Roxin et al. (2020) find that CfW schemes in Turkey and Jordan have contributed to horizontal trust and the sense of belonging of participants and non-participants.
Zintl and Loewe (2022, in this SI) confirm the finding for Jordan. UNHCR (2019) also finds that the Kalobeyei Integrated Social and Economic Development Programme (KISEDP) in Kenya, which enables refugees to purchase supplies from local shops and thereby promotes interactions, has positive effects on horizontal trust between refugees and locals in Turkana. Likewise, Köhler (2021) presents some case studies to show that social protection programmes reduce poverty and thereby contribute to social inclusion, the overall satisfaction of people and, ultimately, social cohesion. Reeg (2017) suggests that the existence of social protection programmes raises the opportunity costs of being part of an armed group. And two studies (Lehmann and Masterson 2014; Valli et al. 2019) have assessed quantitatively the impacts of specific social protection programmes in refugee settings, providing initial evidence of positive effects on social relations among refugees and, in one case, also between refugees and local communities. However, a few studies suggest negative effects of social protection programmes on social relations (Adato and Roopnaraine 2004; Cameron et al. 2013; Kardan et al. 2010; Pavanello et al. 2016). In the majority of cases, the authors find that the lack of transparency and/or clarity in the targeting of beneficiaries generated feelings of jealousy among households that did not benefit from the programmes, thus increasing tensions with beneficiaries (Molyneux et al. 2016; Roelen 2017; Sumarto 2020; Burchi and Roscioli 2022 in this SI; Camacho 2014 in this SI). In addition, benefitting from social protection can also bring stigma and lower social cohesion (Hochfeld and Plagerson 2011). In a study based on individual interviews with different actors in Sri Lanka, Godamunne (2016) shows that disrespectful treatment by government officials and delays in the transfer of benefits weaken the vertical trust of beneficiaries of a social transfer programme.
Roelen et al. (2022, in this SI) provide insights into different graduation programmes in Burundi and Haiti, showing that social protection can have positive and negative effects at the same time. While it can contribute to dignity, participation in social activities and a sense of belonging, stringent targeting and discretionary provision of benefits can, in time, undermine trust among non-participants. The evidence concerning the relationship between social protection and the vertical dimension of social cohesion is even scarcer and hence even less conclusive. Building on experimental data, Evans et al. (2019) find that a conditional cash transfer in Tanzania significantly increased vertical trust in local leaders and a self-reported willingness to participate in local projects. And this effect seems to be higher when beneficiaries are better informed about the central role played by the local government. In Brazil, however, Bolsa Familia did not reach the same positive results because beneficiaries did not believe that the institutional spaces designed to ensure their representation, the municipal-level councils, were "truly available to them for participation, monitoring, and accountability" (Molyneux et al. 2016, p. 1093). Other studies find negative effects of social protection schemes on societal perceptions of government (Aytaç 2014; Bruhn 1996; Guo 2009). Likewise, Zepeda and Alarcón (2010) show that social protection programmes foster vertical trust only if they are institutionally sustainable. Gehrke and Hartwig (2018) conduct an extensive literature review on public works programmes and suggest that the involvement of foreign donors in social protection policies can harm vertical trust.
Zintl and Loewe (2022, in this SI) provide evidence in support of this assumption; they find CfW programmes to have positive effects on horizontal trust in Jordan; however, they also report that these same effects are much weaker where participants are aware of the fact that foreign donors rather than the national government have set up the respective CfW schemes. In addition, the effect is also much weaker if the targeting of transfers is perceived as unfair or non-transparent. Similarly, Camacho (2014) finds that the conditional cash transfer in Peru increases vertical trust only among the beneficiaries, and decreases it among non-beneficiaries. Köhler (2021) presents anecdotal evidence that the introduction of a social pension and a child benefit scheme in Nepal has been a major factor in the increase in vertical trust in Nepal after 2009, while the dismantling of pension schemes in Chile led to a decrease in vertical trust. Looking at the reverse side of the relationship, we find even less empirical evidence. Based on qualitative analysis, Hossain et al. (2012) find that the Indonesian unconditional cash transfer programme Bantuan Langsung Tunai had positive effects on different outcomes only in high social cohesion communities. In a study covering four Asian countries, Babajanian et al. (2014) find that the impacts of social protection schemes depend substantially on the local institutional setting and, above all, on the nature of the relationship among social groups. Indeed, where gender and ethnic disparities were high, especially due to the existence of discriminatory rules against women and specific groups, programme performance was lower.
Roelen et al. (2022, in this SI) show that the quality of horizontal relationships at the community level plays an important role in the success of two different graduation programmes in Haiti and Burundi. --- Findings of this SI and Their Implications for Future Research The remainder of the articles in this SI contribute to filling some of the gaps outlined in this introduction in two ways. The first is by using a common understanding and definition of social cohesion. Some papers focus on just some components of the definition (especially trust), but they all share a common understanding. This facilitates a comparison of the findings across papers.
Second, the articles look at several mechanisms linking social protection and social cohesion, and also represent a good balance between qualitative and quantitative methods. In addition, the aforementioned comparability advantage of the SI is strengthened by the different contexts and countries considered, as well as the different social protection programmes analysed, ranging from long-term to short-term ones, from conditional and unconditional cash transfers and public works schemes to graduation (or social protection plus) programmes2 and contributory social insurance schemes. Table 1 provides a brief overview of the main features of the different articles of the SI. The majority of the articles advance our understanding of the effects of social protection on social cohesion. Burchi and Roscioli, for example, look at the effects of an integrated social protection programme on social cohesion in Malawi using a mixed-methods approach. Specifically, they exploit an experimental design and primary household data for about 800 households in total to investigate the impact of three different components of the programme on a set of indicators for the trust and cooperation attributes. Informed by the results of the econometric analysis, they then examine the contribution of one specific component-participation in the saving groups-through focus group discussions and individual interviews. The study shows no concrete effect of a lump-sum payment on social cohesion, but a positive effect of both the training and participation in savings groups on within-group trust and (economic and non-economic) cooperation. Conversely, vertical trust towards local institutions and horizontal trust towards other village members declined, in particular due to jealousy and tensions arising from the targeting of social protection. The authors thus underline the possible limitations of just giving cash, as well as the potential of savings groups. 
Still in Malawi, Beierl and Dodlova investigate whether a public works programme affects cooperation for the common good. The authors address this research question through quantitative analysis applied to primary and secondary data. The primary data, collected in two waves (2017 and 2019), cover 500 randomly selected households; the secondary data are from the nationally representative integrated household survey conducted by the World Bank in three waves (2010, 2013 and 2016). The paper finds that the scheme improves cooperation among community members and speculates that this may, in turn, improve trust among community members and the perception of state institutions. Strupat examines the effects of social protection on social cohesion during a large covariate shock such as the Covid-19 pandemic in Kenya. He does so econometrically, by using a difference-in-difference model and household data collected before and after the Covid-19 pandemic. His analysis suggests that social assistance has no statistically significant preserving effect on social cohesion overall. Ongowo presents the results of qualitative research on the effects of social protection on social cohesion, focusing on street children in Kenya. The author conducted comprehensive qualitative content analysis of key informant interviews with twelve government officials, and in-depth qualitative interviews with twelve randomly selected former street children who previously benefited from social protection programmes. He finds that social protection can be an important tool to build social capital and solidarity. In particular, he concludes that social protection programmes improve the chances of street children developing a career, reduce public resentment towards street children and, thus, enhance various aspects of social cohesion. Zintl and Loewe in turn look at social cohesion in the context of state fragility and migration, with a focus on donor-funded programmes.
They analyse the effects of public works/CfW programmes in Jordan on participants and non-participants, in both cases Syrian refugees and Jordanian locals, females and males. Their results are based on qualitative analysis of key informant interviews (281 with CfW participants and non-participants at nine CfW sites all over Jordan, 99 with neutral observers at local and national levels), four group discussions and quantitative analysis of a census among all participants of one specific CfW programme. The results confirm effects on the sense of belonging and horizontal trust of participants and non-participants, refugees and locals. In particular, they provide evidence for a positive effect on women becoming more active in the economy and in society. The results for vertical trust, however, are more ambiguous because many Syrians and Jordanians attribute positive effects to donor support rather than to Jordanian authorities. Other papers in the SI look at the broader picture by also including the effects running from social cohesion to social protection. Roelen et al. conduct extensive qualitative analysis to investigate the bi-directional relationship between social protection and social cohesion in Burundi and Haiti. In particular, key informant interviews and focus group discussions were conducted with programme participants (male and female) and programme staff. Data collection was based on semi-structured discussions as well as interactive activities such as ranking exercises. They find that the existing programmes have strengthened some aspects of social cohesion, such as dignity and positive identity, whilst also having negative effects on others, such as the sense of belonging and togetherness. However, they also find that social cohesion enhanced the positive effects of social protection programmes. Malerba looks at social protection and social cohesion in the context of climate change mitigation.
This is important, as climate mitigation policies are strictly related to socio-economic development in low- and middle-income countries. While some of the issues have been investigated in separate literatures, a unifying framework or empirical analysis considering the combined effects of social protection and social cohesion on the implementation of climate mitigation was lacking. In more detail, the econometric analysis employs data collected in 34 countries (24 high-income and 10 lower-income) in a multilevel model framework. The data include preferences for environmental policies as well as other relevant information. The results show that social cohesion in the form of trust is positively correlated with support for climate mitigation. Conversely, social protection has positive effects only in high-income countries but not in middle-income countries; this suggests low complementarity between climate and social policies and a higher prioritisation of social goals in lower-income contexts. In sum, the papers in this volume provide support and empirical evidence for different aspects of the relationship between social protection and social cohesion, as outlined in the conceptual framework. However, despite the important contribution that the SI makes to the topic, further research needs to be done on the remaining critical gaps. One of these is the impact of social cohesion on the effectiveness of social protection, as this SI focuses more on the inverse relationship. Such research can definitely benefit from applying the definitions of social protection and social cohesion used by all authors of the articles in this SI. Future research should also address the other gaps outlined in the first section.
Social cohesion has become prominent only in recent years (which affects the availability of data and of programme evaluations in the context of social protection programmes), and the relationship between social protection and social cohesion is not direct and straightforward. These facts make the empirical analysis challenging from a methodological point of view. Therefore, better data, which will hopefully become increasingly available, can improve the empirical evidence and the knowledge of these issues. As a third research gap, it remains to be seen how the ongoing expansion of social protection programmes in the aftermath of the COVID-19 pandemic (Gentilini et al. 2020) can be better linked with the goal of improving social cohesion. | While there is substantial evidence of the effect of social protection on poverty and vulnerability, limited research has focused on societal outcomes. This paper serves as an introduction to a special issue (SI) examining the relationship between social protection and social cohesion in low- and middle-income countries. In recent years, social cohesion has emerged as a central goal of development policy. The introduction and the papers in the SI use a common definition of social cohesion as a multi-faceted phenomenon comprising three attributes: cooperation, trust and inclusive identity. This introductory article provides a conceptual framework linking social protection to social cohesion, shows the current empirical evidence for the bi-directional linkages, and highlights how the papers in the SI contribute to filling existing research gaps. In addition to this introduction, the SI encompasses seven papers, covering different world regions and social protection schemes, and using different quantitative and qualitative methods.
Introduction --- Sociological general theories (or "grand theories") have been criticized for being too abstract to be of much practical use for empirical sociological work. Such a criticism was made by Robert Merton (1968a) of Talcott Parsons' theoretical system, but similar criticisms have been levelled at many other well-known general theories (Münch, 1996; Van den Berg, 1998). It has especially often been claimed that general theories cannot explain phenomena (something that is even deemed scandalous) and are therefore irrelevant to empirical research (Goldthorpe, 2000). This article advances the idea that a sociological general theory may be written around the concept of a "social game", and that this general theory may have an edge over competing general theories when it comes to giving guidance on interpretation, explanation, and translation into middle-range theories. The concept of "game" is used here not as a metaphor (as it is used by many scholars), but as a heuristic starting-point and center for the general theory. 1 A general theory is what Merton (1968a) calls "general sociological orientations", a series of interlinked concepts that may guide the researcher's thinking and be translated, if made more specific, into substantive, "middle-range" theory. This means that general theory cannot be immediately tested empirically; however, neither should it be self-contained or immunized from empirical falsification. There are scholars who would a priori question the utility of such a general theory. On the other hand, general theorizing, if successful, may have important functions: it allows us to summarize sociological knowledge, makes findings from different substantive fields comparable, and, most importantly, may provide ideas and guidance for substantive theorizing and empirical work (Alexander, 1986; Fligstein and McAdam, 2011).
The goal of this article is to show that the theory of social games is as general as other competing grand theories, but that it offers a more straightforward way of being translated into middle-range theorizing and empirical work. 2 The link to middle-range theory and empirical work is created with a descriptive heuristic, an explanatory heuristic, and formal and agent-based modeling. The contribution of the article is thus to offer a highly abstract unifying scheme for qualitative, quantitative, and formal and agent-based modeling in sociology. I will construct the theory by starting with very simple games-for-fun, such as "noughts and crosses" and chess, abstracting their basic properties, and showing how such a model can be applied to social games in general. In doing so, I draw freely on, and integrate, the insights of well-known theorists from different disciplines. My main inspirations come from sociology; in particular, I draw on the work of Goffman, Garfinkel, Elias, and Coleman. Goffman (1961, 1967, 1969) analyzed social life in respect to the ways that individuals-in-roles play-either for other individuals, as in a theater performance, or with other individuals, as in a game. Garfinkel (1967, 2006 (1963)) showed that social games use various layers of both discursive and tacit rules, and that the reproduction of these games rests on a level of general trust that these rules will prevail. Elias (1970) argued that using game models of varying levels of abstraction to analyze the social can help overcome the individual-society dichotomy. Coleman (1969, 1990) realized that the playing of social games leads to emergent outcomes that can be explained by independent game elements and the process of the game. However, important insights regarding games as models can also be taken from the writings of Boudon (1976), Bourdieu (1984), Fligstein/McAdam (2011), Merton (1968b), and Weber (1988 (1922)).
More recently, DiCicco-Bloom/Gibson (2010) and Stachura (2014) have argued that real games such as chess, go, poker, and cycling competitions could help us devise sociological theory. But the theory of social games also draws on insights from disciplines other than sociology. A whole research tradition in economics and mathematics launched by von Neumann and Morgenstern (2004 (1944)) has shown that games-for-fun can be the starting-point for a mathematical modeling of strategic situations, thus leading us to formal models of idealized games. Probability theory was invented by Huygens in the 17th century by analyzing dice games (David, 1955). In philosophy, Searle (1995) used games-for-fun to demonstrate how social reality is both real and constructed, and Winch (2008 (1958)), following Wittgenstein (2003), showed that the understanding of social phenomena resembles the understanding of games-for-fun. Biology and evolutionary social science argue that play is used by both animals and humans to learn behavior useful in later adult life (Bateson, 2005). Humans extend the period of immaturity and let their children play and engage in games-for-fun for an increasingly long time; here, children also learn complex interactions and role identities through playing. 3 This point of view is grosso modo corroborated by anthropologists who study early hunter-gatherer, pastoral, and horticultural societies (Gray, 2012). Finally, in cultural and game studies, Huizinga (1963 (1956)) argued that human culture is in essence game-like, Caillois (2001 (1961)) proposed important ways of classifying games, and scholars such as Klabbers (2009, 2018) have shown how computer games can create whole new worlds.
Creating a general social theory with games-for-fun as a starting-point has been criticized, however, with scholars arguing that, unlike a game (for example, a game of chess between friends), the rules of social life are often complex, ambivalent, and open to different interpretations by different actors; that the actors may not consciously know these rules and sometimes only discover them while playing the game; and that there may be substantial disagreement on the rules, which may be contested and changed by powerful players (Bourdieu, 1980; Garfinkel, 1967; Giddens, 1984; Rawls, 1955). Furthermore, critics have argued that, unlike games-for-fun, situations in social life are extremely complex; actors have to react to cues that belong to various, and sometimes conflicting, frames and contexts; and that a game does not have this complexity (Goffman, 1974). Finally, it has been argued that, unlike in games-for-fun, actors in social life are not in a make-believe world of a game, but in the real world. Thus, they cannot just stop the game, take "time out", or ignore the consequences of their actions (Maynard, 1991). I do not find these criticisms convincing. Contrary to what these critics think, many games-for-fun are in fact complex, ambivalent, and open to different interpretations (Kew, 1992). Rules can be complex and contradictory in improvisational games; the application of rules is routinely challenged in football; when children play games, they constantly discuss the existence and form of rules; and, in Russian roulette and running-for-the-bride, the game may have serious consequences. The problem is that the (implicit) definition of "game" that these critics use is very narrow, and automatically excludes many phenomena of interest. A broader definition of social game would provide us with a powerful tool to understand and explain precisely the phenomena mentioned in the criticisms above.
This outline article can only show the central elements of the general theory. Since every part could be treated in much greater detail, many possible questions must remain unanswered. But there is a rationale for presenting a first overview to see if further work on such a project is warranted. --- Social games Defining social games. A social game is a form of ordering the social sphere in which players with resources use objects to engage in actions, which are shaped by goals, rules, and representations. The social game creates game time, game space, and leads to game outcomes. The game takes place in, and is influenced by, a context. Figure 1 shows the main idea. The arrow loop points to the recursive nature of social games; game interactions lead to new game interactions until the game is finished. Social games operate in a societal context: they "use" actors and their behavior, as well as physical objects, and transform them into players, game actions and game objects with a symbolic reality that would not exist without the game (the dotted lines show this transformation). Thus, when I play rock-paper-scissors, I become a "player", and my fist becomes a "rock". In football, a round leather object becomes a "football", and a person in black becomes the "referee". The ontological status of social games. A note is in order here on the ontological status of social games: social games exist in the real-world, and are at the same time "socially constructed". This problem has long bedevilled social theorists, and much energy has been expended on discussing whether social reality is "real" or "constructed" (Burr, 2015 ;Hacking, 1999;O'Brien, 2006). In the current discussion, the constructionist view is often merged with postcolonial, critical, and discourse theories, while the realist view is often confounded with analytical sociology. 
The theory of social games easily shows that social games are both real and socially constructed (Elias, 1970; Goffman, 1961; Searle, 1995). They exist independently of how social scientists represent or are aware of them, and are thus part of the "real-world out there". Nevertheless, social games exist only insofar as the players themselves believe that they exist and actually play these games. This can be easily demonstrated with a game-for-fun: when I play rock-paper-scissors, my fist is not a real rock. It is socially constructed in the sense that it only represents a rock as long as I and the other players treat it as a rock in the framework of the game. Nevertheless, in that framework, it has its undeniable reality with the real consequence that I can really win or lose the game. But the same point can be made for all social games: a $100 bill is socially constructed in that it is worth $100 only insofar as I and many others believe in its worth-if those beliefs crumbled, I would be left with a worthless piece of paper. Nevertheless, and insofar as these beliefs pertain, I can go to a shop and buy real objects for my $100 bill. Forms of social games. Social games come in a staggering variety of forms, and many different classifications have been proposed (Klabbers, 2009). Social games may or may not have spectators, exhibit external effects, have a function for yet other games, have the same or different goal(s) for the different players, may involve only two or hundreds and thousands of players. Their rules and representations may be consensual or contested, may or may not be known to all the players, etc. For this outline, I focus on two classifications: the distinction of games-for-fun and serious games, and the distinction between "levels" of social games. Games-for-fun and serious games.
A first distinction is between games-for-fun (e.g., chess, football, rock-paper-scissors) and serious games or games that are not played for fun (e.g., staff meetings, emergency services, political campaigns) (Fig. 2a). The main distinction between the two types is the fact that games-for-fun are abstracted from manifest interests and functions in the social world. This is why games-for-fun exhibit a sense of "freedom", "absence of necessity", and "enjoyment" (Caillois, 2001 (1961); Huizinga, 1963 (1956)). Serious games (in this understanding of the term), on the other hand, are seen as belonging to the "real world", where serious work and necessity reign. Apart from this point, however, games-for-fun and serious games exhibit exactly the same properties. The basic assumption made in the theory of social games is that there exists one overall game-like structure of social organization. Games-for-fun are just the emergence of exactly this same form in a mini-format and "for enjoyment". This is why they lend themselves particularly well as models for theorizing. I have found that some people have difficulty in extending the game definition to serious matters such as presidential elections, police raids, or faculty meetings. They may object that calling a faculty meeting, which is arguably often devoid of fun, a game is only true metaphorically. But "fun" is not part of our definition of a social game, and a faculty meeting falls very nicely under the definition of social game that we have given above. Levels of social games. A second classification concerns different "levels" of social games, these different levels being distinguishable according to how players are accepted as players (Fig. 2b).
For example, interactions are formed by players who see each other as present in a concrete situation and as currently playing a game; groups are formed by players who accept each other as members based on certain criteria; and markets are formed by players who buy and sell goods and services from and to each other. In this way, we can distinguish between very different types of social games that are well-known in the social sciences, such as interactions, groups, organizations, networks, movements, milieus, markets, and societal sub-systems (the economy, the polity, science), which are all analyzed as social games. Thus, a conversation between neighbors (an interaction) is just as much a social game as a book club (a group), or a Fridays-for-Future meeting (a movement). Note that this is quite similar to how Luhmannian systems theory sees different levels of social systems (interaction, organization, society) (Luhmann, 1996). I allow more types of games than Luhmann, however, and my criterion for distinguishing the types is different from his. An in-depth treatment of these different types of social games would require another article. It is only important at this point that the theory of social games aims to be very general, and that its fundamental concepts are applicable to phenomena of very different extension. Like competing grand theories, the theory of social games claims to be applicable to the social world in general; social games are thought to exist in all domains and at all levels of the social. However, I do not claim to offer a theory of the social as such (which would require deep treatments of language, communication, social evolution, etc.), but rather a theory of the social whenever it takes the form of social games. In fact, not everything belonging to the social world is a game, most notably the game elements themselves. Thus, the rules of a game are not themselves a game, and nor are the players, the goals, the objects, or the representations.
Individuals may also take individual actions that are not part of an obvious social game. Furthermore, the so-called life-world is not itself a game, but consists of the complicated coupling and nesting of several games. When I go to a Manchester United match with my friends, we form an interaction game that also belongs to a group game (not everyone in our group of friends is present). To enter the stadium, we have to go through security, an interaction game that is part of a larger organization game. When inside, we buy hotdogs and beer (an interaction game that is at the same time a market game). When we watch the match (an interaction game), the teams are each a group game. There is a further interaction game between the public and the teams. Every one of these games could be subject to an in-depth analysis regarding its players, rules, representations, objects, etc. --- Assumptions about individuals. A theory of social games must necessarily make at least six assumptions about the individuals who play such games. I call this actor model "homo ludens" (for a comparable set of assumptions see Fligstein and McAdam, 2011). First, homo ludens speaks and understands a language. Games are language-based, and, without language, the actor could not play a social game (Searle, 1995). Second, homo ludens has basic human needs, such as the need for food, water, clothing, sleep, shelter, security, the sense of belonging, and social worth. Third, homo ludens recognizes social games in her surroundings and can adopt and internalize their goals, understand their representations, and follow their rules, as well as being able, to a certain extent, to explain them causally and predict their outcomes. Much of the waking time of a homo ludens consists in scanning the world for clues of various games. Fourth, homo ludens makes different games and their goals the center of her action, and uses them to fulfill her basic needs and motives.
She does so by identifying her personal goals with the game goals. Thus, homo ludens seeks to gain social worth through being in a group of friends, to earn money through being employed in an organization, and to reach her place of work by driving through traffic. Fifth, homo ludens creates a sense of "who she is", of her own "identity", by monitoring and judging her relative performance in the game and by identifying with a game that she or others are playing. She may also create identity by identifying with the leaders of some of the games that she plays. Finally, homo ludens will try to satisfy her needs as much as possible by expending as little energy/input on a game as possible. She will try to balance her engagement in different games to maximize the satisfaction of her overall needs. This is not to say that homo ludens always calculates in a perfectly rational way. Rather, it is assumed that homo ludens tries overall to "play the games well". These assumptions seem quite uncontroversial, but, should they require justification, we can turn to the literature in socio-biology. Humans have at a certain point in time acquired language and goal-related, rule-guided, symbolic, cooperative action ("games"), and we take it that this is now "human nature" (Harari, 2011; Hauser et al., 2014). As readers will notice, homo ludens combines the two elements of norm-following (homo sociologicus) and rationality (homo oeconomicus) (Elster, 1989a). This is obvious: we could not play a game of chess without at the same time wanting to follow the rules and seeking to choose winning strategies. Also note that, while homo ludens is rational, her preferences are not fixed, but rather are transformed by the game that she is playing. For example, she may be engaged in a game where the goal is to be altruistic or heroic, and where social worth is created by looking out for others more than for herself.
And while she will normally try to strike a balance regarding her involvement in different games, she might become so caught up in a certain game that she no longer satisfies some of her basic needs (e.g., amateur bodybuilders who risk their health by using steroids; spiritual seekers who try to survive on sunlight alone). It is also worth emphasizing that this model of the individual has at its center the symbolic nature of the human being. Social reality, which is made up of social games, is symbolic, and we could not understand even the simplest human game actions (e.g., moving a chess piece) without understanding the game representations in which this action is immersed. --- The elements of social games Players. Games are played by actors in their capacity as players. Actors are individual human beings. A player can be defined as an actor (or a group of actors) who is accepted (voluntarily or involuntarily) by other players as such, and who actually plays the game. Players have game-relevant attributes and roles. Player attributes are the traits of players that are relevant for the game. These include the amount of game resources (e.g., objects, money, land, publications) and the amount or type of social, physical, psychological, or corporal resources or attributes (e.g., gender, intelligence, strength, number of friends, stigmatic appearance). For example, in Monopoly, it is only important how much game money a person has at a certain point in the game, but it is not important whether a person is male or female; on the Titanic, on the other hand, both money and gender were important factors in survival. Player attributes can also be negative, i.e., rules may specify what attributes certain players are not allowed to have. A player role is a bundle of rights and obligations concerning the actions and behavior of the respective player. Thus, in cops-and-robbers, some players are cops and others are robbers.
In football, one player per team is the goalkeeper, while all the others are field players. Resources. The term resources is used to capture all the (both legitimate and illegitimate) means that players may use to achieve the (intermediate or final) goals of the game. Resources are also sometimes called different forms of "capital". Resources do not denote a separate area of the game, but encompass all the game elements described in this article insofar as they help players achieve the goal of the game. Thus, player attributes, rules, representations, context, and even other game goals themselves, may all become, in one situation or another, a resource in a given game. A good tactic that can help a person find resources in a game is to ask herself what she needs to be successful as a player-a list of resources will then come to mind. Resources come in a large variety of forms, and different typologies have been proposed (Bourdieu, 1983; Coleman, 1990; Esser, 2000b; Giddens, 1984). From a social-game perspective, resources comprise objects, cultural knowledge, social capital, mental and physical attributes, positional attributes, but also game and context attributes that a player may use to achieve the goal of the game. In general, forms of resources or "capital" differ strongly according to the game in question. Being tall (an individual corporal attribute) helps with basketball, but not with chess. A profound knowledge of Einstein's field equations (an individual cultural attribute) may be an important resource when taking a physics exam, but will (probably) not help much when chatting someone up in a bar. Actions. An action may be defined as a socially constructed model of a short duration (or "strip") of behavior that is distinguished from other behavior (and thus "counted as" an action) on the part of one or several actors. The distinguishing or "counting as" may happen before, during, or after the strip of behavior.
Examples of actions would be "score a goal", "give a statement in a presidential debate", "ignore somebody", and "chop wood" (the famous Weberian example) (Weber, 1978 (1920)). These models of behavior can be used by actors to plan, conduct, and monitor their own behavior, as well as to interpret the behavior of other actors. We would be unable to conduct our lives if we could not interpret, plan, conduct, and monitor our stream of behavior in terms of these socially constructed models of action. A game action is a model of a strip of behavior by a player that is accepted by other players as being part of a social game. In game actions, players orient their behavior towards the other game elements, i.e., they try to achieve the game goals with game resources and objects, thereby keeping in mind the rules and representations of the game. Game actions are often called "moves". If I "score a goal in football", or "give a statement in a presidential debate", then this is counted as a game action. If I voluntarily "ignore somebody", acting as if that person were not present, and if others perceive this behavior as such, then this action becomes a game action. --- Goals. Games have at least one, but often several, goal(s). The goals of a game can be defined as the typical states, events, or things that players aim for, and which are the reason that they enter a playing relationship with other players. The goal is what the game "is about", what is "at stake" (Bourdieu, 1984; Merton, 1968b). In tennis, for example, the game is about "winning the match"; in a US presidential race, it is about "becoming president"; in science, it is about "discovering new knowledge"; in a chat with a neighbor, it is about having a short and friendly exchange that is not too profound. There is a large array of types of goals, and I can only mention some of the most important distinctions. Goals can be final or intermediate.
In tennis, a player has to win sets to win the match; in a US presidential race, a candidate has to win the primaries to win the presidency. Goals can be competitive, non-competitive, or a mixture of the two. Competitive goals demand that players try to be superior to the other players in achieving the goals; non-competitive goals can and should be achieved without its being intended or even possible to compare the players. Goals in games may apply to individuals or groups (individual vs. team sports); in some games, all the players have the same goals, while, in other games, different types of players have different goals. As can be seen clearly in presidential races, even people or groups that detest each other may share the same game goal. Goals should be distinguished from players' motives to play the game. Social games have the power of channeling players' goal-seeking behavior in a similar direction, but motives to play the game may vary widely. On a first level, there is variation in whether the primary player motivation is to reach the game goal. Most players will play the game to reach the game goal (e.g., tell the funniest joke, rise in the league). But sometimes players may have other motives to play the game (e.g., take part in the church youth group to meet other attractive participants). On a second level, even when players are motivated primarily by the game goal, their motive as to why they want to win may vary widely (e.g., become president to help the country, to fulfill personal psychological needs of grandeur, for personal financial reasons, etc.). The playing of a social game very often involves a mix of motives. As has often been noted, players may also internalize the game goals and fuse them with their innermost motives. Scientists may believe that finding something new is the most important thing in their lives; musicians may think that they could not live without music. Rules. Social games have rules.
These can be defined as instructions that are applied intersubjectively and under certain circumstances to (a) perceive/count a certain phenomenon in certain ways (constitutive rule), or (b) act in certain ways (regulative rule) (Searle, 1995). Thus, a rule may stipulate that the person who was fastest be seen as "the winner" (a rule telling us to perceive/count as), or it may tell us that once one player begins counting to 40, the others have to run away and hide (a rule telling us to act). The rules in a game derive their existence and validity from being shared. A rule is valid if players share the belief that it is valid. In turn, this belief is created by the observation that most of the other players in their actions obey the rules, and that transgressions are either sanctioned or otherwise "repaired". As Garfinkel (1967, 2006 (1963)) has shown, social games use various layers of both discursive and tacit rules. If there are written rules, we often find that there are other (written or unwritten) rules of how to apply the first-order rules. Yet there are still other, often unwritten, rules of how "everybody knows" that these rules and their application really have to be applied (or not) under different circumstances. This phenomenon can be found both in games-for-fun and in social games in general. Rules may be more or less legitimate. Legitimacy may be defined as the correctness of rules in both a cognitive and a normative sense (Esser, 2000c). Rules are legitimate for players if the latter think that they are actually the rules (facticity), and that there are convincing values that show these rules to be "good" (e.g., with regard to fairness, God's will, etc.). Rules may also be typologized according to their form. Following Merton (1968b), we can distinguish prescriptions (what is to be done), preferences (what should preferably be done), permissions (what is allowed to be done), and proscriptions (what is forbidden).
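The two rule types (Searle's constitutive and regulative rules) and the four Mertonian forms can be gathered into a compact sketch. The following is a minimal, hypothetical illustration; the class and field names are my own, not the author's formalism:

```python
from dataclasses import dataclass

@dataclass
class ConstitutiveRule:
    """A rule telling us to perceive/count phenomenon X as status Y in context C."""
    phenomenon: str  # X: what is observed
    status: str      # Y: what it counts as
    context: str     # C: the game in which the counting holds

@dataclass
class RegulativeRule:
    """A rule telling us to act; 'mode' follows Merton's typology:
    prescription, preference, permission, or proscription."""
    mode: str
    action: str

# The two examples given in the text:
winner_rule = ConstitutiveRule("the person who was fastest", "the winner", "a race")
hide_rule = RegulativeRule("prescription",
                           "run away and hide once one player begins counting to 40")

assert winner_rule.status == "the winner"
assert hide_rule.mode in {"prescription", "preference", "permission", "proscription"}
```

The sketch makes the division of labor visible: constitutive rules create game facts (a fist "is" a rock), while regulative rules constrain play; both are valid only insofar as the players share them.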
As such, rules may regulate every aspect of the game, such as the nature of the goal of the game, the kinds of actors that are allowed to be players, and what attributes of actors are game-relevant. Many social games have known ways of breaking the rules, ways of acting that the players of the game find particularly iniquitous: in sports, doping; in science, plagiarizing and fabricating results; in stand-up comedy, stealing material from other comedians; in criminal gangs, snitching. The breaking of rules can lead to different reactions and effects. The rule can be upheld by negative sanctions, which are actions or events that punish the rule-breaker. More minor infringements will normally be dealt with first within the framework of the game itself. Thus, in football, the referee may punish the guilty player by awarding the other team a free kick. Likewise, cheating in an exam at school may lead to the mark "0". More major infringements may also have effects outside the game, as when cheating in a casino is dealt with by the police. Negative sanctions may be applied by other players, by leaders of the group, or by individuals or groups with game roles that involve policing/judging (e.g., referees, police officers, judges). However, there are other ways of reacting to transgression and maintaining the rule. The rule-breaker may try to "repair" the situation by apologizing or by explaining her action through shifting the responsibility elsewhere. If rules are broken and the norm-breakers are not sanctioned, or the norm-breaking is not "repaired" in some other form by apology or explanation (Goffman, 1971), then the rules might simply disappear, such as when littering in public spaces becomes acceptable, or a teacher loses all authority in her classroom. Representations. Games are based not only on rules, but also on representations, which can be defined as signs that signify something else, according to convention and in a public way.
Representations are symbols or associations of symbols (Searle, 1995). The representations of a game are what we could also call its "culture", and this is how cultural sociology is incorporated into social game theory (Smith and Riley, 2008). We can distinguish three types of representation in a game. The first concerns signs for different game elements (rules, objects, players). Objects and events have names (e.g., the "king" in chess); rules come in the form of language (e.g., "Players take it in turns to move a piece"). The second concerns representations that are attached to game elements so that the players can communicate reflexively about the game. Such representations can legitimize, mythologize, systematize, comment on, or critique the game. In chess, there is a large literature on chess tactics; the ritual of Christian communion is linked to various Biblical stories and concepts (the Last Supper, the bread of life). The third type concerns the language used when playing the game. In most games, players have to use language to communicate before, during, and after the game to "pull the game off". Players must greet each other, determine when and where to begin, decide on "whose turn is it next", etc. Games are made out of representations, but they are also immersed in the wider context of language, as well as of other social games and their representations (Searle, 1995). It is important here to understand that social games are by nature representational or symbolic (or "meaningful") (Giddens, 1993; Searle, 1995). What all the different strands of "interpretive" sociology (ethnomethodology, symbolic interactionism, Schutzian phenomenology) have said about interaction is true also of social games. To take away the meaning of the different game elements is to take away the game. Economists versed in economic game theory have sometimes objected that representations are not important.
Once the structure of the game (the pay-off matrix) is fixed, it does not matter what the different options are called. This may be true in certain cases. For example, it is possible to play a game of chess with a board depicting a court with a king and queen, or with figures from Star Wars or Harry Potter, or in the form of birds, or made out of cookies or corks (all these exist). If the figures retain their function, then the form and imagery and "culture" that are present make little difference. Nevertheless, in most social games, representations are of the utmost importance, since these are what give the social game its true meaning. It is their imagery that makes us feel that the game is worth it. If that were not the case, then marketing, branding, and spinning political messages would make no sense. As Weber (1922) wrote: "Not ideas, but material and ideal interests, directly govern men's conduct. Yet very frequently the 'world images' that have been created by 'ideas' have, like switchmen, determined the tracks along which action has been pushed by the dynamic of interest". Objects. An object can be defined as a non-human material entity (including plants and animals). People do not count as objects, nor do ideas or ideational phenomena (freedom, love, God). Games do not always need objects: for example, the "material basis" of paper-scissors-rock or a spontaneous rap battle is provided by the bodies of the players and the sounds that they make, and the game objects in digital gaming are not material entities but digital representations that are encountered in the digital world. Nevertheless, all game elements can be linked to or represented by objects. The goals (or the reaching of the goal) can be represented as objects. In some games, the goal of the game is to obtain an object, as in a raffle or lottery. In other games, special objects symbolize the win: medals, trophies, and pedestals.
Rules and representations are immaterial by nature, but they are often symbolized by objects, written down in books, or engraved in stones. Or the objects may themselves be the signs representing the rules and representations, such as in traffic signs, statues of gods, or crowns. Resources very often come in the form of objects. In games-for-fun, we find gaming pieces, cards, balls, sticks, sportswear, etc. In social games, everything that Marx (1992 (1867)) called the means of production qualifies: factory halls, technical equipment, machines, tools, but also all kinds of objects that represent symbolic power, such as clothing, means of transportation, luxury items, etc. Game space is often symbolized by objects, such as game boards, fields, buildings, fences, border stones, and curtains. Finally, objects can also characterize actors, who may wear uniforms, robes, rings, crowns, colored belts, or have slit ears. Interestingly, objects may also stand for players, as avatars: for example, every player in Monopoly is represented by a small figure (a car, a ship, a dog, etc.), while a person in black magic may use a doll to represent her enemy. Space and time. Concrete games are always situated in time, space, and a societal context. Interestingly, though, they also create their specific game time, game space, and game context. Game time is the time during which the game is played. The beginning, internal temporal structure, and end of a game are often marked by specific actions, for example by uttering words (Ready, steady, go!) or making sounds (a gun shot, a gong ringing, a whistle).
They may be regulated by fixed rules, as when a seminar at a university takes place from 9 o'clock until 10.30. Games very often have an internal temporal structure, such as tennis, where a number of games make up a set and a number of sets make up a match, or a BA degree, where weeks are nested in semesters, semesters nested in years, and years nested in the overall curriculum. Another example is the liturgy of a Catholic mass, which gives the different elements of the ritual a sequence that can be repeated. Game space is the space where the game is played, and is often marked by objects (lines, ropes, steps). The game space is sometimes inside a special building or room (a temple, a parliament, a hospital), and is very often spatially differentiated internally, as when a football pitch is divided into two halves, with each goal having a six-yard box and a penalty area. Outcomes. Games have outcomes, which are the states, events, or dynamics of a game or its context that result from game interaction. They can coincide with the game goals or not, be intended or not, and be measured by the game or not (Boudon, 1982). Other meta-theories call outcomes "explananda" or "effects". Outcomes can take different forms. One type of outcome is the creation or change of a game element. Examples are the occurrence of checkmate in chess, or Hitler's decision to invade Poland on 1 September 1939. A second type comes in the form of a statistic of a game or context variable, often a point measurement, sum, mean, or variance. For example, the number of goals scored by each team in a football match, or the percentage of overall wealth owned by a society's wealthiest 2%. Third, outcomes may also present themselves as the covariance of two game or context variables, often a cross-tabulation, correlation coefficient, regression coefficient, or odds ratio.
For example, the mean difference in the number of goals scored by Manchester United and Manchester City, or the difference in mean income earned by men and women. Finally, outcomes may present themselves as a statistic of the form of the game process over time (e.g., a function). For example, the way that property and money become concentrated in a game of Monopoly, or the way that a medical innovation is disseminated over time. Game outcomes that are created for a higher-level game or the players are called game functions. Thus, a commission may be set up with the function of finding a new president for an organization, a university has the function of educating the elites for the wider society, and a football match may be played for the enjoyment of the public. Some of these functions may be latent, and not consciously known by the players, as when Christmas traditions have the latent function of maintaining the social bonds of families, or when the Kula game helps strengthen social control in Trobriand societies. Of course, the existence of games should not be explained by their function or the needs of the players, as classical functionalism thought possible (Malinowski, 1960 (1944); Parsons, 1977). Current effects (the function) are not the same as historical causes. Nonetheless, some games are consciously set up to fulfill a certain function, the planned function then being one of the causes behind the setting-up of the game. Furthermore, some games are very stable, because their function creates an interest among powerful players or stakeholders, who will counter any attempts to stop the game or change its game elements. Context. Game context consists of all the phenomena outside the game, to the extent that these phenomena were, are, or might in the future be important for the playing of the game. Game context is not everything that exists outside the game, and clearly defining its limits is difficult.
Thus, the invention of the spiked leather running shoe in the 1890s certainly belongs to the context of football, whereas the invention of the flexible vaulting pole in the 1950s does not.

--- Social games and empirical research

The theory of social games is a general theory and cannot as such be tested directly. To render the theory empirically testable, we would have to transform it into a middle-range substantive theory. It is here that the theory has in my view an advantage over alternative theories, since it uses (1) a descriptive-interpretive heuristic; (2) an explanatory heuristic; and (3) formal and agent-based modeling. Descriptive-interpretive heuristic. Social games can be reconstructed with a descriptive-interpretive heuristic. This consists of several questions that can be asked to create a model of the game (Anonymous (year)). The questions are simply constructed by going through the list of necessary elements of a game. We would thus ask: What kind of game is played here? Where can we place this game in the different game typologies? What are the relevant players, resources, actions (moves), goals, rules, representations, objects, game space, game time, game context, and outcomes? Is this game coupled with other games, does it encompass other games, or is it nested within other games? And, if so, how? In practice, this means that, depending on their initial knowledge, researchers will often begin with a rather crude model and tentative game elements that they will then specify during the analysis. The descriptive-interpretive heuristic has to be used in a qualitative manner. To yield ever more valid answers to their questions of what the goals, rules, representations, etc. of this social game are, researchers have to spend time with the social game, use participant observation, conduct interviews, and read documents. Thus, researchers do in a more systematic way what individuals in the everyday world do when they try to learn a new game.
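As a minimal sketch, the question list of the descriptive-interpretive heuristic can be turned into a reusable template that researchers fill in as their game model becomes more specific. The element names follow the text; the partially filled-in answers for hide-and-seek are hypothetical illustrations of a "rather crude model":

```python
# Game elements named in the descriptive-interpretive heuristic.
GAME_ELEMENTS = ["players", "resources", "actions", "goals", "rules",
                 "representations", "objects", "game space", "game time",
                 "game context", "outcomes"]

def game_model_template():
    """Return the heuristic's questions, keyed by game element."""
    return {el: f"What {el} does this game have?" for el in GAME_ELEMENTS}

# A crude first model of hide-and-seek: some elements specified,
# the rest still open questions to be answered during analysis.
hide_and_seek = game_model_template()
hide_and_seek.update({
    "players": "one seeker, several hiders",
    "goals": "seeker: find the hiders; hiders: stay hidden",
    "rules": "the seeker counts to 40 with eyes closed before searching",
})
```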
An additional heuristic trick that proves extremely useful when reconstructing a social game is to ask: What elements would I minimally have to use to create a board or computer game that would create the dynamics and the outcome of interest? This question forces researchers to specify the necessary elements of the game, and often makes them notice previously unobserved assumptions and mechanisms (Coleman, 1969). Readers acquainted with qualitative research will have noticed that the proposed heuristic resembles the "coding paradigm" in grounded theory (Strauss, 2003 (1987); Strauss and Corbin, 2014 (1998)). This paradigm distinguishes conditions, interactions, strategies, and effects, and I will replace it here with our game model as a heuristic starting-point. We call this heuristic "descriptive-interpretive" because at the same time it leads researchers to a description and an interpretive understanding of the central game elements. Understanding an element of a social game (a move, a rule, a representation) means capturing its possible meanings within the framework of the entire social game. For example, I understand the chess rule "castling" if I know under what conditions, with what reasons, and with what resources/objects a player may typically apply it. Thus, understanding a social game means understanding the "game language" and being able at least in principle to play the game. This is similar to what the later Wittgenstein (2003) and Winch (2008 (1958)) proposed. Explanatory heuristic. The game mechanisms of social games can be tested with what I call an explanatory heuristic, which consists of several general hypotheses that steer researchers to useful and more substantive hypotheses and mechanisms that can be directly tested (for a similar endeavor, see Elias (1970)). The hypotheses are created by distilling central sociological insights from the literature and expressing them as game mechanisms.
We do not have space to give all the explanatory hypotheses here, so we point the reader to a companion paper (Anonymous). The goal at this point is just to show how the heuristic functions. We will therefore stick to three examples of hypotheses involving rules, but analogous hypotheses exist for all other game elements (actors, resources, objects, representations, etc.). (H1) Rule change. If a new rule is created in a game and if it is enforced, then it will change the behavior of the players in accordance with the rule. Since rules restrict the chance that actors have of achieving some of their goals, some of these actors may try to find ways around the rule, leading to non-intended effects (Boudon, 1982). The rule-change hypothesis seems obvious, but rule change is the most important way that interventions are effectuated in social games, which we can see very well in games-for-fun. In 1925, football officials changed the offside law, reducing from three to two the number of players needed to make an attacker offside. This was done because the old rules had favored the defending team, who could plan very efficient offside traps, thus increasing the number of stoppages and decreasing the number of goals. The rule change did in fact have the intended effect, with the number of goals scored in the Football League increasing from 4700 in 1924-25 to 6373 in 1925-26. But it also had several unintended effects: for example, the defending team played much closer to their goal, and the attacking team made more use of their wingers. As for non-fun games, rule changes are one of the main types of intervention used in both democratic and authoritarian states, a prominent example being the use of lockdown rules and cards to prove vaccination status during pandemics. The unintended effects of this are of course financial problems for cafés and shops, and the fact that people might begin forging their vaccination cards. (H2) Absence or overuse of sanctioning: anomie.
If transgressions of the rules are not sanctioned in a game, then the rules tend to disappear, and a state of "anomie" ensues (Durkheim, 2009 (1897); Esser, 2000a). Conversely, overuse of sanctioning may have the same effect. Overuse of sanctioning signals that rules are in effect not obeyed by other players and that further disobedience may be expected. In such a situation, players may be encouraged to join in the contestation of authority. Both the absence and the overuse of sanctioning may lead to the collapse of the game, and there are many examples of this hypothesis. In one infamous Chilean football game, the referee showed a red card to a player and then slapped the player across the face when the player confronted him. This led to his losing all authority, with many other players then confronting him and finally chasing him around the pitch in a scene resembling a Benny Hill sketch. Other good examples of everyday anomie are unruly classrooms with teachers who lack authority, or a state of lawlessness in failed states. (H3) Rule advantage: social closure. If a game offers important benefits to players, then people from the outside will try to join the game and share in the benefits. The game's current players will then try to set up entry barriers to keep the benefits to themselves (Weber, 1978 (1920)). Social closure exists with regard to players who try to enter a game from the outside, or to players who try to enter higher-ranked sub-games (e.g., elites, professions) from below. There are numerous examples that illustrate this hypothesis. Pastors try to prevent deacons from preaching the gospel; psychiatrists try to prevent psychologists from prescribing medication; Western countries try to stop immigrants from entering their territory; the aristocracy tries to stop the bourgeoisie from entering its circle.
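Hypothesis H2 lends itself to a toy agent-based simulation. The model below is my own minimal sketch, not an established model from the literature: the shared belief that a rule is valid erodes with every unsanctioned transgression and is reaffirmed by sanctioning, so that without sanctions the rule tends to collapse into anomie. All numerical parameters are arbitrary assumptions:

```python
import random

def simulate_norm(sanction_prob, rounds=50, n_agents=30, seed=1):
    """Return the final shared belief (0..1) that the rule is valid."""
    rng = random.Random(seed)
    belief = 0.95  # initial shared belief that the rule is in force
    for _ in range(rounds):
        # each agent transgresses with probability (1 - belief)
        transgressions = sum(rng.random() > belief for _ in range(n_agents))
        sanctioned = sum(rng.random() < sanction_prob
                         for _ in range(transgressions))
        unsanctioned = transgressions - sanctioned
        # unsanctioned transgressions weaken the rule; sanctions repair it
        belief = min(1.0, max(0.0,
                              belief - 0.02 * unsanctioned
                                     + 0.005 * sanctioned))
    return belief

anomie = simulate_norm(sanction_prob=0.0)  # rule tends to erode
stable = simulate_norm(sanction_prob=0.9)  # rule tends to persist
```

Note that the sketch only captures the absence side of H2; modeling the overuse side would require belief to depend on the observed volume of sanctioning as well.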
Using this type of mechanism heuristic brings us close to the tradition of analytical sociology (Elster, 1989b; Hedström and Bearman, 2009; Manzo, 2010). Analytical sociology is very strong in explanation and methods but has had difficulty in reaching consensus about its central theoretical concepts, especially the definition of "social mechanisms". Against this backdrop, the social-game perspective proposes to define social mechanisms as typical causal relationships in one or several social games. Explaining an outcome of a social game then means showing how a change in a game parameter (i.e., a rule change, a goal change, a change in context) has led causally, via a game mechanism, to a change in the game output. Two types of explanations may be distinguished. A reconstructive explanation accounts for a specific game move or a game process by showing that precisely this game move or game process could have been predicted (or had a high probability of happening) in a specific historical instance. If we combine different specific explanations in a historical chain, then this may result in a historical-genetic explanation of a specific game. We try to reconstruct the game situation at different points in time, look at the options open to different players, and try to understand-explain all (or only the "important") moves made by the players. In this way, we could, for example, explain the outbreak of the French Revolution in a historical-genetic manner. A statistical explanation occurs when we explain variance in game outcomes. Here, we account for the typical statistical effect of a change (or of a difference) of a game element on a game outcome. In this case, we normally assume a game mechanism to be at work, i.e., a typical way in which a combination of game elements creates a specific game outcome through game interaction.
For example, we find that, when a larger ball was introduced in table tennis in 2000 (change of a game rule and game object), the average number of exchanges in a rally increased (Djokic et al., 2019). The mechanism lies in the fact that the larger ball is slower due to more air resistance, which decreases the importance of the difficult serve, increases the chances of players receiving the serve, and allows for more attacking play overall. Both reconstructive and statistical explanations are causal explanations that assume counterfactual causality (Pearl and Mackenzie, 2018; Woodward, 2004). Such explanations make statements such as: "The changing of rule R1 has caused outcome O in such and such a way; and, had we not changed rule R1, outcome O would not have changed in this way". Formal modeling. Social games can be formally studied in the style of economic game theory (Davis and Brams, 2021; Selten, 2001). Game theory can be defined as a "branch of applied mathematics that provides tools for analyzing situations in which parties, called players, make decisions that are interdependent. This interdependence causes each player to consider the other player's possible decisions, or strategies, in formulating strategy" (Davis and Brams, 2021). The main types of game theory are classical game theory, evolutionary game theory, and behavioral game theory, and a further distinction is the game-theoretical analysis of cooperative and non-cooperative games (Breen, 2009). Just like the theory of social games, economic game theory starts with the analysis of games-for-fun (Gesellschaftsspiele) (von Neumann, 1928), and is then extended to a mathematical and economic theory that claims to be applicable to a wide range of social phenomena (Luce and Raiffa, 1957; von Neumann and Morgenstern, 2004 (1944)). The initial idea is that multi-person strategic situations are different from rational action facing nature.
They are like a "game" in which player A faces a player B, who also wants to win the game. Both players know this about each other; the situation is one of circularity. Von Neumann asks what rational action player A (and any other player) should perform in such a situation, and what the outcome of such a game will be if all the players are rational. Von Neumann, and later von Neumann and Morgenstern, show that a certain number of very simple games have clear "solutions" (which, following Nash (1951), are called "equilibria"), i.e., endpoints that necessarily result if all players play rationally. Interestingly, they may also create suboptimal social effects even though all individuals play rationally (e.g., in the game of "prisoner's dilemma"). To be able to calculate the solution of such a game, von Neumann and Morgenstern need to make very strong assumptions: players must be perfectly rational and perfectly informed; the types of "moves" must be well-defined; and the payoffs for each outcome must be fixed. Game theory has had important successes in disciplines such as economics, political science, international relations, and biology, but has been used less often in sociology (Breen, 2009; Swedberg, 2001), with many scholars in the social sciences criticizing the theory, just as they criticize the rational-choice approach, for being "unrealistic" and "irrelevant" (Schmitter, 2009). It is probably fair to say that the games constructed by game theory are strongly simplified and idealized (Little and Pepinsky, 2016): they often assume that information is perfect, that players are perfectly rational, that payoffs are well-defined, and that no other variables influence the game. Most real-world (not-for-fun) games are more complex, however. Rules have many layers (formal rules, actual rules), and different players interpret them differently.
Games are routinely played even though the players only have a very unclear knowledge of a very restricted part of the game, and even if they do not yet understand the main payoffs. It is for this reason that we need the descriptive and explanatory heuristics described above, namely to gain information about complex and constantly changing social games. When it comes to complex real-life games, formal game theory often has only limited applicability. From the point of view of the theory of social games, however, formal modeling and agent-based modeling do have an important function. First, formal modeling may help clarify the deep structure of a certain type of game (e.g., dilemma games, zero-sum games, and certain aspects of games, such as a penalty). Understanding that a certain real-life game has the deep structure of a prisoner's dilemma can be very illuminating. Second, the models created by formal modeling may function as ideal types that can be used to assess real cases by measuring their distance from the ideal situation. They tell us what the pure form of the game looks like, and how perfectly rational players would play it. In this sense, they are normative. Third, formal and agent-based modeling may help us uncover hidden assumptions and simulate how different parameters may lead to different game outcomes. An illustration: Blau's dynamics of bureaucracy. To illustrate the three heuristics, consider the following example: In his fascinating book "The Dynamics of Bureaucracy", Peter M. Blau (1955) describes the very different effects of a new monitoring system (productivity statistics) on two sections of a job-referral agency of a large state bureaucracy. With a technique close to what I have described as a descriptive-interpretive heuristic, Blau reconstructs the structures and processes of the agency with its two sections. In terms of social game theory, he shows us the goals, rules, representations, and outcomes of the social game that is played here.
Agents receive job-seeking individuals with the goal of matching them with job offerings, the outcome being a certain number of job placements per day. In an exploratory manner, Blau shows us the great complexity of the social game being played, a complexity that could only be unearthed with qualitative methods. For example, Blau demonstrates that the official rules and goals set down in official regulations are adjusted for the specific needs and contexts at hand (1955: 24). To give one illustration among many: agents should officially choose the best applicant for a job opening; in practice, however, and since jobs have to be filled quickly and agents are evaluated on the number of placements, such maximizing behavior is never observable. Rather, agents choose the first possible applicant for a job opening (satisficing). Or, to give another illustration, receptionists receiving job-seekers for jobs that have no opening should tell these job-seekers to come back two months later. To minimize tension, receptionists frequently give earlier return dates at their own discretion. With an explanatory technique close to what I have described as an explanatory heuristic, Blau gives several reconstructive and statistical explanations of bureaucratic practices. For example, he routinely uses the heuristic device of checking how rule changes lead to changed intended and non-intended behavior. In one especially interesting case, he shows how the introduction of a new monitoring system leads to non-intended consequences in section A of the agency. The new monitoring system consists in counting the number of placements per agent per day and thus showing every agent's productivity. The non-intended effect is that agents are afraid of being judged negatively if their individual scores are suboptimal. Therefore, they try to increase their placement scores by using "dirty tricks" (hoarding job openings; giving fellow agents false information on job openings).
Conversely, agents in section B react differently. The new monitoring system leads to norms forbidding fast and competitive work, and everybody continues to work with everybody else. Blau explains the difference in reaction by three combined factors: the supervisor in section B puts less emphasis on statistics as a measure of individual productivity than the supervisor in section A; the agents in section B have previously developed a professional code of employment interviewing; and the agents in section B have more job security than the agents in section A. Interestingly, the cooperative section B proves, as a section, to be more productive than the competitive section A. While Blau does not use formal modeling, his analysis makes it very clear that formal modeling could nicely be used to elucidate the deep structure of what is going on in the two sections. The overall situation is one of a prisoner's dilemma, where agents have an incentive to defect (use "dirty tricks"). If everybody uses "dirty tricks", the overall outcome is suboptimal (as happens in section A). Additional factors may lead to the creation of norms that impede defecting, thus leading to a better outcome (as happens in section B). My point is not that Blau uses social game theory (evidently, he does not), nor am I suggesting that his study would have become better had he consciously used the theory of social games; as it is, it is a remarkably good piece of social research. Rather, my claim is that this seminal piece of empirical work can be very well reconstructed with the "grand theory" of social games. The three heuristics are very close to what Blau actually does. The theory of social games thus brings the heuristics implicitly used by Blau into a coherent and explicit whole. But why should one reconstruct the case with a grand theory in the first place? As I have argued above, grand theories have two important functions, and they can be seen in this case.
First, the grand theory may provide new ideas and guidance for studying a specific case. In our example, the theory of social games could not greatly improve the Blau study in descriptive-interpretive and explanatory terms, since the study is already so expertly conducted. Still, we might get the idea of formally modeling the deep structure in the two sections. Second, grand theory summarizes sociological knowledge and makes findings from different substantive fields comparable. Applying the social-game perspective to this case, we see the agency as a social game of the organizational type, where a rule change leads to non-intended consequences of a prisoner's-dilemma type. In a further step, we might, for example, use the case in a more general account of non-intended consequences in organizations. Alternatively, we might engage in comparative case studies of how rules in different social settings are adjusted to specific contexts, both in organizations and in other social games. To give just one example, the filling of life-boats on the Titanic as analyzed by Stolz et al. (2018) is an extremely different phenomenon from Blau's job agency. However, here, too, we find the phenomenon that official rules (women and children first) are adapted to specific circumstances: on the starboard side, since not enough women were present, life-boats were filled up with men. The fact that very different phenomena may be summarized in an overall theoretical framework marks progress in sociological theorizing. To reiterate, the functions of grand theory lie not so much in explaining specific facts better than competing theories as in providing a helpful conceptual and heuristic environment for middle-range empirical research at all stages of the research process. The Blau study is an illustration of how the theory of social games may do this.
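The prisoner's-dilemma reading of Blau's two sections can be made concrete with a small formal model. The payoff numbers and the sanction parameter below are illustrative assumptions, not Blau's data; the point is only that a norm penalizing defection ("dirty tricks") shifts the unique pure-strategy equilibrium from mutual defection (section A) to mutual cooperation (section B):

```python
from itertools import product

def pd_payoffs(sanction=0.0):
    """Payoff dict for two agents; 'sanction' is the norm-imposed cost
    of defecting (illustrative numbers: T=5, R=3, P=1, S=0)."""
    T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker
    return {
        ("cooperate", "cooperate"): (R, R),
        ("cooperate", "defect"):    (S, T - sanction),
        ("defect",    "cooperate"): (T - sanction, S),
        ("defect",    "defect"):    (P - sanction, P - sanction),
    }

def pure_nash(payoffs):
    """Return all pure-strategy Nash equilibria of a 2x2 game."""
    acts = ["cooperate", "defect"]
    equilibria = []
    for a1, a2 in product(acts, acts):
        u1, u2 = payoffs[(a1, a2)]
        best1 = all(u1 >= payoffs[(b, a2)][0] for b in acts)
        best2 = all(u2 >= payoffs[(a1, b)][1] for b in acts)
        if best1 and best2:
            equilibria.append((a1, a2))
    return equilibria

# Section A (no norm): mutual defection is the only equilibrium.
# Section B (norm sanctions defection): cooperation becomes the equilibrium.
section_a = pure_nash(pd_payoffs(sanction=0))
section_b = pure_nash(pd_payoffs(sanction=3))
```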
--- Conclusion The main contribution of this paper is to propose an outline of a new "grand theory" which has a similar level of abstraction to its competitors, but a clearer link to empirical, qualitative, quantitative, formal, and agent-based modeling research. I have outlined a meta-theory for the social sciences called the "theory of social games". Readers acquainted with sociological theory will have noticed that much of what this theory says is based on its integration of ideas from various strands of existing sociological traditions. While incorporating some previous insights directly, the new general theory often also adds a new twist. Thus, the idea of causal game mechanisms is very close to the mechanisms described in the tradition of analytical sociology (Boudon, 1998; Hedström and Swedberg, 1998; Manzo, 2010). What is added is that game mechanisms are assumed to consist of interlinked game elements, and are therefore never only causal, but also symbolic. Likewise, the idea that social games are both real and socially constructed owes much to the writings of Searle (1995). What is added is that such a games perspective can be put to explanatory use, because games have (often quantifiable) outputs that are the causal effects of playing the game. The idea that there are different "levels" of social games is taken from Luhmann (1996), who speaks of "systems" rather than "games". Unlike Luhmann, though, we allow many more forms of social games, and distinguish them according to how individuals become players. To give a final example, the idea that those players who are consistently disadvantaged by playing the game will try to change the rules, while those advantaged by the game will try to preserve and legitimize the rules, is of course inspired by Weber (1978 [1920]) and different field theories (Bourdieu, 1990; Fligstein and McAdam, 2011).
What is added is that this element of contesting the rules of the game as well as other game parameters can be generalized from strategic action fields to games in general, and can be found in children's games, in everyday interactions, and in "societal fields" like art and science. The generality of the theory can be seen in the fact that it starts from a very abstract model of social games that is nevertheless able to capture phenomena at very different social levels: interactions, groups, milieus, movements, networks, organizations are all cast as social games. Phenomena of extreme complexity are seen and analyzed as combinations of nested and coupled social games. The theory can show that social reality is both real and constructed, that social action incorporates both rule-following and instrumental aspects, and that it is both causal and meaningful. But this generality and these insights are not yet what sets the theory apart, since systems theory, practice theory, discourse theory, and structuration theory all have such a high level of generality, and make some or all of these points. The main advantage of the proposed "grand" theory of social games, though, is that it is better able than its competitors to bridge the theoretical-empirical research divide, by using a descriptive heuristic, an explanatory heuristic, and formal and agent-based modeling. The descriptive-interpretive heuristic consists in several questions directly linked to the game elements (e.g., "What are the goals of this game?", "What are the rules and sanctions of this game?", "Who are the actors and what are their resources?"). This heuristic works much like the "coding paradigm" in grounded theory and lends itself very well to explorative qualitative work. It allows researchers to reconstruct a game model, one that is as simple as possible, yet as complex as necessary, and one that the players may not (or only partly) know consciously. 
This heuristic is strong because it is a systematization of how real people learn real games in the social world. The explanatory heuristic consists of several hypotheses, which are again linked directly to the central game elements. This heuristic allows researchers to focus on typical game mechanisms that crop up time and time again in social games. They function like a toolbox of possible "nuts and bolts" that may or may not be applicable in an empirical social game. Explaining an outcome of a social game means showing how a change in a game parameter (e.g., a rule change, a change in game leader, a change in resources) has led causally to a change in the game output. Again, this heuristic is strong because its central elements are straightforward and easily observable, and because this is how players try to have a causal influence on games in social reality. In other words, our explanatory heuristic is a systematization of how real people try to influence real games in the social world. Finally, social games can also be analyzed with formal (mathematical) game theory, which can be very useful when it comes to understanding whether such games have solutions that would be chosen by rational players. Such formal analysis may help clarify the deep structure of a certain type of game (e.g., dilemma games, zero-sum games), create ideal types against which real cases can be measured by their distance from the ideal situation, and reveal other possibilities not (yet?) observed empirically. Agent-based modeling may also contribute to a better understanding of emergent game behavior given various types of initial parameters. Some critics might say: "We already have economic game theory, so why do we need the theory of social games?" My answer is that economic game theory does not exhaust the possibilities of the game model for the social sciences.
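The role that agent-based modeling can play here can be illustrated with a toy simulation of the norm mechanism discussed in the Blau example. Everything in this sketch is a stylized assumption for illustration (the payoffs, the sanction penalty, the imitation rule), not a model drawn from any empirical study:

```python
import random

def simulate(n_agents=20, rounds=200, sanction=0.0, seed=1):
    """Toy agent-based model: randomly paired agents play a prisoner's
    dilemma; with probability `sanction`, a defector is sanctioned by the
    group (a penalty outweighing the gain from defecting), and after each
    encounter the lower scorer imitates the higher scorer's strategy.
    Returns the final share of cooperators. All parameters are illustrative."""
    rng = random.Random(seed)
    strategies = ["C" if rng.random() < 0.5 else "D" for _ in range(n_agents)]
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)
        si, sj = strategies[i], strategies[j]
        pi, pj = payoff[(si, sj)], payoff[(sj, si)]
        if si == "D" and rng.random() < sanction:
            pi -= 6  # sanction outweighs the gain from defecting
        if sj == "D" and rng.random() < sanction:
            pj -= 6
        # imitation dynamics: the lower scorer adopts the winner's strategy
        if pi > pj:
            strategies[j] = si
        elif pj > pi:
            strategies[i] = sj
    return strategies.count("C") / n_agents
```

Runs without a sanctioning norm tend to end in widespread defection (section A), while a strong sanctioning norm lets cooperation spread (section B): compare `simulate(sanction=0.0)` with `simulate(sanction=1.0)`.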
Economic game theory is extremely strong in its domain, i.e., when it comes to formal analysis, experimental research, and simulation. But my point is that games as starting points are also very useful in additional fields, such as when we think about how individuals learn and understand games, how they create their identities, and how they create the symbolic worlds in which we live. Thus, for a vast number of research questions in the social sciences, economic game theory must be supplemented with a sociological take on games. These questions must be addressed with qualitative or quantitative empirical methods, and they may or may not lead to an additional formalization à la game theory. To give just one example: if you want to know about football, reading only game-theoretical accounts of the sport will not be of much use. Other critics might say: "This is all very well: so we can see social reality as a number of interlinked social games. But we could just as well see it as several fields (Bourdieu), social systems (Parsons, Luhmann), configurations (Elias), or structures (Giddens). What is the advantage of starting from scratch with the game concept?" I have argued that the major advantage that the theory of social games has over its theoretical competitors is that it is just as general as its competitors, while having a more straightforward link to middle-range theory and empirical research. This article of course has limits. It is only an outline that sets out the major ideas in a very general way, and it has had to skip many deeper issues, something that is difficult to avoid in an initial sketch of a new theory. Thus, I have only alluded to the different types of games (e.g., interaction, group, milieu, etc.), and to how games may be interlinked (nested, coupled). I have not been able to present the descriptive and explanatory heuristics in full, nor have I been able to go into questions of trust and power.
It is also evident that, while formal game theory and agent-based modeling are already well-established scientific fields, the descriptive and explanatory game heuristics must still prove their usefulness in the future. These limits notwithstanding, I am convinced that there is some promise in developing a general theory of social games, and I welcome both theoretical and empirical studies that develop this new research path further. Data sharing. Data sharing is not applicable to this research as no data were generated or analyzed. --- Received: 3 September 2022; Accepted: 19 June 2023; Notes 1 My use of the concept "social game" is not metaphorical, since I define the concept of game, identify its elements, and show how the concept can be operationalized and put to practical use in the proposed heuristics. 2 Some readers may expect a "theory" to single out a specific area of social life in which it describes and explains phenomena in a novel way. But that is not the goal and function of grand theory (and therefore not of this paper). 3 Among the sociological and philosophical classics on games and their link to social evolution and socialization, Mead (1967 [1934]) takes a special place. 7 The terms "game mechanism" and "game" have to be distinguished, as can be seen from the definitions given. One game may therefore include a variety of game mechanisms (e.g., sanctioning mechanisms, self-reinforcing (de)motivating mechanisms, player-recruitment mechanisms, etc.). --- Competing interests The author declares no competing interests. --- Ethical approval This article does not contain any studies with human participants performed by any of the authors. --- Summary Sociological general theories (or "grand theories") have been criticized for being too abstract to be of any practical use for empirical sociological work.
This paper presents the outline of a general theory that claims to be better linked to empirical social research than previous theoretical attempts. The theory analyzes social life as a multitude of interacting social games. A social game is an entity created by players with resources who engage in action that is shaped by goals, rules, and representations, that involves objects, and that leads to game outcomes. The general theory is as encompassing as previous theoretical attempts, while allowing us to integrate both instrumental and normative action at different levels of the social. Its main advantage is that it is linked to middle-range theory and empirical research by a descriptive-interpretive heuristic, an explanatory heuristic, and formal and agent-based modeling. The article provides many examples to illustrate the claims.
Introduction Despite efforts to improve access to basic resources, 768 million people rely on unimproved drinking water for daily consumption, and an estimated 2.5 billion people lack access to improved sanitation facilities [1]. The link between access to these basic resources and psychosocial outcomes is an emerging area of importance in global health research. A study in Ethiopia found that water insecurity was significantly associated with psychosocial distress (r = 0.22, p < 0.001; one-sided test) [2]. In Bolivia, Wutich and Ragsdale found that gender and the process of accessing water resources were significantly associated with emotional distress, citing fear, worry, anger, and bother [3]. Though the literature focuses on water insecurity, sanitation access presents similar psychosocial risks, particularly for women and girls. In Kenya, Henley and colleagues studied hair cortisol concentrations as a biomarker for chronic stress, finding that concentrations were significantly higher in women who reported feeling unsafe while collecting water or accessing sanitation [4]. In a study of mental health in urban slums in Bangladesh, Gruebner et al. found that elements of the built environment, including access to a better toilet facility, were significantly associated with high quality-of-life scores (WHO-5 scores) [5]. In addition to navigating the built and physical environment for sanitation activities, women face daily struggles with social status, access to resources, and social conflicts [6][7][8]. Time of day and privacy contribute to sanitation-related stress [9]. Moreover, women may have to cope with violence [10,11] or sexual assault and rape [12][13][14] while completing sanitation-related behaviors. The present study seeks to add to the emerging body of research on the impact and determinants of sanitation-related psychosocial stress (SRPS).
Data for this study are part of a larger mixed-methods study exploring women's relationship with sanitation in low-income, infrastructure-restricted settings in Odisha, India. We build upon an initial Grounded Theory study that provided an empirically based, conceptual understanding of SRPS among women of reproductive age in Odisha [15]. Findings from this study suggest that sanitation encompasses a range of behaviors specific to the local cultural context, including ritual anal cleansing, menstrual management practices, bathing, and changing clothes prior to reentering the house after defecation. Sanitation-related psychosocial stressors arise when women are unable to perform these behaviors free from worry, fear, or anxiety. According to the conceptual model proposed in the study, there are three categories of stressors, environmental, social, and sexual/gender-based violence stressors, whose intensity is modified by a woman's life stage, living environment, or access to sanitation facilities. The current study aims to examine and compare stress as it relates to specific sanitation-related behaviors, as well as explore the relative frequency and severity of individual stressors that contribute to SRPS among a sample of women in Odisha. Recognizing that these sanitation-related behaviors and stressors are contextually bound and dynamic in nature, this analysis explores the differential impact of common psychosocial stressors on women living in different geographic settings and occupying differing social roles within the household and community. We selected systematic data collection methods, a broad family of interviewing techniques originally intended to examine tacit knowledge in ethnography and cognitive anthropology, for use in this study [16].
These methods have been used to explore the boundaries and dimensions of specific cognitive domains that may be culturally defined or difficult to articulate, such as kinship terms [17] or medicinal classifications [18,19], and the internal systems of classification that individuals employ. Unlike open-ended interviewing or participant observation, systematic methods entail asking all respondents the same questions and analyzing responses according to emic categories rather than categories imposed by the researcher. For the purposes of this study, the successive application of multiple systematic data collection methods allowed us to simultaneously examine the dynamic nature of sanitation-related behaviors, the relative degree to which these behaviors have contributed to psychosocial stress, and the frequency and severity with which women in the sample and women like them in the broader population have dealt with psychosocial stressors. --- Methods --- Study Sites Access to sanitation in much of India remains scarce, and an estimated 44% of the population practices open defecation [1]. However, access to water and sanitation facilities may vary considerably by geographic context. Therefore, we chose three resource-poor geographic locations in Odisha to reflect differing access to sanitation infrastructure as well as differing social and cultural practices: urban slums, rural villages, and rural tribal villages with a large proportion of ethnically distinct residents. In the urban site, we interviewed women in two slums in Bhubaneswar, the capital of Odisha (population density of 2,134 people per square kilometer). Some slum residents had access to either privately owned or public latrines, but several participants still reported practicing open defecation. Rural women were selected from Khurda district, an agricultural region outside of Bhubaneswar (population density of approximately 800 people per square kilometer).
Low-density, rural tribal villages were selected from Sundargarh District (population density of 216 people per square kilometer), where about half of the population belongs to scheduled tribes (Adivasis) recognized by the Indian government [20] including Oraron, Munda, and Kisan tribes. In local terms, "tribal" is used to describe both the geographically isolated regions and ethnic minority populations, and we use the term "tribal" when referring to women from this site. Both sanitation practices and access to infrastructure vary here compared to rural areas in Odisha, and tribal women were therefore expected to face unique sanitation challenges. --- Sample and selection of participants We purposively sampled women from four life stages that are reflective of social and biological characteristics that influence a woman's place in her household and community: 1) "Adolescents": unmarried women aged 14-24 who had reached menarche and who lived with their parents and extended families; 2) "Newly married women": married two years or less, the majority of whom had moved to a new social and physical geography to join the husband's family household; 3) "Pregnant women": women who identified as pregnant during data collection, for whom pregnancy changed their household roles and created distinct physical needs for sanitation; and 4) "Established adult women": women between the ages of 25 and 45 who had been married more than two years, and were not currently pregnant. This sampling technique, while not providing a proportionally representative sample of the population of women in Odisha, offered us an opportunity to assess life stage-based variance in SRPS in a small sample. --- Data Collection Volunteer community health workers affiliated with the Asian Institute for Public Health (AIPH) identified 20 women at each study site for participation in the study for a total of 60 participants. 
Our stratified, purposive sampling strategy ensured equal representation from each of the four life stage groups of interest (5 women per life-stage group per site) and a sample of latrine users and non-users similar to the general population. A team of four female interviewers trained in systematic data collection methods completed recruitment and data collection. Data were collected from April to May of 2014. We carried out structured interviews that employed two systematic methods: pile sorting and ranking (S1 File). Pile sorting methods have traditionally been used to understand the internal organization of domains through the generation of graphical multidimensional scaling plots [21] or hierarchical clusters [19]. However, the flexibility of these methods to examine the categorization and organization of a range of topics has resulted in innovative adaptations to, for example, explore abstract concepts such as stress in children [22], perceptions of post-traumatic mental health [23], and gender roles [24]. Ranking and rating techniques have been used to develop measurement tools for wealth and wellbeing reflective of local understandings of economic security [25,26] and as participatory tools to engage residents in identifying and prioritizing needs in their communities [27]. Structured interviews began with basic demographic questions about the woman's household, followed by a data collection module on sanitation behaviors and one on stressors. For behaviors, we identified a local taxonomy [28] of sanitation-related behaviors from our initial qualitative study [15] that included defecation, urination, menstruation, post-defecation cleaning (dhua dhoi), post-defecation bathing, changing clothes, and carrying water for use in sanitation. 
Field staff verbally presented participants with seven index cards, each labeled with one of these specific sanitation-related behaviors, and explained each card to the respondent. As interviewers introduced each card, women indicated whether the behavior was part of their typical routine (e.g., pregnant women could choose to include or exclude menstruation, but the choice was up to the participant and we stipulated no rules as to what was applicable). If not applicable, the card associated with a behavior was set aside and excluded from further data collection in the interview. Next, interviewers asked women to 'rank' the stress associated with each behavior, from most stressful to least stressful, using a quick-sort ranking method [16] in which respondents organize items along a specific continuum. The rank order of cards was read back to the participant and recorded by the interviewer. Next, the interviewer shuffled the cards and asked respondents to rank behaviors by freedom, from the behavior they had the most freedom to choose when and how to practice to the one with the least freedom. Rank order was again recorded (S1 Table). For stressors, we presented women with index cards labeled with specific sanitation-related stressors and challenges identified in previously conducted in-depth interviews [15]. Interviewers again verbally presented each card, and women identified cards with stressors that they considered applicable to their typical routines, excluding those that were not applicable from the remaining questions. Next, interviewers asked women to 'sort' the cards into three piles based on how frequently they encountered the problem: always, sometimes, or rarely. The groupings were recorded and the interviewer shuffled the cards for the next question. Finally, participants were asked to 'sort' cards based on perceived severity: high, medium, or low.
After each exercise, interviewers reviewed the rankings or piles and asked participants to describe their reasoning with open-ended questions (S2 Table). Interviewers took detailed notes of both the ranking and sorting outcomes as well as participant responses. Ranking and sorting results were entered into a database (S1 Database), and responses to the open-ended questions were digitally recorded, transcribed, translated, and de-identified. --- Data analysis For sanitation behaviors, ranking data on stress and freedom were modeled using rank-ordered logistic regression by maximum likelihood, specifically with the rologit command in Stata 13.1 [29]. Rank-ordered logistic regression is used to estimate the probability that an item, in our case a sanitation behavior, would be ranked by a respondent as first along the characteristic of interest. Rank-ordered logistic regression accepts incomplete rankings, making it amenable to data where participants can discard some items or, as in our case, exclude inapplicable items, as long as we assume that omitted items are ranked lower along the trait of interest than all items that were retained. Unlike conditional logit models that only account for how often an item was ranked first among a set, rank-ordered logistic regression takes into account all ranks assigned to an item. Therefore, two items with equal numbers of first-place rankings can be differentiated in the rank-ordered model based on how many second, third, etc. rankings they received. Frequency and severity data regarding stressors arising during sanitation practice were interpreted as Likert-type scale ratings. We found that reporting and comparing percentages of "high severity" and "always" responses was sufficient to illustrate variations of concerns across groups. --- Ethical approval Prior to the interviews, all participants provided written consent. For girls under 18, interviewers collected written assent from the participant and written consent from her parent.
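As an aside to the Data analysis subsection: the rank-ordered ("exploded") logit behind Stata's rologit can be written down compactly. The sketch below assumes the standard decomposition of a ranking into successive conditional-logit choices and handles partial rankings by treating omitted items as ranked below all retained ones; it is an illustration of the model's likelihood, not the estimation code used in this study:

```python
import numpy as np

def ranking_loglik(beta, X, ranking):
    """Log-likelihood of one observed (possibly partial) ranking under a
    rank-ordered ("exploded") logit model. X[i] holds item i's covariates;
    `ranking` lists item indices from most to least preferred; items not
    listed are assumed to rank below all listed ones. Illustrative sketch."""
    v = X @ beta                     # latent utility of each item
    remaining = list(range(len(X)))  # items not yet placed in the ranking
    loglik = 0.0
    for item in ranking:
        # conditional-logit probability that `item` is preferred among all
        # items not yet placed (including omitted, lower-ranked ones)
        loglik += v[item] - np.logaddexp.reduce(v[remaining])
        remaining.remove(item)
    return loglik
```

With all coefficients zero the items are equally attractive, so a complete ranking of three behaviors has log-likelihood log(1/3 · 1/2) = -log 6, and a partial ranking placing only one item contributes -log 3.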
Participants were informed of their rights to terminate the interview at any time and to skip any questions or topics that they did not wish to discuss. Names and other identifiers collected during the interview were redacted during the transcription process and the original audio files destroyed. Ethical approval for this study was provided by the Ethical Review Committee at AIPH (ERC Protocol No. 2013-03) and the Institutional Review Board at Emory University (Protocol 00069418). --- Results --- Participant Characteristics Table 1 presents characteristics of the 60 study participants by geographic site. Women ranged in age from 14 to 45 years old. The majority of the participants (73%), and all women in rural areas, identified as Hindu. Access to a private or public latrine was limited; the majority of our participants did not have access to latrine facilities (63%) and were forced to practice open defecation. Latrine access was highest among participants in the urban population. --- Sanitation behaviors Table 2 presents the percentage of women in each geographic region who self-reported engaging in each of the seven sanitation-related behaviors of interest. We assessed whether or not women engaged in these activities to ensure that women only responded to issues that were pertinent to them in the subsequent exercises; these questions were not asked to compare habits of women in urban versus rural versus tribal areas. Women everywhere reported defecation, urination, post-defecation cleaning (of the hands and feet), and bathing as part of normal sanitation practice. Women in rural areas were less likely to report carrying water for sanitation purposes, since many used sites at or near open water sources or walked to a pond or a river to complete their washing.
Only 25% of women (all Hindu) in the tribal site reported changing clothes after defecation, a practice that women reported in previous qualitative interviews to be strongly linked to Hindu beliefs about ritual cleanliness [15]. Stress. Table 3 shows results of the rank-ordered logistic regression analysis for stress and freedom, indicating the probability of a specific behavior being ranked first (most stressful, greatest freedom). We present the data in as raw a format as possible to encourage a more nuanced understanding of the responses than statistics such as modes or measures of dispersion would supply. Menstruation was most likely to be ranked as the most stressful behavior in our total population, followed by defecation and urination. However, the ranking of stress associated with these behaviors varied considerably according to geographic site. For example, menstruation was highly likely to be ranked as most stressful among rural and tribal women, but carrying water was the most stressful aspect of sanitation practice in urban areas. Tribal women were about twice as likely to rank defecation as most stressful compared to urban and rural respondents. Stress rankings also varied by life stage. For adolescents, defecation was ranked as most stressful, followed by menstruation, bathing, and post-defecation cleaning. Menstruation was most likely to be ranked as high stress among newly married and pregnant women. Carrying water was also among the most stressful activities among newly married women, pregnant women, and established adults. Freedom. Daily sanitation activities take women out of the domestic environment in order to access latrines, fields for open defecation, or communal water sources. Women face restrictions dictating when and how they may practice these activities, such as when they leave the household, where they go, and whom they are allowed to go with.
Table 3 presents the probability that a woman ranks a sanitation-related activity as the one she can practice with the most freedom. Overall, women had a high (25%) probability of ranking urination as the behavior with the most freedom, a pattern consistent among all of our geographic and life course groups. The two activities least likely to be ranked as having a high degree of freedom were changing clothes and menstruation. We note some variation in freedom by geographic site and life stage group. Among rural women, the activity most likely to be ranked as having the highest degree of freedom was changing clothes, followed by urination, post-defecation cleaning, and bathing. When comparing across life stages, although urination was most likely to be ranked as most free by adolescents, newly married women, and pregnant women, established adults had a higher probability of ranking bathing as most free. Defecation was ranked with a relatively high degree of freedom by adolescents, newly married women, and established adult women; however, defecation was the least likely to be ranked as most free among pregnant women, indicating that pregnant women may face greater restrictions associated with this practice based on their physical needs and the social and cultural restrictions accompanying pregnancy. Fig 1 provides a visual representation of results, combining data on the percentage of women who reported completing specific behaviors (size of the circle), the probability of a behavior being ranked as most stressful (x-axis), and the probability of a behavior being ranked as having the most freedom (y-axis). Fig 2 depicts this same visualization by life stage and geographic region. Among the total population, we note a clear and expected negative correlation between the probability that a behavior would be ranked as most stressful and as having the most freedom. Only changing clothes is an outlier from this general trend.
This trend is less pronounced when visualizations are developed for each geographic area and for each life stage group. In particular, the graph of rural responses shows a steep association linking high-freedom activities (such as urination and changing clothes) with lower stress, compared to more restricted activities like menstruation with a high degree of stress. In the tribal site, the relationship between stress and freedom was less clear. However, the relative association between activities does follow the general trend (e.g., urination is higher in freedom and lower in stress than defecation or carrying water). [Fig 2 caption: Applicability, stress, and freedom associated with sanitation activities. The diameter of each circle is proportional to the percentage of women who indicated the activity was applicable to them; the location of the center of the circle relative to the horizontal and vertical axes indicates the probability that the activity was rated most stressful and most free, respectively. doi:10.1371/journal.pone.0141883.g002] Conversely, among adult women, the relationship between stress and freedom is slightly positive, and activities less likely to be associated with freedom are more likely to be associated with greater stress.
Table 4 summarizes the results of constrained pile sorting of stressors by the frequency with which each stressor is encountered (always, sometimes, rarely) and its perceived severity (high, medium, low). --- Frequency and Severity Overall, the issues more likely than not to be considered applicable, "always" a concern, and stressors of high severity were rape/assault, distance, reputation, and ghosts. For the minority who considered it applicable, lack of space was also predominantly considered a persistent and severe concern. These stressors span multiple domains related to sanitation-related psychosocial stress [15], including the built and social environments. Lack of space and distance stand out as especially prominent sanitation infrastructure-related concerns, compared to physical barriers. Rape/sexual assault and reputation are distinguished from, for example, being scolded as particularly poignant constructs of the social environment that induce SRPS. Among the most concerning stressors, we also find an example from the domain of cultural beliefs, namely, encountering ghosts. The types of stressors and the frequency and severity with which they were encountered varied by geographic site and life stage group (Table 4). While the majority of women in all sites and life stage groups reported the majority of the 20 stressors as applicable (ranging from 13 among established adult women to 17 among adolescents, and from 14 among tribal and rural women to 18 among urban women), the variation in describing those stressors as frequent or severe underscores the importance of understanding the context in which women encounter SRPS.
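The per-stressor summaries in Table 4 can be reproduced as three proportions: the share of women marking a stressor applicable, and the shares of "always" and "high severity" responses among those women. A minimal sketch with invented records (stressor names follow the paper, but the counts are illustrative only):

```python
# Each record: (stressor, applicable?, frequency pile, severity pile).
# Values are invented for illustration; the study used 20 stressors.
records = [
    ("ghosts", True, "always", "high"),
    ("ghosts", True, "rarely", "high"),
    ("ghosts", False, None, None),
    ("distance", True, "always", "high"),
    ("distance", True, "always", "medium"),
    ("distance", True, "sometimes", "low"),
]

def summarize(records, stressor):
    """Applicability %, plus % 'always' and % 'high' among applicable responses."""
    applicable = [r for r in records if r[0] == stressor and r[1]]
    total = sum(1 for r in records if r[0] == stressor)
    if not applicable:
        return {"applicable": 0.0, "always": 0.0, "high": 0.0}
    return {
        "applicable": len(applicable) / total,
        "always": sum(r[2] == "always" for r in applicable) / len(applicable),
        "high": sum(r[3] == "high" for r in applicable) / len(applicable),
    }

print(summarize(records, "ghosts"))
```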
For example, urban women identified physical barriers (like fences or gates restricting access to sanitation) as more applicable to their sanitation behaviors than rural or tribal women did (30% in urban sites as opposed to 5% in both rural and tribal sites), and half of urban women rated physical barriers as a high-severity concern (compared to 0% of rural and tribal women). Rape/sexual assault was particularly salient in the urban group, where 70% of women said it was a stressor. Among these urban women, 86% were always concerned about it and 100% described it as a highly severe issue. In comparison, only 55% of rural and tribal women identified rape/sexual assault as applicable, and among these women it was less often categorized as "always a concern" (36% of rural and 45% of tribal women), although 64% of women in both groups said it was highly severe. Being seen, a construct of the social environment, had roughly equal applicability across groups (14-16 of 20 women in each site marking it as applicable), but happened infrequently among women in the tribal site (only 6% said it was always a concern in tribal areas, compared to 57% in urban areas and 33% in rural areas) and was seldom considered severe there (20% of tribal women said it was a severe concern, compared to 50% of urban and 27% of rural women who considered it applicable). Males teasing or throwing stones was similarly applicable across geographic sites (7-9 women per site), but perceptions varied greatly: rural women indicated that, even when applicable, it was never a high-severity stressor nor one that was always a problem (0% of rural women placed it in the most severe or most frequent categories); tribal women agreed that males teasing or throwing stones was not always a problem, but reported that, when it was, it was severe (57% indicated it was high severity). The salience of specific stressors also changed by life stage.
Rape was salient to a majority of women in all groups, but in no group was it as often considered salient, frequent, and severe as it was among adolescents. Reputation was a concern shared almost equally by adolescents and newly married women, with 80% in both groups considering it salient and 83% in both groups considering it high severity; half of newly married women and 67% of adolescents viewed it as always a concern. Pregnant women were especially concerned with issues that they perceived to be detrimental to their pregnancies, such as encountering ghosts, a concern that was not as often salient, frequent, or high severity in other life course groups. We note a general positive trend between the perceived severity and perceived frequency of stressors: issues that were commonly ranked as highly severe were also commonly ranked as issues women "always" encounter (Fig 3). Lack of space (in all geographic sites), sexual assault (among urban women), and distance (among tribal women) were likely to be ranked as both high-frequency and high-severity issues. Visualizing the results also shows exceptions to this relationship. For example, when physical barriers were applicable, adolescents ranked them as something they always encounter; however, physical barriers were not likely to be ranked as a severe stressor. Likewise, adult women who ranked ghosts as a stressor were not likely to rank them as a frequent stressor, but often ranked them as a high-severity issue. --- Discussion Using structured data collection methods for this research allowed us to explore the scope and dimensions of key sanitation-related stressors in a more nuanced manner than a survey would afford and in a more systematic manner than exploratory qualitative research would allow. Ranking sanitation-related behaviors from most stressful to least stressful helped us to explore how stress manifests across sanitation activities.
Women consistently ranked menstruation and carrying water as highly stressful activities, contributing to SRPS. Water is an essential component of sanitation-related behaviors in this setting and was used in post-defecation cleaning, bathing, and menstrual hygiene management [15]. In urban areas, women usually rely on shared, public water sources that may be intermittently available, and the burden of collecting and carrying water to a site for defecation or urination was highly problematic. Despite the links between carrying water and other sanitation behaviors, water and sanitation provision in India are often operationalized independently. The delivery and provisioning of water may be coordinated by a state's Department of Public Health and Engineering or by the State Water Board; however, different state-level departments may implement sanitation programs. In theory, India's Total Sanitation Campaign (TSC, 1999-2012) aimed to incentivize user- and community-driven demand for sanitation, but the focus on infrastructure development has been criticized as a top-down, government-led approach [30]. The Swachh Bharat Mission (SBM), the recently launched government-led sanitation campaign in India, has committed billions of dollars to improve sanitation coverage through infrastructure development, user incentives, and community mobilization. However, efforts remain targeted at sanitation infrastructure at the household level. Though the nonprofit and private sectors play a role in increasing water, sanitation, and hygiene services throughout the country, our data show that sanitation behaviors rely heavily on water access, suggesting the need for coordinated interventions among different levels of government and the public and private sectors that respond to the social and physical needs of users.
Furthermore, the majority of sanitation interventions focus on defecation and fecal management and often ignore other sanitation-related behaviors like washing and menstrual hygiene. In addition, though the psychosocial implications of menstruation and menstrual management have been documented among adolescent girls [31][32][33][34], few studies have critically examined the psychological, interpersonal, and social repercussions among older populations. Our data highlight that stress related to menstrual management is particularly salient among newly married and pregnant women. Newly married women described menstruation as highly stressful because they are new to their households and must curtail their regular activities under cultural traditions restricting sanitation behaviors, because they feel uncomfortable talking about menstruation with their husbands and in-laws, and because the physical symptoms associated with menstruation inhibit their normal activities. Similarly, pregnant women described menstruation as highly stressful, even though they were not currently experiencing monthly periods. Newly married and pregnant women living in their in-laws' households face social restrictions surrounding menstruation and all sanitation-related behaviors, such as restricted water access and taboos related to sexual intercourse, cooking, or religious practices during their periods [35,36]. Correspondingly, menstruation was also the least likely to be associated with a high degree of freedom among these women.
Fig 3. The diameter of the circle is proportional to the percentage of women who reported that the stressor was applicable to them. The location of the midpoint of the circle on the horizontal and vertical axes reflects the proportion of those women who indicated that the item was a high severity stressor and high frequency stressor, respectively. Only stressors that were highly applicable, severe, or frequent are included in each graph. doi:10.1371/journal.pone.0141883.g003
Strategies that women may have had as adolescents may need to be renewed upon marriage and relocation into a new household. Our results highlight the dimensionality of sanitation-related stressors. We found that even stressors that occur less frequently may still be high-severity issues, and that the intensity of stressors varies by life stage and geographic location. Examining stress and food security, a recent Food and Agriculture Organization (FAO) study found a relationship between severity and frequency, discussing how more severe indicators of food insecurity (e.g., "Adult did not eat for a whole day") are less frequently noted than less severe items (e.g., "Adult cut the size of meals") [37]. In our study, we similarly found that fewer women encountered some of the stressors that were most severe. For example, sexual assault was not commonly included as applicable, but when included, it was likely to be ranked as a high-severity, high-frequency issue, especially for adolescents and in urban areas. Violence that occurs due to inadequate access to water, sanitation, and hygiene facilities is of increasing concern in the water, sanitation, and hygiene community. Recently, rape and sexual assault associated with sanitation have received more attention in Indian media, explicitly linking lack of sanitation facilities with violence, rape, and lack of safety for women [38][39][40]. A review of literature examining gender-based violence and WASH shows how sensitivity, secrecy, and the complexity of violence inhibit the collection of reliable data, and the authors advocate for building an evidence base grounded in systematic, ethical evaluation of WASH-related violence [41]. Our research identified violence and sexual assault as high-severity stressors, but further research is needed to quantify the scope of the problem and suggest interventions.
Beyond the physical and social stressors associated with sanitation, this study illustrated how fear of ghosts was also perceived to be highly severe, especially among rural, pregnant, and adult women. The high severity of this issue may be due to local, traditional beliefs linking miscarriage to encounters with ghosts. Though we were unable to find studies specific to Odisha, an ethnographic study by Pauline Mahar Kolenda of sweepers in North India discusses a range of anxieties related to ghost and supernatural encounters, including the attribution of miscarriages to malevolent female ghosts [42]. This example highlights the usefulness of examining the stratification of stressors, especially when culturally significant proscriptions impact sanitation behaviors. Understanding dynamic sanitation behaviors, stressors, and the severity attributed to them is essential for informing practitioners about the context and implications of intervention. Identifying how stressors are related to location and life stage may help assign priorities in creating safe sanitation spaces. For example, for newly married women, physical barriers were less likely to be ranked as highly severe than for women in other life stage groups. Women in our study occasionally mentioned special places near the home where newly married women could defecate, and in some cases improvements to the home are used in negotiating a marriage. In rural Haryana, India, access to sanitation was used as bargaining power in a campaign called "No Toilet, No Bride," minimizing social restrictions for newly married women during sanitation and improving standards for sanitation access [43]. This example suggests that interventions focused on physical barriers are more urgently needed for adolescent, pregnant, and established adult women than for newly married women.
In advocating for a contextualized, gender-sensitive approach to sanitation, our research findings inform future study of SRPS, illustrating key differences across life stages and social settings. Additionally, given the numerous ways women experience stress related to sanitation, further study may illuminate factors that ameliorate stress. Using systematic data collection techniques helps to generate a range of factors and then explore them to identify relevance, key priorities, and more nuanced dimensions like stress, severity, and frequency. Women in different parts of India face distinct constellations of stressors, whose severity depends on physical surroundings, life stage, and access to sanitation facilities. Understanding the dynamics of how social geographies and life course stages shape women's sanitation experience may help to tailor sanitation interventions given cultural and geographic diversity. --- Strengths and Limitations The systematic data collection methods employed in this study helped us to explore sanitation-related psychosocial stress using an interactive format and to generate comparisons between women of different ages living in different geographic locations. The results highlight some key areas that can help to inform future research on sanitation related to mental health; however, more research is needed to develop locally relevant psychometric scales. We recruited five women per life stage group per site for 60 total participants, allowing us to examine results in both social and geographic groupings. However, a larger sample size may afford more granularity in examining trends by life stage group and geographic site simultaneously (i.e., urban adolescents vs. tribal adolescents). It would also be valuable to explore the relationship between freedom and stress using a larger sample size.
Additionally, some of the sanitation behaviors and stressors are shaped by cultural practices and socially defined roles, so the generalizability of some of our findings may be limited to low-resource settings in India. --- Conclusions Factors contributing to SRPS differ by life stage and geographic site, and the context of sanitation must be understood to inform successful sanitation interventions. Understanding the network of factors, relationships, and activities influencing mental health and feelings of distress gives us a more nuanced understanding of the ways women negotiate their sanitation environments. Further research measuring SRPS may help to significantly inform sanitation interventions, signposting key areas for infrastructural development and behavior change messaging. --- All relevant data are within the paper and its Supporting Information files (database and interview guide). --- Abstract Emerging evidence demonstrates how inadequate access to water and sanitation is linked to psychosocial stress, especially among women, forcing them to navigate social and physical barriers during their daily sanitation routines. We examine sanitation-related psychosocial stress (SRPS) across women's reproductive lives in three distinct geographic sites (urban slums, rural villages, and rural tribal villages) in Odisha, India. We explored daily sanitation practices of adolescent, newly married, pregnant, and established adult women (n = 60) and identified stressors encountered during sanitation. Responding to structured data collection methods, women ranked seven sanitation activities (defecation, urination, menstruation, bathing, post-defecation cleaning, carrying water, and changing clothes) based on stress (high to low) and level of freedom (associated with greatest freedom to having the most restrictions).
Women then identified common stressors they encountered when practicing sanitation and sorted stressors into constrained piles based on the frequency and severity of each issue. The constellation of factors influencing SRPS varies by life stage and location. Overall, sanitation behaviors that were most restricted (i.e., menstruation) were the most stressful. Women in different sites encountered different stressors, and the level of perceived severity varied based on site and life stage. Understanding the influence of place and life stage on SRPS provides a nuanced understanding of sanitation and may help identify areas for intervention.
--- Background Alcohol is a psychoactive substance that can produce addiction and dependence [1]. Chronic alcohol use is associated with a myriad of negative health outcomes, including damage to the central nervous system [1]. Alcohol use is causally associated with more than 200 diseases and injuries and is a major contributor to mortality globally [1]. The WHO global status report on alcohol and health reported that 3 million deaths and millions of disabilities are caused by alcohol consumption each year, which constitutes over 5.3% of deaths worldwide [2]. However, drinking behavior remains very common, especially in China. In 2015-2016, the prevalence of alcohol use in China, defined as the percentage of people who have drunk alcohol in the past 12 months, was 43.7% among adults 18 years of age and older. Prevalence is higher among adult men (64.5%) than adult women (23.1%) [3]. Multiple factors are associated with alcohol consumption, including biological [4], sociocultural [5], and psychological factors [6,7]. For example, genetics appears to play a critical role in alcohol dependence and consumption. Polymorphisms in alcohol dehydrogenase genes, specifically, can lead to an increased risk of alcohol dependence [8][9][10]. Ecological Systems Theory posits that people's behavior is influenced by nested ecological systems: the microsystem, mesosystem, exosystem, and macrosystem. Macrosystems, which refer to the culture, subculture, and social environment, are of particular interest here [11]. The macrosystem in China, where there is a strong historical influence of Confucian culture, may figure prominently in Chinese alcohol use, particularly among men. Confucian culture emphasizes developing and maintaining social bonds via gift exchange, which is seen as a social norm [5,12,13]. Chinese consumers spend more money on alcohol when purchasing it as a gift than when purchasing it for their own use.
This may reflect a desire for people to be perceived favorably among their peers [14]. This idea is further supported by findings that gifting alcohol serves as a mechanism to maintain good relationships with elders and promote camaraderie among peers [15], especially in higher social classes [16]. Gender ideals are also associated with alcohol use: men who consume alcohol are seen as full of masculine charm and loyalty [15,17]. While global per capita alcohol consumption rose from 5.5 L in 2005 to 6.4 L in 2016, per capita alcohol consumption in China rose to an even greater extent, from 4.1 L in 2005 to 7.2 L in 2016. This increasing rate of per capita alcohol consumption may indicate a great challenge for alcohol control in China [1]. Identifying factors associated with alcohol gifting behavior in China may inform future interventions. However, there is little published literature on this topic. One of the few existing studies showed that spending more money on wine gifting in China is associated with younger age and higher education [18]. Top reasons for consuming wine included business, while a top reason for purchasing wine was gifting. It should be noted, however, that these relationships often vary by region [18]. Although this study offers some insight into demographic correlates, the influence of the social environment should also be considered. Social Capital Theory predicts that individuals with strong social capital inherently have access to more supportive resources and have a higher capacity to utilize them [19,20]. Higher social capital can result in the spread of health information, such as information about the harms of drinking, via social networks, which can influence health-related behaviors [21]. According to Social Capital Theory and Behavioral Accessibility Theory, alcohol gifting as a norm may further increase contact with and consumption of alcohol for both non-drinkers and drinkers attempting to quit.
It is also important to consider the potential adverse consequences of alcohol exchange in addition to factors and characteristics associated with alcohol gifting. Given the hypothesis proposed by previous empirical research [22], gifting alcohol may be associated with alcohol use. However, there is a lack of evidence-based studies that quantitatively identify the relationship between alcohol gifting behavior and potential hazardous behavioral outcomes, such as alcohol drinking and tobacco smoking in China. Although there is no research on this relationship in China, there have been studies assessing this relationship in other countries. Reviews from the U.S. identified a significantly higher risk for alcohol misuse among those who use tobacco [23]. Nationally representative data from the U.S.-based Add Health Survey also found a high prevalence of polysubstance use behavior, including the use of alcohol, marijuana, and cigarettes among adolescents in 2008 [24]. Polysubstance use of alcohol and tobacco is particularly concerning because they enhance the effects of each other, a reaction that tobacco and alcohol companies have exploited to promote sales [23]. The purpose of this study is to investigate the prevalence and correlates of alcohol gifting, including associations with social capital. We additionally aim to explore whether alcohol gifting is associated with alcohol or tobacco consumption in China. We employ quantitative analysis on a large sample at the regional/provincial level to inform evidence-based alcohol control practices relevant to China's alcohol gifting culture. --- Methods --- Study design and participants A multistage sampling design was utilized in this study and the sample consisted of the heads of households (HHs) from two provinces in China. Guangdong and Shaanxi Province were selected based on their regional diversity and existing research collaboration. 
Guangdong is a southeastern coastal province with a population of 126.84 million and $14,546 per capita GDP, whereas Shaanxi is a northwestern inland province with 39.54 million people and $11,153 per capita GDP in 2021 [25]. HH refers to the head of the family on the household register. In China, the head of the household is the person in charge of the current household [26]. One university each from Guangdong and Shaanxi Province was selected based on their regional diversity and existing research collaboration with the primary investigators. Within the two universities, all students who had health professional courses were invited to collect data as investigators. A survey link was given to all eligible students, which they were encouraged to distribute to their parents. In total, 982 HHs from Guangdong Province and 530 HHs from Shaanxi Province consented to participate in the study. More detailed information on the sampling and recruitment process can be found in Wu, et al. [22]. The online survey was developed on the Wenjuanxing Platform (https://www.wjx.cn/app/survey.aspx) and conducted from April 30 to July 30, 2020. The study protocol was approved by the Ethics Committee of Guangdong Medical University, and all participants provided written informed consent before they began the survey. --- Measures --- Socio-demographic characteristics Socio-demographic information was collected, including age, gender, place of residence, ethnicity, marital status, educational attainment, and per capita annual family income. Given that the participants in this survey were parents of college students, who are generally between 45 and 50 years old [27], age was divided into the categories "< 45", "45-49", and "> 49". --- Social capital Participants' social capital was assessed using the 12-item Social Capital Questionnaire [28], which has acceptable internal reliability. The higher the score on this scale, the greater the social capital.
The Social Capital Questionnaire assesses the three factors of social capital: cognitive social capital, social participation, and social network. We analyze the subscales separately. The sub-scale for cognitive social capital contained four questions, and the Cronbach's alpha coefficient was 0.786. The sub-scale for social participation contained four questions, and the Cronbach's alpha coefficient was 0.805, suggesting good reliability. Social network was assessed by ascertaining the number of good friends, trustable classmates, helpful neighbors, close relatives, and cooperative partners. Cronbach's alpha coefficient for social network was 0.827, which suggests good reliability of this sub-scale. --- Drinking status Drinking status was ascertained by asking respondents on how many days they drank during the past month using the following response options: Yes, drank every day; Yes, drank on one or more days, but not every day; no days [29]. Daily drinkers were defined as drinking every day, occasional drinkers were defined as drinking on one or more days, but not every day, and current nondrinkers were defined as those who did not drink in the past month, including never-drinker and former drinker [30,31]. Categories for occasional drinkers and daily drinkers were combined and compared to non-drinkers to form a dichotomous indicator for drinking status. --- Smoking status Smoking status was ascertained by asking respondents on how many days they smoked during the past month using the following response options: Yes, smoked every day; Yes, smoked on one or more days, but not every day; No days. Participants who smoked every day were classified as daily smokers while those who smoked on one or more days, but not every day were classified as occasional smokers. Current non-smokers were defined as those who did not smoke in the past month, including neversmoker and former smoker [32][33][34]. 
Categories for occasional smokers and daily smokers were combined and compared to non-smokers to form a dichotomous indicator for smoking status [22]. --- Gifting alcohol behavior Gifting alcohol included two types of behaviors, offering and receiving alcohol. Offering alcohol was defined as offering at least one unopened bottle of alcohol as a gift to others in the past year. Receiving alcohol was defined as receiving at least one unopened bottle of alcohol as a gift from others in the past year. --- Data analysis The data were exported from the survey platform to Microsoft Excel and then uploaded to SPSS (version 22.0) for statistical analysis. Descriptive statistics for sociodemographic characteristics, social capital, and alcohol gifting behaviors are reported. The significance of differences between offering and receiving alcohol gifts across socio-demographic characteristics was determined using Chi-square analyses. Variables whose differences reached statistical significance were included in a multiple logistic regression. The significance of each coefficient in the model was determined using the Wald test. Adjusted odds ratios (AORs) were used to express the odds of offering/receiving alcohol compared to the odds of not offering/receiving alcohol for each covariate, controlling for other covariates in the model. To determine the association between gifting alcohol and alcohol use and cigarette use, six logistic regression models were constructed in which substance use was the outcome. The demographic characteristics that were significantly associated with smoking and drinking in the univariate analysis were included as covariates in the six multiple logistic regression models. Model 1 assessed the relationship between offering gifted alcohol and alcohol use. Model 2 assessed the relationship between receiving gifted alcohol and alcohol use. Model 3 included both offering and receiving gifted alcohol as covariates, with alcohol use as the outcome.
Model 4 assessed the relationship between offering gifted alcohol and tobacco use. Model 5 evaluated the relationship between receiving gifted alcohol and tobacco use. Model 6 included both offering and receiving gifted alcohol, with tobacco use as the outcome. --- Results --- Individual sociodemographic characteristics and drinking behavior The average age of the participants was 47.8 (SD 9.3) years, with 39.2% of the participants aged 45-49 years. Participants were predominantly male (82.5%) and married (88.2%). Additional sociodemographic characteristics are shown in Table 1. Almost half of the participants reported being current drinkers, of which 6.2% were daily drinkers and 38.4% were occasional drinkers. --- The correlates of alcohol gifting The study showed that 43.5% of participants had received alcohol, and 29.9% had offered alcohol. The results from the Chi-square tests in Table 1 demonstrated that age, gender, marital status, education, region, social network, and social participation were all significantly associated with both offering and receiving alcohol. Place of residence and household annual income were only associated with offering alcohol, while smoking status and cognitive social capital were only associated with receiving alcohol. Ethnicity was unrelated to either offering or receiving alcohol. The results from the multiple logistic regression analysis in Table 2 further showed that participants aged 45 to 49 years were more likely to offer alcohol than those aged 50 years or older (AOR = 1.4; 95% CI: 1.07-1.83). Meanwhile, those younger than 45 years were less likely to receive alcohol (AOR = 0.71; 95% CI: 0.52-0.98) than those aged 50 years or older. Male household heads were 2.34 times (95% CI: 1.51-3.61) more likely to offer alcohol and 1.40 times (95% CI: 1.02-1.93) more likely to receive alcohol than female household heads.
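The AORs reported here come from multiple logistic regression fitted in SPSS; for intuition, the crude (unadjusted) odds ratio and its Wald 95% confidence interval for a single dichotomous factor can be computed directly from a 2x2 table. The counts below are invented for illustration, not taken from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented example: 20 of 30 male heads offered alcohol vs 10 of 30 female heads.
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```

An adjusted odds ratio differs from this crude one because each regression coefficient is exponentiated while holding the other covariates in the model constant.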
Participants from Shaanxi Province had higher odds of offering alcohol (AOR = 2.32; 95% CI: 1.81-2.97) and receiving alcohol (AOR = 1.42; 95% CI: 1.12-1.81) than participants from Guangdong Province. Drinking status and social participation were also significantly associated with offering and receiving alcohol. Participants who were daily drinkers (AOR offering = 2.69, AOR receiving = 4.01) or occasional drinkers (AOR offering = 3.05, AOR receiving = 3.91), or who had a higher frequency of social participation (AOR offering = 1.84, AOR receiving = 1.37), were more likely to both offer and receive alcohol as a gift. We also observed that participants whose annual household income was more than one hundred thousand yuan were more likely to offer (AOR = 1.9; 95% CI: 1.30-2.78) and receive (AOR = 1.61; 95% CI: 1.11-2.36) alcohol than those with an annual household income of less than 20,000 yuan. Similarly, those whose annual household income was between 80,000 and 100,000 yuan were more likely to receive alcohol (AOR = 1.81; 95% CI: 1.17-2.83). In addition, we observed that married participants (AOR = 1.62; 95% CI: 1.10-2.38), participants with an education level of junior high school (AOR = 1.61; 95% CI: 1.14-2.27), and participants with a large social network (AOR = 1.27; 95% CI: 1.01-1.58) had higher odds of receiving alcohol than those who were not married, those with a junior college education or higher, and those with a small social network, respectively. --- Association between gifting alcohol and drinking and smoking status The results in Table 3 demonstrated that receiving alcohol was associated with current alcohol use (AOR = 2.68; 95% CI: 2.16-3.34) and current tobacco use (AOR = 1.38; 95% CI: 1.10-1.72), while offering alcohol was only associated with current alcohol use (AOR = 3.06; 95% CI: 2.41-3.89), adjusting for sociodemographic characteristics and social participation.
In addition, both alcohol offering and receiving remained significantly associated with drinking and smoking status even when controlling for the other gifting behavior. Household heads who offered (AOR = 2.16; 95% CI: 1.63-2.85) or received alcohol (AOR = 1.87; 95% CI: 1.45-2.41) had higher odds of being current drinkers than those who did not offer or receive alcohol. Additionally, those who received alcohol were more likely to be current smokers (AOR = 1.64; 95% CI: 1.25-2.14), while those who offered alcohol were less likely to be current smokers (AOR = 0.71; 95% CI: 0.53-0.95). --- Discussion Research on alcohol has generally focused on its use or overuse as a psychoactive substance [3,35], meaning there are few studies on alcohol gifting behavior. In this study, two provinces in southern and northern China were selected to explore alcohol gifting, its associated factors, and behavioral outcomes. We also distinguished between actively offering and passively receiving alcohol gifts. This study showed that nearly half of the participants had received alcohol and nearly one-third had offered alcohol, suggesting that alcohol gifting is common in China. There are some differences in alcohol gifting across sociodemographic characteristics. Our research showed that men were more likely than women to offer and receive alcohol. We posit two potential explanations for this difference. First, there are sex differences in drinking behavior in China, where drinking frequency is higher in men than in women [3,35]. This higher drinking frequency in men might explain why alcohol gifting is more common among men. Second, compared to women, men are more likely to participate in social interactions where alcohol consumption is normative in the Chinese socio-cultural context, especially on business occasions. In addition, the study also suggests that married people have higher odds of receiving alcohol.
Consistent with this finding, a previous study of Chinese drinking behaviors showed that being married is associated with greater alcohol consumption [36]. We hypothesize that married people may be more invested in maintaining interpersonal relationships than those who are not married, especially on special holidays when alcohol gifting is common. Married people may have more social and family ties that carry gift-giving expectations, and therefore may be more likely to receive alcohol as a gift. Moreover, Chinese society emphasizes filial respect, and gift-giving is a way for the younger generation to show respect for the elder generation. As married people are generally more mature and have higher status within the family hierarchy, they may be more likely to receive gifts such as alcohol. The finding that participants with a high level of social participation were more likely to give and receive alcohol is consistent with the role that alcohol plays in Chinese culture, where alcohol consumption is commonly involved in social interaction. Chinese people traditionally consider drinking an important tool of social contact and emotional expression. Alcohol often accompanies business meetings, social activities, weddings, funerals, holidays, and other special celebrations [37]. Gift giving can reduce uncertainty while producing positive emotions, social cohesion, and commitment [38]. Feelings of obligatory reciprocity often accompany gift-giving, even when altruistic motives are also present [39]. As a consequence, those who have received an alcohol gift may feel obligated to reciprocate by offering the gift-giver help, stronger emotional ties, and the like. While social participation was found to be significantly associated with offering alcohol, we did not find a significant relationship between offering alcohol and either cognitive social capital or social network size.
Social Exchange Theory holds that all human behaviors are exchange behaviors, and gift-giving is a social exchange behavior as well [40]. In other words, the cognitive perception of social capital and the extent of one's social network might not, by themselves, promote gifting behaviors. Perhaps gift-giving can be promoted only through social engagement, where there is real interaction with people in the context of social participation. The behaviors of offering and receiving alcohol were also related to annual household income. Higher annual household income indicates higher economic status. It was previously illustrated that individuals of higher economic status are more likely to offer expensive wines to demonstrate their prestige and high social standing [41]. Our results offer additional support for the relationship between higher SES and alcohol gifting. This study found that, compared with Guangdong, a southern coastal province of China, alcohol offering and receiving are more common in Shaanxi, an inland province in northwest China. This may be partially explained by regional differences in drinking prevalence. According to a study on regional differences in alcohol consumption in China, the prevalence of regular drinking in the northern region is higher than in the central-southern region [42]. It is possible that northerners perceive drinking as an effective way to cope with cold weather, and northern culture emphasizes hospitality, with frequent gatherings and exhortations to drink [43]. In addition, Guangdong has higher economic and cultural development than Shaanxi owing to the advantages of economic reform and its coastal location, which brings more foreign trade activity. This higher level of economic development may be associated with receiving more information about the harms of drinking, leading to more concern about its effects on health and avoidance of alcohol [44].
This difference may also be related to cultural differences in gifting between the North and the South. Drinking status was also found to be strongly associated with giving and receiving alcohol. Behavioral Susceptibility Theory posits that a behavior will gradually increase when that behavior is convenient [45]. Drinkers are more likely than non-drinkers to approve of drinking and may have more regular, convenient access to alcohol. Drinkers may therefore be more inclined to choose alcohol as a gift. In another study in China, smoking outcomes were associated with cigarette gifting behaviors [22]. Notably, a similar relationship was found in the current study, where gifting alcohol was significantly associated not only with drinking but also with smoking. Many studies have demonstrated that tobacco and alcohol are complementary products and that co-use is common [46][47][48][49]. Drinkers are also more likely to smoke cigarettes than nondrinkers [50]. These findings suggest that receiving alcohol as a gift may facilitate the consumption of addictive substances, including tobacco and alcohol. With regard to policy implications, the results of the current study can be used to inform prevention and intervention. First, alcohol gifting is associated with higher odds of current drinking and current smoking. Previous studies have suggested that limiting alcohol advertising is an effective intervention to control drinking [51,52]. While it might be difficult to ban all alcohol advertising, restrictions could be pursued that prevent advertising from using gifting themes and imagery. Interventions that teach people how to refuse alcohol as a gift and suggest alternative gifts could also be pursued.
Such interventions should be targeted at specific populations with higher odds of alcohol gifting: for example, people who are male, married, currently drink alcohol, reside in the northern region, have larger social networks and more social participation, or have higher economic status. Finally, given the differences in alcohol gifting between North and South, local alcohol gifting culture should receive attention when formulating policies and intervention programs, particularly in the northern region. --- Limitations Some limitations should be considered. First, the cross-sectional design precludes causal inference. Additionally, self-reported questionnaires are vulnerable to recall bias and social desirability bias. Second, selection bias might misrepresent the prevalence of alcohol offering and receiving because the sample included only household heads whose children were college students. Moreover, the results may not generalize to the entire country, and the selected provinces might reflect north-south cultural differences due to their geographical and economic characteristics. --- Conclusion In summary, the present study used a multistage sampling design to study alcohol gifting through both offering and receiving alcohol as a gift. The results showed that gender, household annual income, province, drinking status, social participation, and --- Availability of data and materials Because of the intellectual property policy of the funding body, the datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request, subject to approval from the funding body. --- Authors' contributions L.Z. and L.H. drafted the manuscript. D.W. participated in the conception and design of the project. D.W. and L.Y. participated in data collection for the study. D.W. conducted statistical analysis. L.Z., L.H., C.J., and C.W. edited and revised the manuscript.
All authors reviewed and approved the final version of the manuscript. --- Declarations Ethics approval and consent to participate The study protocol was approved by the Ethics Committee of Guangdong Medical University (Approval Number: 2019-050). Written informed consent was obtained from all participants prior to the administration of the questionnaire, and data were treated with confidentiality. All procedures were performed in accordance with relevant guidelines. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | Introduction: Alcohol gifting is a very common practice in China. However, little is known about the potentially adverse consequences of alcohol gifting. This study aimed to investigate the prevalence of, and factors associated with, alcohol gifting, and explore whether drinking and tobacco use were associated with alcohol gifting. Methods: Using a cross-sectional multi-stage survey, a sample of 982 household heads from Guangdong Province and 530 household heads from Shaanxi Province was collected online from 30 April to 30 July 2020 in China.
Participants completed questionnaires regarding socio-demographic characteristics, social capital, drinking status, and alcohol gifting behavior. Chi-square analysis and multiple logistic regression analysis were used to identify the factors associated with alcohol gifting and its relationship with alcohol and cigarette use status. Results: Multiple logistic regression analysis showed that age, gender, household annual income, province, drinking status, and social participation were prominent correlates of both offering and receiving alcohol. Participants who were married, had a junior high school education, or had a large social network had higher odds of receiving alcohol. When both alcohol gifting behaviors were included in the models, participants who offered alcohol had 2.15 (95% CI: 1.63-2.85) times higher odds of current drinking than those who did not offer alcohol, and participants who received alcohol had 1.87 (95% CI: 1.45-2.41) times higher odds of current drinking than those who did not receive alcohol. Those who received alcohol had significantly higher odds of current smoking (AOR = 1.64; 95% CI: 1.25-2.14), while those who offered alcohol had significantly lower odds of current smoking (AOR = 0.71; 95% CI: 0.53-0.95). Conclusions: Social participation is an important correlate of alcohol gifting. Receiving alcohol was significantly associated with both current alcohol and tobacco use. These associations can be used to inform alcohol gifting interventions in China. |
Introduction Family and friends are important sources of individuals' health information (Fisher & Naumer, 2005; Redmond, Baer, Clark, Lipsitz, & Hicks, 2010). Obtaining health information from family and friends is particularly preferred by older adults, many of whom are simultaneously managing multiple chronic conditions (Cotton & Gupta, 2004; Ramanadhan & Viswanath, 2006). Among rural adults, social networks are particularly important to the exchange of health information (Arcury, Grzywacz, Ip, Saldano, Nguyen, Bell, et al., 2012). Individuals with similar life experiences or health conditions may share knowledge and experiences that others, including family, cannot understand (Arcury et al., 2012). Acquiring health information can occur in both formal and informal social contexts that are not intended as venues for health information exchange, yet the exchange arises through the social encounter (Pettigrew, 1999; Fisher et al., 2005). Examples of such contexts include obtaining health information before or after worship services, in the workplace, and while having lunch with friends. These illustrations are consistent with the more general precept that people use well-established habits to acquire health information, habits that frequently prioritize ease of access and interpersonal trust of the source (Harris & Dewdney, 1994; Case, 2002). Although a well-developed literature documents that informal sources of health information are preferred and widely used by older adults, research describing, much less explaining, older adults' health information seeking behavior is generally absent.
Previous research clearly describes individuals, frequently women, as central nodes or informal sources of health information within a community (Colon-Ramos, Atienza, Weber, Taylor, Uy, & Yaroch, 2009), whether for general health knowledge or more specific knowledge such as complementary therapies or traditional remedies (Arcury, Grzywacz, Stoller, Bell, Altizer, Chapman, et al., 2009). Less clearly delineated are the processes and mechanisms by which older adults obtain, and potentially become disseminators of, health information. Understanding older adults' health information seeking and sharing behavior has substantial theoretical and practical value. Theoretically, the exchange of health information has both general and specific implications. Potential differences in how subpopulations exchange health information likely contribute to the persistent health disparities observed by gender, race, or socioeconomic status (Ackerson & Viswanath, 2009). More specifically, the exchange of health information is one mechanism by which social networks are presumed to affect health outcomes (Ackerson & Viswanath, 2009; Berkman & Glass, 2000). Thus, a clearer understanding of how health information is exchanged can offer insight into health disparities, including the mechanisms by which the social environment "gets under the skin." A better understanding of how health information is exchanged also has practical implications, as it may inform strategies for minimizing the diffusion of poor or potentially harmful health information or improving the diffusion of useful health information. This analysis has two central aims. First, it aims to improve understanding of older adults' health information seeking behavior. Second, it aims to enhance understanding of cultural factors that can contribute to the meaningful design of geriatric health education programs.
To achieve these aims we emphasize two salient attributes of information seeking behavior: breadth and intensity. By breadth we mean the number and variety of venues or sources for acquiring information. Intensity refers to the level of effort individuals expend in acquiring or exchanging health information. It is expected that the older adults in this study will vary in both the breadth and the intensity of their health information seeking. The primary aims of the analysis are to document older adults' sources of health information, describe the purposes of health information seeking, and describe the variation in effort given to health information seeking. --- Methods --- Sample This study was conducted in three south-central North Carolina counties. The counties represent variation on the urban-rural continuum (http://www.ers.usda.gov/Data/RuralUrbanContinuumCodes/): one is in a metropolitan area of 250,000-1 million population, one is a non-metropolitan county with an urban population of 20,000 or more adjacent to a metropolitan area, and one is a non-metropolitan county with an urban population of 2,500-19,999 adjacent to a metropolitan area. A site-based procedure (Arcury & Quandt, 1999) was used to implement the ethnographic sample design to recruit participants who reflect the range of knowledge, beliefs, and practices in the community (Werner & Bernard, 1994). We recruited 62 participants, with approximately equal numbers of African American and white women and men, from sites across the study counties that served different ethnic and social groups. Data collection continued until saturation was reached and no new insights were gathered.
Participants were recruited from 26 sites that included four congregate meal sites, two home-delivered meals programs, two senior housing sites, four senior centers and clubs, a local AARP affiliate, three churches, three county social service programs, three county health department programs, a local restaurant, and two other research projects. A gate-keeper at each facility or a project staff member presented information about the project to the older adults who were present. Based on this information, older adults who wished to participate provided contact information. Once a project team member made contact, a time and location for an interview was decided upon. Attention was also paid to participants' educational attainment and migration history in recruitment. We asked participants about their migration history because living in different regions could shape social network size and individuals' activities within those networks. Migration history had three categories: non-migrants, return migrants, and in-migrants. Non-migrants had lived their entire lives in the same community. Return migrants were born in the study area but had moved to other areas, such as other cities in North Carolina, New York City, New Jersey, and various other places in the US and abroad, for work or military deployment before returning to their native communities. In-migrants had lived in various places within North Carolina. This categorization based on migration history has been applied in other research (Arcury, Grzywacz, Neiberg, Land, Nguyen, Altizer, et al., 2010). --- Data Collection Data collection was completed over a nine-month period (February through October 2007) by five trained interviewers. Interviewers conducted the interviews at a location of the participants' choice, usually their homes. Interviewers explained the project and obtained signed informed consent.
Participants received a small incentive ($10) at the end of the interview. In-depth, tape-recorded interviews ranged in length from about one hour to three hours. The Wake Forest School of Medicine Institutional Review Board approved all study activities. --- Interview Content The main focus of the in-depth interview was to capture information about the use of complementary therapies and the beliefs surrounding use of these therapies. More detailed information about the interview content has been published (Arcury et al., 2009). A substantial component of the interviews sought to identify where individuals obtained information about health conditions and treatment, as well as the extent to which people shared that information. Participants were also asked about the contexts in which people shared health information. In particular, participants were asked what type of health information people share with each other; where these conversations take place; whom people talk to other than a health care provider; who the people are who talk about health and illness; whether men or women talk more about health and illness; and whether there are lay people in the community who are asked for advice about health and illness. To better understand community standards about the use of complementary therapies, participants were asked to what extent people discuss different types of therapy use and where people learn about such therapies. --- Data Analysis Data analysis was based on a systematic, computer-assisted approach (Arcury & Quandt, 1998). Atlas.ti 6.0 software was used for qualitative data management, systematic coding, and analysis. All interviews were transcribed verbatim and edited for accuracy. Analysis was an iterative process. Initial case summaries were written for each participant, and a coding dictionary was developed from the initial transcript review and case summaries. Each transcript was reviewed and coded by one member of the project team.
At the end of coding, the initial case summaries were reviewed and revised by the project team member who coded the transcript. A second team member then reviewed the coded transcript and suggested revisions to the coding and the case summary. At the end of the process, each transcript and case summary had been reviewed by at least two project team members. There was a high level of inter-rater agreement in the coding process, although no rate of inter-rater agreement was calculated. Instead, discrepancies were discussed as a team during project meetings. More often than not, discrepancies were due to errors of omission, not inclusion. While the data were not quantifiable, relevant themes were highlighted and evaluated for salience. --- Results --- Participants Interviews were completed with 17 African American women, 14 African American men, 15 White women, and 16 White men. They included 21 participants aged 65 to 69, 15 participants aged 70 to 74, 13 participants aged 75 to 79, and 13 participants aged 80 and older. Although participants varied in education, income, and migration status, these characteristics were not related to their sources and seeking of health information. More descriptive demographic information is included in Table 1. --- Sources of Health Information Friends were the dominant source of health information for older adults. Family was notably absent as a source of health information. Participants described how friends frequently shared information about health, including basic information about illnesses or specific symptoms. In most of the situations described, participants talked about how conversations at churches or other recreation sites for older adults often shifted to health topics. I think when you become friends with a different group of people and you know, especially women, women's groups, I can't ever think of when I go to Presbyterian Women's Group that something like that isn't discussed.
Somebody doesn't show up because they're sick or something, you know. Oh, wonder what she's taking or wonder if she's been to the doctor or you know, I tried this or I tried that and maybe we ought to call her and see what she's taking. That's the way. It's just like recipes. (PART042, White Female, More than High School Education) However, in some instances the transmission of health information within group social settings was purposeful and intentional. For example, one African American female described how her church, the most common social setting for older adults, provides health-related seminars. To ease the potential for discomfort, men and women have separate meetings....women will have it [seminars at church] like on Saturday and the next Saturday men will have a seminar so they can get to ask about things that women wouldn't talk about around men and the men some things they wouldn't talk about around us, so I carry my husband and leave him so he can ask questions. (PART029, African American Female, Less than High School Education) Health information was also obtained from several media sources. The dominant media source for health information was television programming. In most cases, health information was disseminated as part of a broader message: the programming typically was not focused on health content. --- Interviewer....do you know how it's [L-tyrosine] supposed to help your memory? Respondent No. It's supposed to just give your body the thing, the natural substance, that are low that cause the memory loss...I got it from a doctor on television, Dr. Perone on "Eye on Health"...Well he just said these help restore our natural functions. There are several things he advocates. (PART041, White Female, High School Education) Print sources of mass media were also sources of health information for older adults.
Although several participants described how they obtained information about illnesses or disease from the newspaper, comparatively more participants described books or magazines that they regularly used for health information. Many participants maintained, and were eager to show interviewers, their "library" of health-related resources. Respondent I've got a big library. I love to read, and I get a hold of a medical book, Husband She's got cases full of them. Respondent I'll get a hold of a medical book and read it. My husband had emphysema and I read up on that to see what that incurred and I've taken care of him for fifteen years and his doctor told me I made a good doctor. (PART033, White Female, Less than High School Education) Although computers and the Internet were less widely used than television or print materials, several participants expressed a general belief that they were sources of health information, in large part because they allow individuals access to previously unavailable information on specific health topics of interest. Nevertheless, there was a noteworthy absence of participants' discussions about how they personally used the Internet to gather health information:...and I would think television and computer has brought a lot of information for people they never had before, because they can access it, especially computer. (PART006, White Female, More than High School Education) Older adults also rely heavily on health care providers for health information. Participants reported receiving health information from a wide variety of providers, including doctors, nurses, physician assistants, and pharmacists. In most cases, physicians were seen as the definitive source of health information, especially about treatments for chronic conditions like heart disease or diabetes. However, a noteworthy number of older adults did not defer to health care providers as the definitive source of information about health.
In some cases participants enacted alternative practices rather than modern medicine,...I think I might refer to it as energy reception or something...I do know that I can receive energy from nature. From being in nature...You know that old saying about tree hugging and stuff like that? A tree will give you energy if you go to a tree and put your hands on it and ask for energy, you can receive energy from a tree...When I was so confined with this knee, I was in the house totally for a couple of weeks and got really stressed, really depressed, really out of my element...so I decided that what I needed to do was I had to get outside...so I had my son set up the porch in such a way so that I got outside and in contact, you know, everywhere you look there's nature...Yeah and that reenergized me. It centered me again. (PART045, White Female, High School Education) In other cases participants questioned the provider's motivation for sharing the information. --- Interviewer When you use these different remedies or tonics, what do you tell your regular doctor? Respondent I don't tell him nothing. They'd be surprised...Well if he would ask me did that hurt or something, I would tell him "no" and then he would ask me "well, wonder what stopped it from hurting," then I might would tell him, but since he don't ask I don't tell him...I feel like he's the doctor, he should know, but a lot of things doctors don't know about these old remedies and sometimes, some doctors is against old remedies because that's going to cut their money off. (PART014, African American Male, Less than High School Education) --- Purpose of Health Information Seeking There was variation in the reasons for health information seeking. Many older adults engaged in health information seeking when they were confronted with new or unfamiliar symptoms.
If the new symptoms were viewed as benign or non-threatening, older adults used their social networks, typically friends and peers, to obtain information....you know when you get to looking at me, saying man what's wrong with you? You're looking dull. I say, well my stomach's been bothering me...You come up with something I can do, tell me to do it...Well, take this here and they describe something or other home remedy, something that they say, 'if this don't help it then you need to go to the doctor. (PART019, White Male, Less than High School Education) However, if the symptom was viewed as serious, older adults were more likely to seek advice from a health professional, although in most cases the advice seeking was more akin to "fix me" than "give me information"....The only time that I have any chest pains is when I get kind of aggravated and stressed. That kind of stress causes you to have pain in your chest...whenever I get that and I know I have stress I try to see a doctor, to see my regular doctor for that because I know I can't go on with the pain because sometimes it really hurts and it ain't been long since I had that and I know stress will kill you quick as anything else...(PART011, African American Female, Less than High School Education) Older adults also consulted a health care provider when they did not know of an appropriate or effective treatment or when a known treatment did not work. If I have something on hand that I know will work, I'll use it. Otherwise, I'll go to the doctor. (PART021, White Female, High School Education) Less common was information seeking from friends and peers for treatment or ongoing management of chronic medical conditions. More often than not, participants turned to their health care providers for advice on treatment of chronic conditions, as the health professional was presumed to be more knowledgeable about medical conditions.
I've always felt like he [doctor] knew what he was doing and I should follow his directions and not Grandpa's. (PART060, White Male, High School Education) Least common was information seeking for health promotion or strategies to reduce the likelihood of subsequent illness or disease. When older adults talked about information for health promotion, they frequently referenced historical information such as experiences during childhood. Respondent Way, way back my, who was that, my grandmother, my grandmother used to say take a little teaspoon of vinegar it'll help to keep your pressure down. Interviewer Did she say why it help to keep your pressure down? Respondent No, she didn't because back in them days you didn't ask your mother all these questions...you go on and take it. (PART043, African American Female, High School Education) The few individuals who did seek out information on contemporary forms of health promotion were generally more health-oriented or health-focused. I think probably it [Goji juice] gives you an extra edge against cancer and things like that because of the antioxidants and things...I read about it in one of my herbal magazines and ordered some and my son is on it in particular. I make sure that he stays on it all the time and one of the things it does is supposedly helps you fight against depression and I think it helped him do that. (PART044, White Male, More than High School Education) --- Active Versus Passive Information Seekers Participants expressed a clear range of effort put into health information seeking. Some older adults actively sought health information, whereas others passively consumed this information. Active health information seekers deliberately sought information about health from a broad variety of sources and incorporated a large volume of health information relative to their peers. I guess what you have a natural affinity for is what, somehow you get led to.
If you are meditating and really trying to plug into that universal knowledge that's out there, if you're really open, you'll be led to where you need to go and that was all a part of it, I think. I started reading, you know. Certain articles in magazines that I would see would catch my eye, or I would be at the bookstore and a certain book would catch my eye and it was generally within a theme. You know the meditation, the natural living, taking care of your own body, herbs, things like that. (PART045, White Female, High School Education)

Within this group there were some women, African American and white, who portrayed themselves as nodes of health information. They were contacted when others had health-related questions, as they were known throughout the community for having health knowledge. These women arrived at this status in different ways, typically by way of medical education or life experiences. These experiences across time made them better prepared to be nodes of health information.

I got a niece that she's the principal over here at [name] High and she said, "Aunt [name], I just have so many hot flashes I can't hardly stand it." I said, "Well, get you some sage and some sugar and put it in a bag and put it under your tongue... and let it dissolve..." (PART029, African American Female, Less than High School Education)

Evaluation of acquired health information was imperative for health information seekers. As a result of the increasing amount of available health information, some participants expressed concern about the quality of the information they received. Critical thinking about presented information was common among those who were actively engaged with the health information they received. They evaluated and sometimes incorporated acquired health information into their health management. A greater proportion of men than women were passive consumers of health information.
--- Women versus Men: Differences in Information Seeking Behavior

There were notable differences in the breadth and intensity of information seeking by gender. Several participants stated that older women regularly engaged in discussions about health information. These discussions occurred across settings. It was suggested by numerous participants that whenever older women gathered, the conversation inevitably addressed health issues.

...anywhere we go, my wife's friends will be talking about some ailments. (PART049, White Male, More than High School Education)

Conversations about health were less common among older men. For a variety of reasons, men typically chose not to discuss their health with other men. Reasons cited for this lack of discussion included, "It's none of their business;" "They don't want to hear it;" and "Others will tell you what to do." Instead, they chose to discuss their health only with health care providers or their wives.

Respondent: I've heard women talk about their health. I ain't heard too many men talk about it. I think they're ashamed to.
Interviewer: Why might they be ashamed?

--- Discussion

This qualitative analysis focused on the sources and strategies that older adults used to obtain health information. A substantial portion of these older adults participated in health information sharing. The association between health behaviors and social networks is well documented (Colon-Ramos et al., 2009; Fisher et al., 2005; Ackerson & Viswanath, 2009). Research describing the exchange of health information within these social networks, however, is generally missing. Our goal was to improve understanding of the variation in older adults' health information exchange, in order to inform future health education efforts. Our results suggest that friends are the primary source of health information for rural older adults.
Unlike past research (Colon-Ramos et al., 2009; Rains, 2007), family was not a central source of health information for our participants. An analysis of older adults with diabetes in the same counties indicated that they received more self-management help from "other relatives" and friends than from children (Arcury, Grzywacz, Ip, Saldana, Nguyen, Bell, et al., 2012). The one exception was a small group of men who turned to their wives for health information. Results from other studies suggest that family is a primary source of health information. However, participants in previous studies were much younger than participants in the present study; therefore, it is possible that older adults may have fewer family members to turn to, or that family members may be primarily younger individuals with less experience than older adults seek. However, data to confirm this were not collected. Rurality should also be considered as a factor in the absence of family as a central source of health information for our participants. For many older adults, children have moved out of the area to attain employment because of difficulties finding work in rural areas. As the number of proximal family members decreases, older adults shift their focus to maintaining ties in social groups made up of friends and community peers (Arcury, Quandt, & Bell, 2001). The importance of peers and friends for health information is particularly salient among rural older adults, who have less access to medical care and formalized sources of health information (Stoller, Grzywacz, Quandt, Bell, Chapman, Altizer, et al., 2011). Family and friends have been combined into a single variable in some research (Hesse, Nelson, Kreps, Croyle, Arora, Rimer, et al., 2005), but among our participants, these are clearly two independent sources of health information. An important finding from this study is the enduring importance of traditional print media to the current generation of older adults.
Yet, among these older adults we find some also using electronic media. The number of individuals using electronic media for health is far greater among those aged 55-64 than among those 65 and older (Fox & Duggan, 2013). Further, the use of electronic media for health information increases when there is concern about a specific health problem. While print media and television were the predominant media sources used, some older adults commented that the Internet has made it easier to obtain health information, somewhat defying the stereotype of rural adults lacking the knowledge or skills necessary for the Internet. Yet, even among those older adults who actively sought health information, more traditional media sources like books, newspapers, and television programming were preferred, which should be taken into account by health educators. While this study finds that older adults prefer mediums other than online for accessing health information, it should be acknowledged that, consistent with other research, more older adults are accessing online health information (Montague, Zulman, & Lawrence, 2011). A second finding from this study is the substantial passivity in older adults' pursuit of health information. These results indicate that when providing health education to older adults, particularly rural older adults, health educators should be direct and emphasize the importance of information, because many older adults are not actively seeking it. Some past research described health information acquisition as an issue of access to information. Ramanadhan and Viswanath (2006) explain information seeking within the context of communication inequality, which they define as "disparities among social classes and racial and ethnic groups in access to and use of information channels, attention to health content, recall, knowledge, comprehension of health information, and capacity to act on relevant information" (Ramanadhan & Viswanath, 2006).
Contrary to other literature (Cutilli, 2007; Kivits, 2004), our data suggest that while many older adults have access to health information, a portion of them simply are not engaged in acquiring, synthesizing, or applying that information. This finding is compelling because it calls into question the general assumption that having health information is a desired end point. It also raises questions about how to disseminate information to a market that does not demand it. Even if some of our participants who lack access to health information were provided access, it is unclear whether or not they would utilize it. It has been shown that a sizable proportion of ailments experienced by older adults are attributed to old age (Sarkisian, Liu, Ensrud, Stone, & Mangione, 2001). Thus, passivity may reflect the notion that some health issues are not conditions requiring attention, but rather part of the body "acting its age." Another potential explanation of passivity may be that, as health literacy is positively associated with social support, older adults lacking in social support potentially have lower health literacy than older adults who have adequate social support (Lee, Arozullah, Cho, Crittenden, & Vicencio, 2009). Understanding the varying degrees of information seeking, or non-seeking, among these older adults may help bridge any existing gaps in health between those who are active, passive, or non-seekers of health information. In this study, health information sharing was not related to ethnicity. About the same proportion of African American and White women, and African American and White men, participated in health information sharing, highlighting the lack of ethnic differences among our study participants. There were no differences in health information sharing related to migration status.
As past research indicates, far more women than men in our study actively participated in health information seeking (Carlsson, 2000; Weaver, Mays, Weaver, Hopkins, Eroglu, & Bernhardt, 2010). Older women in this study comprise the informal health care system that many older adults utilize. A partial explanation for this finding is that, historically, women have assumed the role of caretaker. Health information seeking, for themselves or others, perpetuates this role. Also, it may reflect their desire to understand health information as a resource to support better quality of life or successful aging (Manafo & Wong, 2012). Further, men commonly neglect to visit a health care professional when ill or fail to report the symptoms of illness or disease. Women rely more on social ties for the acquisition of health information than men, and this may delay their decision to seek health information from a health care provider, delaying more effective treatments (Grzywacz et al., 2011). While women were more involved in health information seeking than men, there was notable passivity among the majority of older adults in the study, regardless of gender. Although consistent with previous research suggesting that "watchful waiting" is a common health self-management strategy (Stoller, Forester, Pollow, & Tisdale, 1993), this is counter to the common notion that people are quick to begin searching for health information when symptoms or illnesses arise. It is clear from this analysis that many older adults seek health information, especially when presented with health problems. This observation is consistent with the health self-management literature and approaches like the Self-Regulatory Model (Leventhal, Halm, Horowitz, Leventhal, & Ozakinci, 2004), which posits that people are active in solving health-related problems.
However, in contrast to the presumption that most individuals engage in active problem solving, we found that older adults were relatively passive in their acquisition of health information. This suggests that seeking health information, via formal or lay networks, may not be a dominant strategy for health self-management. While past research has focused on health information seeking for the treatment of chronic conditions (Ackerson & Viswanath, 2009), older adults in this analysis primarily sought information in an attempt to manage a new or seemingly benign health condition, taking more serious health concerns to a health care provider. However, little evidence in our data indicated that health care providers were seen as sources of health information per se, as much as agents for change. When older adults wanted to feel better, they would visit a health care provider. This study has several limitations. First, this study uses qualitative data from a small sample of rural older adults, and the study design and analysis reflect the inherent limitations of qualitative studies. Second, participants were representative only of rural older adults living in south-central North Carolina, and we cannot generalize beyond this population. Third, the central focus of the interviews was to gather information about older adults' use of complementary therapies. Within this conversation, participants were asked how they acquire or share health information. It is possible that participant responses were influenced by the overall focus of the interview. Lastly, participants were not randomly selected, and statistics are not applied to these data. However, the sample of 62 participants was relatively large for qualitative analyses, and participants were recruited from 26 sites. This study considers health information exchange among rural older adults, a subject that has not been well described in past research.
Results indicated that friends, not family, were the most common source of health information, and most older adults had a relatively passive approach to acquiring health information. Both these findings have important implications for health care professionals, including health educators. Health information seeking was not related to ethnicity or migration history, suggesting a common cultural influence that is more reflective of the characteristics of their rural community than of ethnicity (Arcury, Quandt, & Bell, 2001). Results were also unrelated to educational attainment. Women in these communities are more invested in the acquisition of health information than men; some are considered health experts or "nodes" of health information. This has an important implication for the growing need for culturally competent geriatric health educators. Friendship networks or other leaders can be invaluable to the dissemination of health information. Understanding the reliance older adults have on social networks will prepare health educators to tailor programs to meet their needs and to help older adults help each other. The "nodes" of health information in various communities act as lay health mediators and foster social cohesion (Abrahamson, Fisher, Turner, Durrance, & Combs, 2008). Equipping these "nodes" of health information with accurate health information, or access to such information, can create much needed lay geriatric health educators. While they comprise only a small portion of the population, these lay-persons in the community play a role in the discussion of health information and have a vital role in the increasing importance of geriatric health education. These results indicate that, when providing health education to older adults, particularly rural older adults, health educators have three primary tasks. First, they should provide health information in the forms that are most appropriate for their audience, print media or otherwise.
Second, given the general passivity of many older adults, health educators should be direct and emphasize the importance of active health information seeking. Lastly, health educators should partner with the widely used friendship networks and lay intermediaries for broader dissemination of accurate information.

Table 1: NCCAM Demographics

Summary: This study documents older adults' sources of health information, describes the purposes for health information seeking, and delineates gender and ethnic variation in health information seeking. Sixty-two African American and white adults age 65 and older completed qualitative interviews describing their use of complementary therapies. Interviews identified how individuals obtained and shared health information. Friends, not family, were the dominant source of health information. Participants ranged from active seekers to passive consumers of health information. Information seeking was common for benign symptoms. More women than men discussed health information with others. Friends are the primary source of health information for rural older adults. There is substantial passivity in the pursuit of health information. Identifying the health information sources of rural older adults can support the dissemination of information to those who share it with others.
Background

The factors shaping the health of the current and largest generation of adolescents in human history are multidimensional, complex and unparalleled [1,2]. Until recently, adolescent health has been overlooked and misunderstood, which is one reason why adolescents historically have had fewer health gains than any other age group [1], and hence are now central to a number of major current global health challenges [2]. However, addressing adolescent health potentially provides a triple dividend, with benefits now, later in adult life and for the next generation of children [1]. Further, the period of adolescence may also provide a second chance to reduce or reverse early-life disadvantage [3]. In recent decades, theorists have argued that understanding the factors driving growing adolescent health concerns requires a broad focus [4]. Clearly, risk and protective factors of adolescent health include levels of physical activity, substance use, alcohol consumption, tobacco usage [5], diet [1], adolescent abnormal weight (underweight, overweight) and mental health [5]. However, it has been asserted that as well as focusing on an individual's health risk and protective factors, the upstream social patterns and structures in which adolescents exist need to be considered [4]. Ecological theorists [6] argue that an individual's social environment, both present and past, influences their health behaviors and health outcomes, mediated by other factors including their demographics and physical and psychological makeup. Social environments are multifaceted and include peer, school, community, societal, cultural, new media influences and family dynamics [7]. Adverse childhood experiences (ACEs) such as psychological, physical or sexual abuse, violence, parental substance abuse, parental separation/divorce, parental incarceration or the death of a parent, close relative or friend may also influence health behaviors and health outcomes in adolescents [8][9][10][11].
Conceptual frameworks have been developed to represent the complex web of causal "pathways" through which social factors interact with an individual's health risk and protective factors throughout the life course (Fig. 1: Conceptual Framework for Determinants of Health) [12]. However, these models have not been tested among adolescent populations. Self-rated health (SRH) is a legitimate and stable construct used in adolescent populations [13][14][15][16][17][18][19][20][21]. Reviews by Idler and Benyamini [19] proposed that an individual's health status cannot be assessed without the SRH measure, as it captures an "irreplaceable dimension of health status," spanning past, present and future physical, behavioral, emotional, cognitive [22] and social [20] dimensions of health. Widespread agreement in the literature [15,[23][24][25]] recognizes that SRH is a complex parameter affected by multifarious determinants. Specifically, SRH is influenced by higher body mass index [17], mental health (emotional wellbeing, acceptance [20], self-esteem [16]), select health behaviors [20] (diet [18], physical activity [20], substance abuse [16], lack of sleep [14]), demographics (age, gender [13]) and social factors (family dynamics, child-parent relationships, school achievement [16], positive school experiences [13], socio-economic status [18], religion [26]). Many of these factors have complex interrelationships [23], directly or indirectly affecting self-perception of health status [15]. While an increasing number of studies have been reported on SRH among adolescents [13], most research in this field [13][14][15][16][17][18][24][25][26] addresses only select factors affecting health status, and thus yields only partial or confounded information on the determinants of adolescent health [23]. Investigations need to assess concomitantly the factors associated with this multi-faceted health measure [23].
Utilizing structural equation modeling and SRH as a measure of health status, this study aimed to explore concomitantly the complex relationships between SRH and social environments, health behaviors and health outcomes among adolescents attending a faith-based school system in Australia.

--- Methods

--- Study design and participants

In 2012, 1734 students aged 12 to 18 years responded to a health and lifestyle survey that was administered in 21 Seventh-day Adventist (Adventist) private secondary schools in Australia. The database created by this survey has been used in previous studies [27,28]. Seven hundred and eighty-eight students from this database met the inclusion criteria for this study, which required usable data for the following domains: SRH, BMI, mental health, and vitality. Notably, BMI data were not collected on over 900 students in the database, hence these cases did not meet the inclusion criteria. The study was approved by the Avondale College of Higher Education Human Research Ethics Committee (No:2011:21), and participation in the study was voluntary and anonymous. A hypothesized model informed by ecological theory and the conceptual framework for determinants of health [12] is presented in Fig. 2. The dependent variable was the measure SRH. In order to concomitantly explore factors associated with SRH yet retain a parsimonious model, we delimited the study by restricting the explanatory variables to the following: health outcome variables (BMI, mental health, vitality); health behavior variables (sleep hours per night, amount of moderate to vigorous physical activity, fruit and vegetable intake, vegetarian diet, marijuana use, alcohol consumption and tobacco use); and demographic and social variables (age, gender, ACEs, childhood family dynamics (CFD), religious affiliation).
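As a rough illustration only, a model of this shape can be written in the lavaan-style syntax used by R's lavaan and Python's semopy. The paper's actual analysis used AMOS, and the variable names and the subset of paths below are shorthand assumptions, not the study's specification:

```python
# Illustrative lavaan-style description of a hypothesized SRH model.
# All names and paths here are assumptions for demonstration; the study
# specified its model graphically in AMOS, not in this syntax.
model_desc = """
# health outcomes regressed on behaviors, demographics and social factors
BMI ~ sleep + age + gender
mental_health ~ sleep + activity + fruit_veg + age + gender + ACEs + CFD
vitality ~ sleep + activity + fruit_veg + vegetarian + gender + ACEs

# self-rated health regressed on outcomes, behaviors and ACEs
SRH ~ BMI + mental_health + vitality + sleep + activity + fruit_veg + vegetarian + ACEs
"""

# With semopy, such a description would be fitted roughly as:
#   model = semopy.Model(model_desc)
#   model.fit(df)                # df: one column per observed variable
#   semopy.calc_stats(model)     # chi-square, CFI, TLI, RMSEA, etc.
print("SRH ~" in model_desc)
```

The point of the sketch is the layered structure: behaviors and social factors feed the health outcome variables, which in turn (together with the behaviors) feed SRH, mirroring the direct and mediated pathways the study tests.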
--- Survey instrument

The survey instrument recorded the participant's SRH as well as: BMI; measures of mental health and vitality; selected health behaviors; personal demographics; and social influences.

--- Self-rated health status

SRH status was assessed with a single item involving a five-point Likert scale ranging from "Excellent," "Very Good," "Good," "Fair" and "Poor."

--- Body mass index

Height and weight were self-reported and used to calculate BMI using the standard equation: BMI = weight in kg / (height in m)².

--- Mental health and vitality

Mental health and vitality were measured using the validated and reliable [29] five-item mental health and four-item vitality subscales from the SF-36 [30]. These subscales measure general mental health status and assess the individual's energy and fatigue. Each item in the mental health and vitality subscales has six response options ranging from "All of the time" to "Not at all." Standardized scores for these subscales were calculated, creating a 0-100 scale according to the standard procedure for calculating the mental health and vitality scores [31]. Higher scores indicated better mental health and vitality. Internal reliability of the mental health and vitality subscales has been reported at α = .78 to .87 and α = .72 to .87, respectively, in studies across eleven countries [32]. As seen in Table 1, the reliability of vitality in this study was comparatively lower than in these reports.

--- Selected health behaviors

Sleep hygiene was assessed by an item that asked: "How many hours do you usually sleep per night?", with eight response options ranging from "3 h or less" to "10 h or more". Physical activity was measured by an item that asked: "How many times per week do you usually do any vigorous or moderate physical activity for at least 30 minutes?", with seven response options ranging from "none" to "6 or more times" [33].
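A minimal sketch of the scoring just described: the BMI equation and the standard 0-100 transformation applied to summed SF-36 subscale items. The function names are illustrative assumptions, not the study's code:

```python
# Sketch of the BMI equation and SF-36 subscale standardization described
# above. Names are illustrative; responses are assumed already recoded so
# that higher values mean better health.

def bmi(weight_kg: float, height_m: float) -> float:
    """Standard BMI equation: weight in kg / (height in m)^2."""
    return weight_kg / height_m ** 2

def sf36_standardize(item_scores, item_min=1, item_max=6):
    """Transform a summed SF-36 subscale onto a 0-100 scale.

    item_scores: per-item responses (the mental health and vitality
    items each have six response options, here coded 1-6).
    """
    raw = sum(item_scores)
    lowest = item_min * len(item_scores)               # lowest possible raw score
    possible_range = (item_max - item_min) * len(item_scores)
    return (raw - lowest) / possible_range * 100
```

For example, the five-item mental health subscale has a raw range of 5-30, so a raw score of 20 maps to (20 − 5) / 25 × 100 = 60 on the 0-100 scale.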
Fruit and vegetable intake was assessed using food frequency questions adapted from items previously used in adolescent studies [34]. Fruit consumption was measured by an item that asked: "How many serves of fruit do you usually eat each day? (A serve = 1 medium piece or 2 small pieces of fruit or 1 cup of diced pieces)". Response options ranged from "I do not eat fruit" to "6 serves or more". Vegetable consumption was measured by an item that asked: "How many serves of vegetables and salad vegetables (exclude potatoes) do you usually eat each day? (A serve = 1/2 cup of cooked vegetables or 1 cup of salad vegetables)". Response options ranged from "I do not eat vegetables" to "6 serves or more". The fruit and vegetable items were summed to provide an overall fruit and vegetable intake score. As a measure of the respondents' overall diet, an item asked: "How would you describe your usual diet?" Response options included: 1. "Total Vegetarian (no animal products: no red meat, chicken, fish, eggs, or milk/dairy products)"; 2. "Lacto-ovo vegetarian (no red meat, chicken or fish but diet includes eggs and/or milk/dairy products)"; 3. "Pesco-vegetarian (diet includes fish but no red meat or chicken, but may include eggs and/or milk/dairy products)"; 4. "Non-Vegetarian (diet includes red meat, chicken, fish)". For the purpose of this study, this item was dichotomized as a vegetarian (response 1 or 2) or non-vegetarian diet (response 3 or 4). This item was included in the study because a high proportion of Adventists adhere to a vegetarian diet [35]. Alcohol consumption, tobacco and marijuana use were assessed with frequency questions ranging from "none" to "60+" for alcoholic drinks drunk and cigarettes or marijuana smoked in the last four weeks.

--- Religion

Religious affiliation was included in this study due to the special nature of the sample.
Previous studies report associations between religion and SRH [36] with some reviews reporting that this association is unaffected when controlling for demographic variables [37]. Religious affiliation was assessed by asking the participants: "Which of the following best describes your religious belief now?" Options ranged from: 1. "Seventh-day Adventist Christian", 2. "Other Christian", 3. "Other Religion", 4. "No Formal Religion", and 5. "Don't Know". This item was dichotomized to "Non-Adventist" (response 2-4), and "Adventist" (response 1). --- Social factors In this study, an Adverse Childhood Experiences score [8][9][10][11] was generated by collating responses from the following nine items: 1. "One or both of my parents were in trouble with the law," 2. "My parents were separated or divorced," 3. "One or both my parents died," 4. "One or both parents were absent from home for long periods," 5. "There were times when family violence occurred," 6. "There were times when I was physically abused," 7. "There were times when I was sexually abused," 8. "One or both parents smoked tobacco," and 9. "One or both parents drank alcohol weekly or more often." Each of the nine items included no/yes response options which were given a corresponding value of zero or one. Responses from each item were summed to calculate an overall ACEs score. Childhood family dynamics were assessed by creating a CFD score from six items, namely: 1. "As a child, my parents showed me love," 2. "As a child, my parents understood me," 3. "While I was a child my family had a lot of fun," 4. "As a child, my parents didn't trust me," 5. "As a child, my parents didn't care what I did," and 6. "As a child, I enjoyed being at home with my family." Each item included five response options ranging from "strongly disagree" through to "strongly agree." Each response was given a corresponding value from one to five and was recoded so that higher scores represented positive outcomes. 
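The ACEs and CFD composite scoring described above can be sketched as follows. The function names, and the zero-based positions of the reverse-coded CFD items, are assumptions inferred from the item wording ("didn't trust me", "didn't care what I did"), not taken from the study's code:

```python
# Illustrative scoring for the ACEs and CFD composites described above.
# Item order follows the text; names and reverse-coded positions are
# assumptions inferred from the item wording.

def aces_score(responses):
    """Sum nine no/yes (0/1) adverse childhood experience items."""
    assert len(responses) == 9
    return sum(responses)

# CFD: six five-point Likert items (1 = strongly disagree ... 5 = strongly
# agree). Items 4 and 5 are negatively worded and are reverse-coded so that
# higher scores always represent positive family dynamics.
NEGATIVE_CFD_ITEMS = {3, 4}  # zero-based positions of items 4 and 5

def cfd_score(responses):
    """Sum six 1-5 Likert items, reverse-coding the negatively worded ones."""
    assert len(responses) == 6
    total = 0
    for i, r in enumerate(responses):
        total += (6 - r) if i in NEGATIVE_CFD_ITEMS else r
    return total
```

For example, a respondent who strongly agrees with every positively worded item and strongly disagrees with both negatively worded items receives the maximum CFD score of 30.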
Responses from each item were summed to calculate the overall CFD score.

--- Analysis

The objective of this study was to simultaneously analyze all paths of the hypothesized model (Fig. 2) in order to explore the complexity of the associations between multiple factors and SRH. Hence, structural equation modeling (SEM) [38] was used to estimate the model fit of the data and analyze the direct and indirect effects of the multiple factors in the hypothesized model. Overall model fit was examined using multiple goodness-of-fit indices, namely: the chi-square (χ²) statistic (CMIN), relative χ² (CMIN/DF), the baseline comparison fit indices NFI, RFI, IFI, TLI and CFI, and RMSEA. Structural equation modeling was carried out using AMOS (Version 24; Amos Development Corporation, Crawfordville, FL, USA). The bootstrapping method [39] was applied to verify statistical significance of indirect and total effects at p < .05. The data were imported into SPSS (version 24; IBM, Armonk, NY) to calculate means, standard deviations, distributions and internal reliability.

--- Results

--- Descriptive statistics

A summary of descriptive statistics and reliability estimates is shown in Table 1. Sixty-one percent of the students in the study reported "very good" to "excellent" health. This is comparable with the 2014-15 Australian Bureau of Statistics (ABS) survey [40], which reported that 63% of young Australians (aged 15-24 yrs) rated their health as very good or excellent. Unique to the study cohort was that 49% of the students reported an affiliation with a Christian faith, together with low rates of alcohol consumption (11% reported consuming alcohol in the past four weeks), tobacco use (4% reported using tobacco in the past four weeks) and marijuana use (3% reported using marijuana in the past four weeks).

--- The model for factors associated with self-rated health in adolescents

The hypothesized model (Fig.
2) based on theoretical considerations was submitted for analysis using techniques developed by Jöreskog and Sörbom [41], utilizing an iterative process of inspection of the statistical significance of path coefficients and the theoretical relevance of constructs in the model to derive an optimal SEM that best fit the dataset and was theoretically meaningful. The items that asked the participants about alcohol, tobacco, and marijuana use were removed from the model due to their non-significant contributions, generating a final structural model (Fig. 3). Modification indices suggested that the health behavior variables be allowed to covary, as well as the health outcome variables mental health and vitality. The final structural model (Fig. 3) as a whole fitted the data very well, as indicated by the goodness-of-fit indices (CMIN = 33.615; p = 0.214; CMIN/DF = 1.201; NFI = 0.976; RFI = 0.933; IFI = 0.996; TLI = 0.988; CFI = 0.996 and RMSEA = 0.016). A CMIN/DF statistic below three is considered good model fit [42], as are baseline comparison fit indices above 0.9 [43]. The RMSEA value was less than 0.06, which indicated a close fit between the data and the model [44]. In Fig. 3, the standardized path coefficients are presented as single-headed arrows, and all shown paths are statistically significant, including all indirect and total effect pathways. The final structural model (Fig. 3) describes the upstream associations of BMI, mental health and vitality, health behaviors, demographics and social factors with SRH, as well as their interactions. The squared multiple correlation calculated for SRH was 0.20, which indicates that the model explained 20% of the variance in self-rated health. Based on standardized path weight coefficients (β's), the health outcome variables BMI (β = -0.11), mental health (β = 0.17) and vitality (β = 0.15) had a direct association with SRH.
This indicates that adolescents who reported a higher BMI reported a poorer SRH, and adolescents who reported higher mental health and vitality scores reported better SRH. The health behavior variables sleep hours (β = 0.11), physical activity (β = 0.09), fruit/vegetable consumption (β = 0.11) and a vegetarian diet (β = 0.10) had a direct association with SRH. This indicates that adolescents reporting more sleep each night, more physical activity, greater consumption of fruit and vegetables and a vegetarian diet also reported a better SRH. The health behavior variables were also associated with SRH indirectly through the health outcome mediating variables. Sleep hours was associated with SRH indirectly through the mediating health outcome variables BMI, mental health and vitality. Physical activity was associated with SRH indirectly through the mediating health outcome variables mental health and vitality. Fruit/vegetable consumption was associated with SRH indirectly through the mediating health outcome variables mental health and vitality. A vegetarian diet was associated with SRH indirectly through the mediating health outcome variable vitality. Of the health behavior variables, sleep hours had the strongest combined direct and indirect association with SRH (β total = 0.178), followed by fruit/vegetable consumption (β total = 0.144), physical activity (β total = 0.135) and then vegetarian diet (β total = 0.103). Of the demographic and social variables, ACEs was the only variable that had a direct association with SRH (β = -0.07), with the other demographic and social variables indirectly associated with SRH. Age was associated with SRH through the mediating health behavior variables sleep hours and physical activity, and through the mediating health outcome variables BMI and mental health.
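The β total values reported here are total effects: the direct path plus the sum of products of standardized coefficients along each mediated path. A minimal sketch for sleep hours, using the paper's direct effect (0.11) but hypothetical mediator paths; the individual mediator coefficients are not reproduced in the text, so the three path pairs below are invented for illustration:

```python
# Total effect of a predictor on SRH = its direct path plus the sum of
# products of standardized coefficients along each mediated path.
# Only the direct effect (0.11) and the approximate total (0.178) come
# from the paper; the individual mediator paths below are HYPOTHETICAL.

direct_sleep_srh = 0.11
# hypothetical pairs: (sleep -> mediator, mediator -> SRH)
indirect_paths = [
    (-0.10, -0.11),  # sleep -> BMI, BMI -> SRH
    (0.20, 0.17),    # sleep -> mental health, mental health -> SRH
    (0.15, 0.15),    # sleep -> vitality, vitality -> SRH
]

indirect = sum(a * b for a, b in indirect_paths)
total = direct_sleep_srh + indirect
print(round(indirect, 4))  # 0.0675
print(round(total, 4))     # 0.1775
```

With these invented paths the total comes to 0.1775, close to the reported β total of 0.178; in the actual model the mediated paths through BMI, mental health, and vitality would be read off Fig. 3.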
Gender was associated with SRH through the mediating health behavior variables sleep hours, physical activity, fruit/vegetable consumption, and vegetarian diet, and through the mediating health outcome variables BMI, mental health, and vitality. (Fig. 3: Structural Equation Model Predicting Self-rated Health Status.) ACEs was associated with SRH directly and through the mediating health behavior variable sleep hours and the mediating health outcome variables mental health and vitality. CFD was associated with SRH through the mediating health behavior variable sleep hours and through the health outcome variable mental health. Religious affiliation was associated with SRH through the mediating health behavior variables sleep hours, fruit/vegetable consumption and vegetarian diet. Notably, of the demographic and social variables in the model, ACEs had the strongest association with SRH (β total = -0.125). Hence, more ACEs were associated with lower SRH. Gender had the second strongest association with SRH of the demographic and social factors (β total = 0.092) and also interacted with the greatest number of the mediating variables in the model. The association of age with SRH (β total = -0.067) demonstrated that older adolescents reported poorer SRH; overall, however, males rated their health better, which is in line with other studies [13]. The association of CFD (β total = 0.047) with SRH demonstrated that adolescents reporting better CFD also reported better SRH. Finally, the model indicated that although the respondents' religion did have indirect links to SRH, its association was small (β total = 0.005). Adolescents who identified as Adventist were more likely to report higher SRH, and better health behaviors, than those who identified themselves as not affiliated with the Adventist Church. --- Discussion This study explored concomitantly the relationships between factors associated with SRH in adolescents attending Adventist schools in Australia.
By including a number of variables in one conceptual model and analyzing them simultaneously, the study is unique in that it was able to describe a complex network of associations between the factors that influence SRH. This study supports the need for a broad multi-component approach to the study of adolescent health. The findings in this study demonstrate the association between mental health and SRH, which is in line with findings from previous studies [20,22]. The mental health measure used in this study had the strongest association with SRH of the three health outcome variables measured and was associated with the most health behavior, demographic and social variables in the model. Several health behaviors (sleep hours, physical activity, and fruit/vegetable consumption), as well as demographics (age and gender) and social factors (ACEs and CFD), had a direct association with mental health. Notably, the association between the adolescents' childhood upbringing (ACEs and CFD) and mental health demonstrates how social factors early in life are associated with mental health status years later in adolescence. The vitality metric used in this study (a measure of energy and fatigue status) had the second strongest association with SRH of the health outcome variables. All health behaviors in the model (sleep hours, physical activity, fruit/vegetable consumption, vegetarian diet), along with gender and ACEs, were directly associated with vitality. Research on vitality is limited; however, one study found that up to 30% of healthy teens experience symptoms of fatigue that affect their normal functioning [45]. The observed influence of health behaviors on vitality in this study highlights the importance of targeting healthy behaviors for improving energy levels and lessening fatigue among adolescents.
There is a wealth of literature supporting the importance of health behaviors to adolescent health [3]; however, a unique aspect of this study was the simultaneous assessment of the association of four health behaviors (sleep hours, physical activity, fruit/vegetable consumption, vegetarian diet) with SRH. This allowed the health behaviors to be ranked according to their strength of association with SRH. While all health behaviors had a direct association with SRH and an indirect association through one or more of the health outcome variables, sleep had the strongest association with SRH, followed by fruit/vegetable consumption, physical activity, and vegetarian diet. This finding highlights the value of prioritizing healthy sleep hygiene among adolescent cohorts [46], although clearly, interventions that address all health behaviors are likely to be most efficacious and therefore desirable. In the SEM analysis, the items measuring the health behaviors consumption of alcohol and use of tobacco and marijuana had non-significant pathways to SRH. It is well documented [5] that these health behaviors influence adolescent health negatively. A possible explanation for their non-significant effect in this study may be that the study cohort reported a low prevalence of these behaviors. While this low prevalence was expected given that the Adventist community proscribes such behavior, further exploration of what motivates the use of alcohol, tobacco and marijuana in a low-using cohort would be of interest. Of the selected demographic and social factors included in the model predicting SRH, ACEs presented as having the strongest association. Indeed, it is notable that adolescents who reported a higher incidence of adverse experiences in their earlier childhood reported poorer SRH in their adolescent years.
Although children may have no choice in the ACEs they experience, this study reinforces the necessity for childhood human rights, health promotion and resilience building [47] to be at the forefront of global policy and intervention development to provide benefits not only in childhood, but also later in adolescent life. Of the five demographic and social factors assessed, gender had the second strongest association with SRH and was associated with the greatest number of mediating variables, interacting with all health behavior and health outcome variables in the model. This suggests that interventions targeting the general health of adolescents may be more effective if they are gender-specific. The influence of CFD and religion on SRH in this model is noted, albeit not as strong as that of ACEs and gender. --- Strengths and limitations The strength of this study is that it concomitantly explored a number of factors associated with SRH and described the complex interactions between these factors and SRH. It is acknowledged, however, that the model presented in this study, although strong, represents only part of the bigger picture of the overall influences on SRH. For example, socio-economic status is a well-known predictor of SRH [17], but it was not assessed in this study as no data on socio-economic status were collected. Another limitation of this study is that it focused on a comparatively homogeneous group of adolescents who were exposed to a faith-based community, namely, Adventist Christians, who place a strong emphasis on health and a wholistic lifestyle. Since its inception in 1863, the Adventist religion has promoted the adoption of a healthy lifestyle to its members that includes regular exercise, a vegetarian diet and rest. Alcohol, caffeine, tobacco and illicit substances are also proscribed (Fraser, 2003). The Adventist population has been the focus of numerous health studies as its members tend to experience good health and lower rates of disease [48].
Adventist schools espouse the health practices of the Adventist church. Hence, while approximately half of this study cohort did not identify themselves as Adventist, they were likely influenced by the health focus of the Adventist church. It is possible that the adolescents in this study under-rated their health status compared to adolescents in the general population due to the high health ideals advocated by the faith-based schools they attend. This may have resulted in these adolescents perceiving and judging "very good" or "excellent" health against a more rigorous standard, which limits the generalization of the findings to other populations. The cross-sectional nature of this study means that only associations could be measured; it is not possible to say whether these relationships were causal. Although SRH has been established as a legitimate and stable construct for measuring general health status in adolescent populations [13][14][15][16][17][18][19][20][21], objective measures of health, including biomedical testing as represented in the conceptual framework for determinants of health [12], may improve the validity of the findings in this study. --- Conclusion This study presented a conceptual model that described the complex network of factors concomitantly associated with SRH in adolescents. The results highlight the association of mental health with SRH. Gender-sensitive interventions prioritizing modifiable health behaviors such as sleep, healthy diet, and physical activity may achieve a greater combined effect in improving adolescent health status than single-factor interventions. The association between ACEs and adolescent SRH reinforces the necessity to address childhood human rights, resilience, family dynamics, and health promotion in children for lasting benefits later in adolescent life.
Further research into what influences the variables interacting with SRH may provide insight into more effective interventions to improve adolescent health. --- Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. --- Abbreviations ABS: Australian Bureau of Statistics; ACEs: Adverse childhood experiences; BMI: Body mass index; CFD: Childhood family dynamics; CFI: Comparative fit index; CMIN: Chi-square statistic; CMIN/DF: Relative χ²; IFI: Incremental fit index; NFI: Normed fit index; RFI: Relative fit index; RMSEA: Root mean square error of approximation; SEM: Structural equation modeling; SRH: Self-rated health; TLI: Tucker-Lewis index; χ²: Chi-square --- Authors' contributions BC, DM, LK, PR and BG conceived of the study and participated in its design and coordination. TB and KP coordinated the data collection. BC and PM performed the statistical analysis and data interpretation. BC drafted the manuscript, and DM, PM, LK, BG and PR assisted in critical revision of the manuscript. All authors read and approved the final manuscript. --- Ethics approval and consent to participate The study was approved by the Avondale College of Higher Education Human Research Ethics Committee (No: 2011:21), and participation in the study was voluntary and anonymous. Written informed consent was collected from parents or guardians and from students. --- Consent for publication Not applicable. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. --- Abstract Background: The factors shaping the health of the current generation of adolescents are multi-dimensional and complex. The purpose of this study was to explore the determinants of self-rated health (SRH) of adolescents attending a faith-based school system in Australia.
Methods: A total of 788 students attending 21 Seventh-day Adventist schools in Australia responded to a health and lifestyle survey that assessed SRH as well as potential determinants of SRH, including the health outcomes mental health, vitality and body mass index (BMI), select health behaviors, social factors and personal demographics. Structural equation modeling was used to analyze the data and examine the direct and indirect effects of these factors on SRH. Results: The structural model developed was a good fit with the data. The health outcome mental health had the strongest association with SRH (β = 0.17). Several upstream variables were also associated with higher SRH ratings. The health behavior sleep hours had the strongest association with SRH (β total = 0.178), followed by fruit/vegetable consumption (β total = 0.144), physical activity (β total = 0.135) and a vegetarian diet (β total = 0.103). Of the demographic and social variables measured, adverse childhood experiences (ACEs) had the strongest association with SRH (β total = -0.125), negatively influencing SRH, and gender was also associated with an increase in SRH (β total = 0.092), with the influence of these factors being mediated through other variables in the model. Conclusions: This study presents a conceptual model that illustrates the complex network of factors concomitantly associated with SRH in adolescents. The outcomes of the study provide insights into the determinants of adolescent SRH, which may inform priority areas for improving this construct.
Many studies in the disaster science literature have addressed disasters and mental health. 1,2 Relatively few studies have examined health outcomes after multiple back-to-back disasters. [3][4][5][6] Residents of the US Gulf Coast have had a decade of catastrophic disasters in rapid succession with the 2005 Hurricanes Katrina and Rita and the 2010 BP Deepwater Horizon oil spill. Considered the worst human-made environmental disaster in US history, the BP oil spill has been a significant stressor for coastal residents struggling with hurricane recovery. 7,8 Moreover, the oil spill has threatened the economy, commercial fishing industry, and cultural heritage of those whose livelihood depends on natural renewable resources. [9][10][11] Understanding the long-term health consequences of consecutive catastrophic events is a pressing challenge from both psychological and public health perspectives. For instance, elevations in the prevalence of symptoms of depression and posttraumatic stress among residents of disaster-affected communities highlight the need for coordinated responses among mental health professionals, local officials, and urban planners to promote resilience and prepare for future disasters. 4,6 There is ample evidence of health vulnerabilities among commercial fishers with recent trauma related to the BP oil spill. 4,7,10,12 Cherry and her colleagues have shown that Katrina-related stressors and prior lifetime traumatic events predicted different styles of coping with oil spill stress for commercial fishers, although only avoidant coping was associated with increased risk of depression and post-traumatic stress. 13 Cherry et al.'s first findings suggest that multiple disasters are devastating for coastal residents, particularly residents with economic ties to the commercial fishing industry. 4,13 However, these findings are limited because they did not examine age-related differences in post-disaster health or health-related quality of life.
Prior research has shown that health and well-being are sensitive to demographic variables, including age, gender, education, and income. 14,15 There is a small but growing literature on the impact of disasters on older [16][17][18][19][20][21] and oldest-old adults. 22,23 From an epidemiological perspective, older adults are less likely than younger adults to survive disaster. 24 However, older adults who live through disaster may fare better than their middle-aged and younger counterparts on mental health indicators, possibly due to prior experience or more effective coping strategies born of experience. 15,21 Other evidence has shown that older survivors including nonagenarians and their younger counterparts were comparable across pre-and post-disaster measures of psychosocial and cognitive health, 23 although further research is necessary. The primary objective of the present study was to directly examine adult age differences in health-related quality of life in a sample of disaster survivors from south Louisiana who ranged in age from 18 to 91 years. A second objective was to examine the impact of social engagement on post-disaster physical and mental health outcomes. Many epidemiological studies document the associations among social relations and health, a topic of interest in the scientific community for many years. 25,26 In this study, we conceptualized social engagement as an umbrella construct encompassing 2 social behaviors, namely, charitable work done for others and perceived social support (instrumental, appraisal, and emotional support). Ample evidence has shown that perceived social support 27 and community-level support 28 may lessen post-disaster distress. Cherry and colleagues 4 found that social support was a protective factor for symptoms of depression and post-traumatic stress at least 5 years after Hurricanes Katrina and Rita. 
In the present study, we extend the literature by focusing on Katrina-related disruptions in charitable work done for others and the social support in the years before and after the 2005 hurricanes while controlling for the known influences of group, gender, education, income, objective health, and prior lifetime trauma. On the basis of previous literature, 20 we expected that disruptions in social engagement activities would be inversely associated with health-related quality of life. To summarize, the goals of this study were to (1) examine the impact of multiple disaster exposures on health-related quality of life in younger and older disaster survivors and (2) determine whether social engagement (defined as hurricane-related disruptions in charitable work done for others and social support) is associated with health-related quality of life. Taken together, the anticipated findings extend the literature on the long-term consequences of multiple disasters and may have noteworthy implications for the development of age-sensitive interventions to lessen distress among coastal residents exposed to a decade of disasters. --- METHODS Participants In all, 219 people participated in this study. Sampling, recruitment, and testing are reported in greater detail elsewhere. 4 Noncoastal and former coastal residents were 30 indirectly affected residents and 62 former coastal residents (n = 92) who relocated permanently in 2005 to Baton Rouge, Louisiana (mean age = 59.0 years, SD = 17.6 years; age range, 18-91 years; 35 males, 57 females). There were 63 current coastal residents with catastrophic property damage and storm-related displacement in 2005; they returned to rebuild and had restored their lives in their original coastal communities (mean age = 60.7 years, SD = 15.0 years; age range, 20-83 years; 26 males, 37 females). 
Current coastal fishers were 64 commercial fishers and their family members (mean age = 54.7 years, SD = 15.7 years; age range, 21-90 years; 34 males, 30 females). Fishers were also coastal residents who were displaced for up to 2 years or more but who returned to rebuild after Katrina. Fishers had an additional layer of stress related to the 2010 BP oil spill. That is, fishers are a particularly vulnerable group given their economic dependency on the Gulf of Mexico, which was severely impacted by the oil spill. Fishers could not work in the commercial fishing industry for up to 1 year or more after the spill. 7,11,13 --- Independent Measures All participants had completed a storm impact questionnaire with 4 modules: (1) hurricane exposure and threat to self/ family, (2) disruption and storm-related stressors (including property loss), (3) social support (charitable work done for others, availability of help if needed), and (4) lifetime exposure to potentially traumatic events. 4 In this article, we utilized original data from the last 2 modules, with separate questions that assessed disruptions in charitable work done for others, perceived social support, and prior lifetime trauma, respectively. To be precise, we re-coded the original data from the third module in a binary manner, where 0 = either no difference in or more charitable work after the 2005 hurricanes relative to before and 1 = a decline in charitable work after the hurricanes. Similarly, perceived social support = 0 if there was no difference or more social support and 1 = there was a decline in social support after the storms. Our rationale for the binary re-coding of these data here relative to an earlier report 4 was to capture disruptions in these 2 social behaviors in a parsimonious manner that we could model in logistic regressions. 
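The binary re-coding described above is straightforward to express in code. A minimal sketch, with hypothetical field names and an arbitrary 0-4 frequency scale (the actual instrument wording differs); the coding rule is the one stated in the text: 0 = no change or an increase after the 2005 hurricanes, 1 = a decline relative to before.

```python
# Sketch of the binary re-coding described above. Field names and the
# 0-4 scale are hypothetical; the rule is from the text: 0 = no change
# or more after the 2005 hurricanes, 1 = a decline relative to before.

def code_decline(before: int, after: int) -> int:
    """1 if the behavior declined after the storms, else 0."""
    return 1 if after < before else 0

respondents = [
    {"charity_before": 3, "charity_after": 1, "support_before": 2, "support_after": 2},
    {"charity_before": 2, "charity_after": 2, "support_before": 3, "support_after": 1},
]

coded = [
    {
        "charity_decline": code_decline(r["charity_before"], r["charity_after"]),
        "support_decline": code_decline(r["support_before"], r["support_after"]),
    }
    for r in respondents
]
print(coded)
# [{'charity_decline': 1, 'support_decline': 0}, {'charity_decline': 0, 'support_decline': 1}]
```

The resulting 0/1 indicators are the form required by the logistic regressions described below.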
Data from the third module (social support) included charitable work (eg, volunteer work at your church, synagogue or in the community; neighborly assistance to people in need) and availability of help if needed, which included instrumental support (eg, having someone to help you if you were confined to bed), appraisal support (eg, someone to give good advice about a crisis), and emotional support (eg, someone to love you and make you feel wanted). Data from the fourth module (lifetime trauma) were the sum of 5 events (other natural disaster, serious accident, attacked with a gun/knife/other weapon, attacked without weapon but with intent to kill/ injure, and experienced military combat or war zone), where each event was scored as 0 (no), 1 (yes, but no fear), or 2 (yes, with fear of injury or death during trauma). --- Dependent Measures The Medical Outcomes Study Short Form-36 (SF-36) 29 comprises 8 indicators of general health, including physical functioning, role limitations due to physical health problems, bodily pain, perceptions of general health, vitality, social functioning, role limitations due to emotional health problems, and mental health. The psychometric qualities of the SF-36 include construct validity 30 and high internal consistency reliability for the 8 subscales. 31 Subscales are combined to form composite physical (PCS) and mental (MCS) health component scores that range from 0 (lowest functioning) to 100 (highest functioning). Normative data yield a mean of 50 and a standard deviation of 10 for the PCS and MCS scores. 32 Thus, we dichotomized these scores at 50 for the logistic regressions reported here. --- Statistical Analyses All statistical analyses were carried out by using SAS version 9.4 statistical software (SAS Institute Inc, Cary, NC). 
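The lifetime trauma score described above (module four) is the sum of five event codes, each scored 0, 1, or 2. A minimal sketch, with paraphrased event labels; the actual questionnaire wording is abbreviated here:

```python
# Sketch of the lifetime trauma score described above: the sum of 5
# potentially traumatic events, each coded 0 (no), 1 (yes, but no fear),
# or 2 (yes, with fear of injury or death). Labels are paraphrased.

EVENTS = [
    "other natural disaster",
    "serious accident",
    "attacked with a weapon",
    "attacked without a weapon, intent to kill/injure",
    "military combat or war zone",
]

def lifetime_trauma_score(responses: dict) -> int:
    """Sum of per-event codes; range 0 (none) to 10 (all five, with fear)."""
    for event, code in responses.items():
        assert event in EVENTS and code in (0, 1, 2)
    return sum(responses.values())

example = {
    "other natural disaster": 2,       # experienced, with fear
    "serious accident": 1,             # experienced, no fear
    "military combat or war zone": 0,  # not experienced
}
print(lifetime_trauma_score(example))  # 3
```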
Prior research 2,15,21 has documented the potentially confounding influences of demographic factors (eg, gender, educational attainment, income), physical health, and lifetime traumatic events on post-disaster health and wellness indicators. Therefore, bivariate logistic regression analyses were run on all variables that might be expected to covary with health (not shown). Based on the outcomes of the bivariate analyses and prior literature, 6 variables were selected for inclusion as covariates in multivariate regression models, as follows: group (noncoastal and former coastal residents, current coastal residents, or current coastal fishers), gender, education (high school or less, some college or specialized training, college degree, or Master's/doctorate/professional degree), income (<$2000/month, $2000 to $4000/month, $4000 to $6000/month, or over $6000/month), chronic physical conditions (dichotomized at 2 or more vs. fewer, including high cholesterol, hypertension, diabetes, arthritis, cancer, and heart problems), and lifetime prior trauma. All outcomes were dichotomous. --- Results --- Psychosocial, Demographic, and Health Characteristics Table 1 presents a summary of the psychosocial, demographic, and self-reported health characteristics of the sample. The groups differed in prior lifetime trauma (P = 0.014), so this variable was controlled in the logistic regressions that follow. Gender composition was comparable across groups, but group membership was significantly associated with educational attainment by a chi-square test (P < 0.001). Noncoastal and former coastal residents reported holding a college degree or master's degree more often than expected; more than half of the fishers reported having a high school degree or less. The association between participants' self-reported income level and group fell short of statistical significance by a chi-square test (P = 0.066). The groups did not differ statistically in number of chronic conditions.
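The screening step above, dichotomous outcomes tested with bivariate logistic regressions, can be illustrated with a 2x2 table: for a single binary predictor, the cross-product odds ratio equals the OR a bivariate logistic regression would estimate. The SF-36 cutoff of 50 follows the text, but all scores below are invented for illustration:

```python
# Dichotomize SF-36 composite scores at the normative mean of 50, then
# estimate a bivariate odds ratio from the resulting 2x2 table; for a
# single binary predictor this equals the logistic-regression OR.
# All scores below are INVENTED for illustration.

def high(score: float) -> int:
    """1 if at or above the normative mean of 50, else 0."""
    return 1 if score >= 50 else 0

# (support_decline, MCS score) pairs, invented
data = [(1, 42), (1, 55), (1, 38), (1, 47), (0, 61), (0, 52), (0, 44), (0, 58)]

# 2x2 counts: exposure = decline in support, outcome = MCS >= 50
a = sum(1 for e, s in data if e == 1 and high(s) == 1)   # exposed, high MCS
b = sum(1 for e, s in data if e == 1 and high(s) == 0)   # exposed, low MCS
c = sum(1 for e, s in data if e == 0 and high(s) == 1)   # unexposed, high MCS
d = sum(1 for e, s in data if e == 0 and high(s) == 0)   # unexposed, low MCS

odds_ratio = (a * d) / (b * c)
print((a, b, c, d), round(odds_ratio, 3))  # (1, 3, 3, 1) 0.111

# An OR below 1 maps to the "% less likely" phrasing used in the text:
# e.g., OR = 0.40 corresponds to (1 - 0.40) * 100 = 60% lower odds.
print(round((1 - 0.40) * 100))  # 60
```

In the study the reported ORs came from multivariate models adjusted for the six covariates, so they are not simple cross-products, but the dichotomization and interpretation steps are the same.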
--- Logistic Regression Analyses Odds ratios appear in Table 2 for 2 dimensions of social engagement: changes in charitable work done for others and perceived social support before and after the 2005 storms. Inspection of Table 2 indicates that age was significantly and inversely associated with higher PCS scores (OR = 0.15), which is consistent with the literature on post-disaster physical health in later life. 22 Among the covariates, group (current coastal fishers, OR = 0.34) and objective health status (2 or more chronic conditions, OR = 0.26) were significantly and inversely associated with higher PCS scores. Furthermore, low income (<$2000/month, OR = 0.33, and $2000 to $4000/month, OR = 0.26) was inversely associated with SF-36 MCS scores. With respect to social engagement, only the decrease in perceived social support after the 2005 storms, relative to a typical year before the storms, was inversely associated with higher SF-36 MCS scores (OR = 0.40). This aspect of the data indicates that participants were 60% less likely to have higher than average mental health with each additional point dropped on the social support ratings. Contrary to expectation, the drop in charitable work done for others since the 2005 storms was not associated with physical or mental health. --- DISCUSSION AND CONCLUSIONS Our primary objective in the present study was to examine adult age differences in health-related quality of life after a decade of consecutive disasters. In support of our hypothesis, we found that age was negatively associated with a higher than average SF-36 PCS composite score, which is composed of several subscales that measure perceptions of physical functioning, ability to fulfill roles given physical health problems, bodily pain, and general health. The age effect observed here was obtained after controlling for the known influences of group, gender, education, income, chronic conditions, and prior lifetime trauma.
This finding joins others in the literature documenting lower perceptions of physical health among older persons compared to their younger counterparts. 33 Interestingly, age was not associated with the SF-36 MCS composite score, implying that perceived mental health was no different for younger and older disaster survivors. To address the possibility that the null effect of age in the analysis of SF-36 MCS scores was an artifact of dividing the sample at the median (58 years), we conducted sensitivity analyses in which we first treated age as a continuous variable and then as a dichotomous variable using a higher cutoff age: both analyses yielded the same null effect of age. The most conservative conclusion to be drawn from these data and the follow-up sensitivity analyses is that older persons may not be differentially vulnerable to adverse post-disaster psychological sequelae, although further research would be desirable before firm conclusions are warranted. Participants in this study were community-dwelling adults, and nearly all had prior hurricane and other natural disaster experience. 4 Older adults with prior hurricane experience may possess effective problem-solving skills and coping strategies that could be positively impactful in future disaster preparation and relief planning. 13,34 Frail older adults in the community and those with reduced health status in assisted-living or nursing home facilities may need special assistance for evacuation safety and post-disaster relocation, 35,36 an important consideration for future disaster and emergency preparedness planning. A second objective in this study was to examine the impact of social engagement on post-disaster health-related quality of life. We conceptualized social engagement as hurricane-related disruptions in charitable work done for others and perceived social support after the storms, relative to a typical year before the 2005 storms.
Disruption in charitable work was not significantly associated with physical or mental health. In contrast, the drop in perceived social support was significantly and inversely associated with SF-36 MCS scores. This aspect of the data is compatible with other findings showing that disruptions in social network characteristics have a deleterious effect on older Hurricane Katrina survivors. 20 Our findings, among others in the disaster science literature, imply that perceived social support 27 and community-level support 28 may lessen post-disaster distress. On a broader note, the group variable was treated as a covariate here to allow a clearer assessment of age-related differences in health-related quality of life. However, the inclusion of commercial fishers is a noteworthy strength of the study that deserves further comment. Because commercial fishers have been doubly affected by Katrina-related losses and the more recent economic impact of the BP oil spill disaster, they are at greater risk of adversity. 9,11 The finding that group was a significant predictor of SF-36 PCS scores (commercial fishers, OR = 0.34) is suggestive of self-perceived health vulnerabilities among commercial fishers and their families with stressors related to the experience of natural and technological disasters in rapid succession. Relatively few studies have examined the mental and physical health consequences of natural and technological disaster exposures, although inherent differences between these different types of disaster are noted in the literature.
(Table notes: F value for lifetime total trauma and health-related quality of life; chi-square for all other variables. Lifetime trauma with fear from Cherry et al. 4 SF-36 Health Survey, which includes composite indexes of physical (PCS) and mental (MCS) health. 29 Chronic conditions based on the presence of 6 conditions: high cholesterol, hypertension, diabetes, arthritis, cancer, and heart problems. Significance: P < 0.05; P < 0.01; marginal, P > 0.05 and < 0.10.)
(Running head: Health-Related Quality of Life in Older Coastal Residents After Multiple Disasters. Disaster Medicine and Public Health Preparedness.)
For instance, natural disasters may bring sudden catastrophic damage and loss of life, although uncontrollable events of nature do happen from time to time and are generally not considered controversial. [37][38][39] In contrast, technological disasters involve failure of a human-made system that is presumed to be controllable. 37 For those directly impacted by technological disaster, a lengthy process of litigation may follow, as well as anger, hostility, and blame directed toward an individual or corporate entity at fault. 40,41 Accordingly, technological disasters may have longer-lasting impacts on mental health and well-being for those directly affected, although further research addressing different types of disaster and their long-term effects is necessary. Last, we included chronic conditions and prior lifetime trauma (with fear for life or safety) among the covariates here based on the assumption that survivors' current health status and developmental history may shape post-disaster well-being after multiple consecutive disasters. The number of chronic conditions was predictive of SF-36 PCS scores, as expected. This finding supports the notion that survivors' chronic conditions (an objective health indicator) impact health-related quality of life, a potentially important consideration for future disaster planning with older adults. Further, prior lifetime trauma was a marginally significant predictor (P < 0.10). Other evidence has shown that cumulative adversities, including life stressors and prior lifetime traumatic event exposures, affect the trajectories of mental and physical health in later life.
21,42 Consideration of survivors' current health status and prior lifetime trauma is relevant for the design or implementation of programs in connection with disaster relief efforts. Future research to systematically explore the role that survivors' current health and developmental histories may play in the success of community-wide disaster relief programs would be valuable. At least 4 methodological limitations of this study warrant brief mention. First, the sample size was small and may not be representative of the population. Second, a cross-sectional design was used, so causal inferences are not warranted. Health-related quality of life is likely to be dynamic, varying over time as people adapt to new life circumstances. Future research that includes longitudinal comparisons is needed to measure trajectories of change in health and psychological well-being among older disaster survivors. Third, we did not estimate the impact of variations in the temporal intervals between exposures to the 2005 hurricanes and 2010 BP oil spill and participants' responses on the outcome measures included here. The present results should be considered in light of this methodological limitation. Fourth, we did not include biological indicators of stress responses, a potentially important direction for future research to permit a more precise estimate of the long-term health consequences after a decade of disasters. In closing, the present findings add to a growing literature on the human impacts of natural and technological disaster, bringing attention to older disaster survivors' physical and mental health risks in the years after these events. Interventions to address health challenges and health-related quality of life may be especially critical for commercial fishers in the years after disaster and are a potentially important direction that awaits future research. Correspondence and reprint requests to Katie E.
Cherry, Department of Psychology, Louisiana State University, Baton Rouge, LA 70803-5501 (e-mail: pskatie@lsu.edu). --- About the Authors
Objective: Exposure to multiple disasters, both natural and technological, is associated with extreme stress and long-term consequences for older adults that are not well understood. In this article, we address age differences in health-related quality of life among older disaster survivors exposed to the 2005 Hurricanes Katrina and Rita and the 2010 BP Deepwater Horizon oil spill, and the role played by social engagement in influencing these differences. Methods: Participants were noncoastal residents, current coastal residents, and current coastal fishers who were economically affected by the BP oil spill. Social engagement was estimated on the basis of disruptions in charitable work and social support after the 2005 hurricanes relative to a typical year before the storms. Criterion measures were participants' responses to the SF-36 Health Survey, which includes composite indexes of physical (PCS) and mental (MCS) health. Results: The results of logistic regressions indicated that age was inversely associated with SF-36 PCS scores. A reduction in perceived social support after Hurricane Katrina was also inversely associated with SF-36 MCS scores. Conclusions: These results illuminate risk factors that impact well-being among older adults after multiple disasters. Implications of these data for psychological adjustment after multiple disasters are considered.
INTRODUCTION Criticism of the processes and practices of donor funding and research grant schemes is common among applicants. Complaints often relate to the cost of wasted effort and to concerns about various forms of bias, including insider bias, personal bias, dominant-group bias and bias arising from the incentive to do research that pleases the interests of those dispensing the funds. 1 While different ways to allocate research funding are associated with different issues, global health research funding carries additional challenges due to unequal power dynamics related to the coloniality of power, of knowledge and of being (figure 1). [2][3][4] While the studies are designed to address health challenges in the Global South, the financial power, including decision-making around delivery, remains concentrated in the Global North. 5 6 In May 2021, in an open letter, African scientists called for the decolonisation of global health research funding after a US-led malaria initiative favoured partnering with Western institutions over African institutions. 7 They argued that funders continue to favour Western institutions by dismissing Global South expertise and undermining local agencies. 7 8 While funders deny favouritism, this paper introduces approaches to systematically interrogate the processes and practices that enable and maintain the dominance of Euro-North American-centric ways of doing by presenting some of the unacknowledged barriers between the researchers whose applications are being assessed and the funders and reviewers of those applications. 4 It aims to guide global health financing actors, including member states, United Nations agencies and non-governmental organisations, to identify discrimination and coloniality in their work, adopt a decolonial approach, and recognise the critical need to disrupt power asymmetries and promote ownership, participation and equity. 5 9 10
--- BMJ Global Health
--- SUMMARY BOX
- Global health research funding is affected by unequal power dynamics rooted in global coloniality that manifest in the prioritisation of outsiders' perspectives over local needs.
- This practice paper investigates why Global South actors' research funding applications are less likely to be successful than applications from Global North actors.
- It outlines a three-step decolonial approach to epistemic injustice analysis of research funding processes and practices.
- Findings suggest that epistemic wrongs occur when common biases and ethnocentrism are not mitigated during the review process.
- Global North and Global South funders can address current funding asymmetries by ensuring that pose and gaze are aligned from the design of the call for proposals to the review process.
--- THE ROLE OF RESEARCH FUNDERS IN DECOLONISING GLOBAL HEALTH
Given their central position in the processes involved in knowledge production, research funders have an important role to play in driving efforts towards equitable and decolonial research. 2 11 12 Epistemological colonialism refers to the way in which the expansion of colonial power enabled the expansion of colonial knowledge, the colonial way of understanding and acting in the world, to the detriment of local knowledge systems. 13 14 With the overwhelming majority of funding being located in the Global North, organisations that issue calls for proposals can intentionally or unintentionally disadvantage and constrain Global South applicants through research priorities, language, eligibility criteria, due diligence rules and other expectations that can generate ethical and practical research issues. 15 16 Several studies have shown that global health research practices are currently geared towards the interests of certain social/epistemic groups over others.
11 17-19 This situation translates into less priority being placed on the knowledge and perspectives of certain groups and on what is of pronounced interest and consequence to them, in addition to affording less credibility to the knowledge they hold. [20][21][22][23][24] Decoloniality is a movement focused on untangling the production of knowledge from a primarily Euro-North American-centric lens by challenging the perceived universality of Western knowledge and practices and the superiority of Western institutions and paradigms of research. 4 17 21 25 Applying a decolonial approach to research funding can therefore be defined as a process to acknowledge, understand and address Euro-North American-centric norms and structures inherited from colonialism that continue to act as a barrier to non-Western applicants during calls for proposals. 4 17 20 26 Lack of awareness of and reflexivity on existing structural inequalities directly impacts resource allocation and, ultimately, knowledge production. 12 15 27 Research funders have reported a lower number of successful proposals from Global South applicants despite the burden of global health challenges being situated in the Global South. 4 The resulting asymmetries manifest as both higher access to financial resources for applicants based in the Global North and the generation of inadequate, incomplete or not fit-for-purpose evidence to meet the needs of Global South communities. 4 16 18 28 There is growing pressure to drive substantive changes in funding practices and ensure that the definition of global health interventions is embedded in the broad social, cultural, economic and political contexts that underpin the issues being addressed. [29][30][31] As an example, in the humanitarian sector, the Inter-Agency Standing Committee's 2016 Grand Bargain pledged to 'get more means into the hands of people in need and improve the effectiveness and efficiency of humanitarian action'.
29 This paper outlines a decolonial approach to epistemic injustice analysis of research funding processes and practices. The overall aim is to guide practitioners towards greater equity in research funding and partnership and to inform the development of transformational processes. The article is divided into four sections. After defining the key principles of epistemic injustice, the author will show how the design of a research call for proposals can favour foreign/dominant epistemic groups over local groups. Then, the author will discuss the influence of pose and gaze during a review process using the epistemic injustice framework created by Bhakuni and Abimbola. 18 Finally, the author will introduce different approaches to address current asymmetries in the research funding architecture.
--- UNDERSTANDING THE EPISTEMIC INJUSTICE OF COLONIALISM IN GLOBAL HEALTH RESEARCH FUNDING
Epistemology is a branch of philosophy that focuses on the nature, origin and scope of knowledge. It is concerned with the way in which knowledge is defined and validated. The intended receiver of the knowledge produced (ie, gaze or audience) and the standpoint from which knowledge is produced (ie, pose or positionality) directly impact the way knowledge is understood and create opportunities for epistemic injustices. 16 20 32 An epistemic wrong occurs when knowledge produced by a group is misinterpreted or undervalued by other epistemic groups; its manifestations are summarised in figure 2. 18 This paper uses Bhakuni and Abimbola's epistemic injustice framework and the concept of global coloniality, centred around the coloniality of power, knowledge and being, to develop a three-step approach for investigating the presence of epistemic wrongs inherited from colonialism in global health research funding schemes. 5
--- STEP 1: COLONIALITY OF POWER AND ANALYSING THE AIM OF A CALL FOR RESEARCH PROPOSAL
Practising decoloniality in research funding starts at the definition of the grant objectives.
Analysing the aim of knowledge production systems can reveal our expected audience and our positionality, and inform why some groups remain mostly represented as bystanders in knowledge production. 4 16 19 33 When a call for proposals bears the implicit assumption that the primary purpose of knowledge production is to be used elsewhere, it highlights an expectation that knowledge produced in the grants to be funded must be universal and easily transferable. 16 While public/private funders in Euro-North American settings may not insist that knowledge produced to answer national public health concerns be transferable, generalisable to populations outside of the country and publishable in a peer-reviewed journal, those expectations are often maintained as an essential requirement to fund research conducted in the Global South. 16 The notion that contextualised knowledge is of limited value because it would not have impact in other settings is a common fallacy that stems from Global North institutions' distance from the issues being addressed and from unchallenged colonial legacies that continue to present non-Western communities as a singular group/context. 16 34 The resulting academic literature implies that large or multisite studies are inherently more valuable than small or single-site studies, which leads to more support being given to knowledge producers and systems that can claim to be universal. 16 35 In reality, health systems challenges are complex and require deeply local perspectives to be responsive to local systems and realities. 36 Consequently, what is 'robust' for generating decontextualised, generalisable knowledge may not be 'robust' enough for generating contextualised and necessarily local knowledge.
16 28 When the objective of a call for proposals is mostly focused on 'addressing gaps in the literature' and finding 'universal truths', it can clash with Global South researchers' focus on making sense of, and altering, the social structures that disadvantage communities in their context. Such a call for proposals therefore ultimately advantages Global North applicants. 3 16
--- STEP 2: COLONIALITY OF KNOWLEDGE AND ACKNOWLEDGING THE INFLUENCE OF GAZE AND POSE DURING THE REVIEW PROCESS
After submission, a funding committee makes an informed decision on the outcome of proposals using technical reviewers' feedback on the potential impact of the research findings, the scientific robustness, the feasibility, the value for money and, in certain cases, the strength of a research consortium. These criteria and the associated comments directly influence the committee's decisions, even though they are not wholly responsible for the outcome. While these criteria are often considered neutral and universal, the way funders and reviewers define them, and the background of the reviewers, can have an impact on the review process, especially when funders and reviewers treat Euro-North American-centric ways of doing, including structures, methods, processes and practices, as the only legitimate and scientific ways of producing knowledge and of knowing and understanding the world. 3 5 15 The author will use Bhakuni and Abimbola's framework (figure 1) to present how the extent to which the review process accounts for and mitigates epistemological differences within the review criteria can systematically favour Global North (ie, foreign/dominant) over Global South applicants. Potential lines of analysis are organised into two categories, testimonial injustice and interpretive injustice, drawing on examples from commonly known, discussed and anticipated reviewer comments.
18 Testimonial injustice: credibility deficit and excess
Testimonial injustice is defined as the act of prejudicially misrepresenting a knower's meanings or contribution to knowledge production. It leads to the undervaluation of one's status (eg, credibility deficit) and the overvaluation of others (eg, credibility excess). 18 Global South groups have relatively few interpretive tools in circulation that are used or recognised as equal to those designed by foreign/dominant (Global North) groups, which hold a monopoly on both knowledge production and the development of interpretive tools. 4 18 21 25 26 37 This situation directly impacts the credibility of Global South epistemic groups if reviewers are not familiar with their interpretive tools and are physically distant from their context. 18 37 The following examples show how testimonial injustice during a review process can discount the credibility of Global South applicants as holders and producers of knowledge while increasing the credibility of Global North applicants. When the need to produce knowledge is based on what is globally known or not known, rather than on what is locally known or not known, a credibility deficit is imposed on local applicants. It occurs when Global South applicants are encouraged to justify a study or publication based on a gap in the literature, as if the literature could be considered the sum of all available knowledge. 16 It implies a presumption that knowledge on issues about which people have day-to-day experience does not exist because it is not in the literature. 38 When the value of a proposal to generate local knowledge is determined using what is known or deemed valuable elsewhere, then local knowledge and needs are side-lined, and a credibility deficit is imposed on local knowers. It occurs when the definition of novel knowledge is applied at the global level rather than in the specific context.
It seems to imply that local expertise is only valued in comparison to evidence from elsewhere, even though knowledge that is relevant in a given context may not be deemed 'new' or of value elsewhere. It occurs when the assumptions used in the review process (eg, budgeting or the structure of research groups) do not match local practices in Global South contexts and are based on common Global North structures and processes. 39 Testimonial injustice reduces the success rate of Global South applicants. Lack of acknowledgement of what is often described as an 'expat bias' will continue to systematically impact Global South applicants' success in calls for proposals. [40][41][42][43]
Interpretive injustice: interpretive marginalisation
Interpretive injustice is a form of epistemic injustice that prevents certain groups from being able to efficiently communicate knowledge to other, perhaps more powerful, groups. 44 Interpretive marginalisation occurs when foreign/dominant groups prejudicially impose, or only recognise, their own interpretive tools as valid, thus preventing other groups from sharing their experience of the world. 18 When it manifests, it contributes to the illusion that prejudicially low credibility judgements are epistemically justified. 23 In the absence of available legitimised collective interpretive tools, Global North groups often assume that their own approach to knowledge production and sense-making is universal. 18 Consequently, the experiences of Global South groups can be misunderstood because they do not fit concepts known to Global North groups. 26 The following examples show how interpretive injustice during a review process can discount the credibility of Global South applicants as holders and producers of knowledge while increasing the credibility of Global North applicants.
When the ability of local applicants to interpret their own reality for their own people is taken away, interpretive marginalisation is imposed on them. It occurs when decontextualised findings and needs are deemed more desirable in the selection criteria. It demands that Global South applicants' proposals be aligned with the needs of a Global North audience and signals that only knowledge that claims to be universal is considered valuable. When foreign/dominant interpretive tools are expected to be used, or are imposed, it leads to interpretive marginalisation. It occurs when review criteria assume that Global South applicants would or should justify conducting a study in their own setting in the same way an applicant might justify conducting a study in a foreign setting, for example, by using 'structured research' or information available in the literature. In practice, the kind of insight available to Global South applicants, which then influences how they frame and justify their work, is inherently different. Local interpretive tools, ways of making sense of reality in data analysis, and ways of deciding whether a study is necessary or an intervention is appropriate are not allowed to flourish, risk remaining marginalised and, at worst, risk disappearing. 21 23 When a foreign/dominant group places its understanding of local realities above local groups' perspectives, interpretive marginalisation is imposed on local actors. Local practices and realities shape the way a project is proposed. The physical proximity or distance of a reviewer can affect the reviewer's interpretation of what is being proposed. 16 28 Global South applicants see the complexities of their setting and are compelled to engage with it given what they know and how they make sense of it. 23 25 Global North applicants, by contrast, see from afar and are prone to simplify complex realities in ways that Global South applicants tend not to.
16 26 Global South applicants are more likely to choose methods and approaches that allow them to make sense of the full complexity of their setting, system or reality. 17 28 Interpretative marginalisation reduces the success rate of Global South applicants and, when they are denied the opportunity to use approaches that challenge Euro-North American-centric research paradigms, can lead to epistemic violence and epistemicide (ie, the erasure of marginalised knowledge systems) instead of the stated social transformation. 5 21
--- STEP 3: COLONIALITY OF BEING AND ADDRESSING ASYMMETRIES IN GLOBAL HEALTH RESEARCH FUNDING
This analysis of calls for proposals and review processes from a decolonial perspective has highlighted the ways in which the project definition and the pose and gaze of the reviewers can legitimise the inferiority of Global South applicants and influence their success rate compared with Global North applicants, due to entrenched assumptions and expectations in the field. 18 20 In reality, despite calls for localisation, stated commitments to 'decolonise' research funding and the added logistical constraints created by COVID-19-related restrictions, research funders' expectations seem to remain strongly centred around Euro-North American-centric processes, structures, practices and norms. 21 For example, there is a tension in the way lived experience and contextual understanding are valued relative to training and institutional affiliation. Consequently, applicants from, or who trained in, the Global North are often implicitly afforded credibility excess due to their proximity to Euro-North American practices.
18 27 Meanwhile, one can argue that Global South applicants' time/effort ratio during proposal writing is systematically underestimated, as it is unclear whether funders take cognizance of the numerous logistical constraints, including poor internet connection, unpaid labour, limited electricity and limited access to academic journals and libraries. 45 46 As long as commonly used evaluation criteria remain perceived as neutral, their colonial epistemic foundations will continue to legitimise existing inequalities and hinder the evolution of non-Western epistemic groups, with the risk of epistemicides. 45 47-49
--- Table 1: Key questions and considerations for funders
Oral histories or other forms of local knowledge may not be 'citable', but if the information does not exist in academic literature, that does not mean it does not exist or is weaker. When the bulk of academic knowledge is written by and from the perspective of the Global North, the exclusion of that 'knowledge' can be intentional and reflect a different perspective and understanding of the local challenges. Funders may include funding for rapid scoping research to support the generation of evidence, create tools or invite Global South actors to create tools to formally introduce their knowledge, or acknowledge the experience of local actors rather than assume that what is not in Western academic literature does not exist or is not valid. A statement of why Western academic research was not used, or a rationale for the inclusion of only Western academic evidence, may also be included.
How is the rationale for the study being assessed? Recognising who is driving the need for the study is key. When objectives are defined from the Global North with little input from local communities, the problem definition inherently favours Global North applicants. Funders should aim to align their study rationale with national or regional research priorities (eg, Africa Centres for Disease Control and Prevention, national public health institutes, local research institutes) over international agendas.
Feasibility: Does the analysis of the proposal consider the dynamic nature of Global South contexts and institutional differences? Time and resources to write proposals are often scarce, and logistical constraints, including lower access to academic journals, can negatively impact the final output. Funders should consider having different submission timelines for Global North and Global South applicants and offering temporary access to key academic journals during the application process, for equity reasons, to support Global South applicants in their academic evidence assessments.
How is the ability of the local team to deliver on the proposed activities being evaluated? A deeper understanding of local context and needs often leads to more complex proposals. Limited institutional funding in the Global South can act as an incentive for local knowers to conduct multiple activities in one research proposal, rather than being an indicator of unrealistic planning. Funders should acknowledge these differences between Global North and Global South applicants, building from the experience of Global North applications in the Global North. The presence of Global South experts in the design of a call for proposals could allow funders to anticipate these situations.
--- Value for money
--- Redefining evaluation criteria towards knowledge equity
Rather than hampering the production of contextualised knowledge, existing inequalities should be used as opportunities to design innovative and equitable processes to reduce the funding access gap. 45 46 50 To do so, research funders need to move away from unidimensional diversity and equity criteria that are often focused on geography alone (eg, regional funding panels) and instead systematically account for common biases and ethnocentric tendencies during the proposal review process of international grant schemes.
51 Table 1 presents key questions and considerations that highlight how knowledge equity objectives can be attained by adjusting for epistemological colonisation (eg, absence of collectively legitimised tools), power dynamics (eg, dominance and leadership in research partnerships and authorship order), positionality (eg, diaspora vs 'local'; Global North vs Global South diploma) and logistical barriers (eg, lack of publications vs systematic barriers of access to academic journals through exclusionist fee policies). 22 28 35 47 49 51 For instance, lower access to institutional funding in the Global South might result in less experience managing large grants. While this is often an important criterion, it can be addressed by providing grant management training or inviting Global South applicants to include such training in the proposal. Similarly, prioritising lived experience requires research funders to place greater value on often-unpaid community work experience and to balance potential technical skills gaps by inviting Global South applicants to include national/regional/international training(s) that would be complementary to their project and their professional development. 45 50 Funders should consider asking reviewers to attach a reflexivity statement to their comments to highlight how they accounted for their gaze and positionality when reviewing the applications. 52 Further, including local civil society organisations in the definition of the call for proposals and the review process should also be considered, to ensure that applications meet the needs of the communities.
--- Aligning pose and gaze towards local knowers and fostering knowledge plurality
Epistemic injustices are also facilitated by the current disconnect between the pose and gaze of funders, reviewers and local researchers (figure 3).
18 As demonstrated by this analysis, in the absence of clear commitments to epistemic diversity (ie, the ability to make sense of the world using diverse forms of knowledge and diverse methods of knowledge creation and dissemination), actors' positionality and interests can influence the outcome of a proposal review. 4 19 22 26 37 To reduce epistemic wrongs, actors' pose and gaze should be perfectly aligned. When actors from Global North groups, or applicants from the Global South, legitimise a single knowledge framework, they impose epistemic injustice on other groups while also omitting to consider the possibility that Euro-North American-centric interpretive tools can be rigid, imperfect and inappropriate, especially regarding the experiences of those in the Global South. 4 16 19 20 26 It also raises questions around the ethics of analysing work conducted in the Global South with a dominant Euro-North American-centric framework rather than prioritising the voices of experts who use local approaches or have lived experience. 4 Supporting the production of the knowledge needed to accurately understand Global South issues, craft appropriate interventions or design projects that are responsive to Global South applicants' culture, context and needs requires research funders and Global South applicants to show clear commitments to the inclusion of diverse perspectives, accounts and ways of thinking and doing through practical transformational change. 4 6 21 It starts with being transparent about the grant objectives by clearly defining the preferred epistemic frameworks and the intended audience and receiver of the knowledge to be produced. In practice, research funders should increase opportunities for Global South applicants to develop alternative interpretive tools by allowing them to adapt Euro-North American-centric tools to their context, use existing but marginalised tools, or develop and disseminate novel contextualised methods.
To reach equity goals, funding scheme guidelines and reviewers need to acknowledge past and ongoing asymmetries and promote the coexistence of different research paradigms that reflect local needs rather than outsiders' perspectives. 4 15 17 20
--- CONCLUSION
While proposal definition and reviewing processes may differ across funders, the primary objective of this practice paper was to challenge Euro-North American-centric perspectives and provide guidance for addressing the impact of global coloniality on epistemic diversity. The current research funding architecture is skewed towards Global North applicants, and limited analyses (including of primary data in the form of review reports) have been conducted to better understand this phenomenon. Redressing current asymmetries will require deliberate analysis to identify existing unjust defaults and assumptions. The lack of understanding of the ways in which Euro-North American-centric epistemic domination hinders the success of Global South epistemic groups' applications legitimises funding asymmetries and the exclusion of local voices from addressing local challenges. 6 8 10 49 This article presents a decolonial approach to analysing global health research funding processes and practices. It should inform novel perspectives on funding prioritisation that enable funders to move from thinking about how to make international funding more accessible to Global South actors to exploring ideas around the development of appropriate, decentralised and locally led funding mechanisms that increase the success rate of Global South applicants in the future. These reflections should also be taken into consideration by Global South funders. Twitter Emilie S Koum Besson @emilie_skb
Epistemic injustice is a growing area of study for researchers and practitioners working in the field of global health.
Theoretical development and empirical research on epistemic injustice are crucial for providing more nuanced understandings of the mechanisms and structures leading to the exclusion of local and marginalised groups in research and other knowledge practices. Explicit analysis of the potential role of epistemic injustice in policies and practices is currently limited with the absence of methodological starting points. This paper aims to fill this gap in the literature by providing a guide for individuals involved in the design and review of funding schemes wishing to conduct epistemic injustice analysis of their processes using a decolonial lens. Placing contemporary concerns in a wider historical, political and social context and building from the intertwined issues of coloniality of power, coloniality of knowledge and coloniality of being that systematically exclude non-Western epistemic groups, this practice paper presents a three-step decolonial approach for understanding the role and impact of epistemic injustices in global health research funding. It starts with an understanding of how power operates in setting the aim of a call for research proposals. Then, the influence of pose and gaze in the review process is analysed to highlight the presence of epistemological colonisation before discussing methods to address the current funding asymmetries by supporting new ways of being and doing focused on knowledge plurality. Expanding research on how epistemic wrongs manifest in global health funding practices will generate key insights needed to address underlying drivers of inequities within global health project conception and delivery. |
Introduction Mathematical modeling of infectious processes is not recent, having been used since the late 17th century to understand the dynamics of contagion and to support control and mitigation strategies. However, even today, the frequency of social contacts is not usually present in these models. This is largely explained by the lack of adequate estimates that reflect the reality of contacts in each population under study. Recent advances in epidemiological data collection have shown that the predictive and explanatory power of models is enhanced through the quantification of social contacts (PREM et al., 2021). In epidemiological studies, it is common to use systems of differential equations, called compartmental models, which include the SIR model (susceptible, infected and recovered) (KEELING; ROHANI, 2008). The construction of these traditional epidemiological models requires the estimation of certain parameters of the system to adequately capture the dynamics of a disease in a population. One of these parameters is the daily contagion rate (β), which indicates how many secondary cases an infectious individual generates, per day, in a susceptible population and measures the rate of interaction between the susceptible and infected compartments of the model. Usually, when estimating this rate, the contexts in which the contacts take place are not measured,1 under the assumption that the parameter is the same, or converges towards the same value, in all population subgroups or social contexts in which contacts occur. In this regard, it is possible to treat groups of different ages as compartments in a SIR-like model, modifying the estimated daily contagion rate to construct an age-specific rate as the product between the rate of social contacts and the rate of contagion of the pathogen, given the occurrence of contact.
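The age-specific rate described above, the product of the contact rate and the probability of contagion given a contact, can be sketched as follows. The per-contact transmission probability q and the contact matrix values are illustrative assumptions, not estimates from this study.

```python
import numpy as np

# Hypothetical per-contact transmission probability of the pathogen (assumed)
q = 0.05

# Hypothetical matrix of daily social contact rates tau[i][j] between two
# age groups (rows: ego's group, columns: alter's group); illustrative only
tau = np.array([[3.0, 1.0],
                [1.0, 2.0]])

# Age-specific daily contagion rates: contact rate times the probability of
# contagion given that a contact occurred
beta = q * tau

print(beta)  # each entry is the expected daily number of secondary cases
```

The same construction generalizes to any number of age groups; only the dimensions of `tau` change.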
Some examples of the use of contact rates in compartmental epidemiological models appear in the work of Chin et al. (2021) and Prem et al. (2021). The contact rate, understood as the average number of daily contacts of a population segment, is known to be a function of both individual attributes and the environment in which the contacts are made. In a pandemic scenario, the promotion of non-pharmacological actions, such as restrictions on the functioning of economic activities to promote greater social distancing, requires an understanding of how contacts develop, a sine qua non condition for the construction of reliable epidemiological scenarios and the evaluation of public policies. The aim of this article is to present the significance of contact rates as a vital instrumental measure for epidemiological analysis. To this end, we present the general aspects of designing a contact data collection survey and then demonstrate how epidemiologically relevant contact rates can be used to improve traditional epidemiological models. Furthermore, we present an analysis of the socio-demographic determinants of epidemiologically relevant contact rates. These were obtained through field research carried out in a sector of the city of Belo Horizonte (Brazil) in June 2021. The contact rates collected also made it possible to simulate the dynamics of COVID-19 through the parameterization of a SIR model. Firstly, the paper describes the methodological field of epidemiological social surveys that aim to collect social contact rates, highlighting the main approaches with their challenges and limitations. Secondly, it presents the application of this methodology in the universe of a slum community in Belo Horizonte, Brazil, including sample size, strategy for data collection and post-stratification procedures. Thirdly, we present the statistics describing the social contact rates collected.
Fourthly, aiming to test the heuristic power of social contact rates, we include a comparison of two SIR models, one informed with parameters that consider the social contact rates observed and another using social contact rates projected for Brazil in international studies. Fifthly, via a log-lin model, we explore the social determinants of the social contact rate. From an epidemiological perspective, a proxy variable, density of cliques,2 was constructed to operationalize the social contact rate as a dependent variable. Finally, as practical recommendations, we present the advantages of informing SIR models with social contact rates, as well as the identification of some relevant determinants of social contacts. --- Development of survey methodology on social contacts The first large-scale quantitative survey on contact patterns relevant to respiratory and close-contact infections was carried out in 2008. The study Improving Public Health Policy in Europe through the Modeling and Economic Evaluation of Interventions for the Control of Infectious Diseases (POLYMOD) (MOSSONG et al., 2008) involved 7,290 people from eight European countries (Belgium, Germany, Finland, Great Britain, Italy, Luxembourg, Netherlands and Poland) and used an epidemiological diary to record participants' contacts in one day, providing data on different age groups and different interaction environments, such as school, home and work, among others. Other smaller-scale research has explored patterns of social interaction to understand the transmission of infectious diseases. One example was conducted in the province of San Marcos, in the northern highlands of Peru, involving rural communities (GRIJALVA et al., 2015). Another study, called BBC Pandemic (KLEPAC et al., 2020), was an innovative research experiment conducted through the Pandemic app, specially created to identify the human networks and behaviors that spread infectious diseases.
Their data were used by researchers at the University of Cambridge and the London School of Hygiene and Tropical Medicine to build a map of social interactions in the UK. Recently, Chin et al. (2021) studied the contribution of age groups to the dynamics of SARS-CoV-2 in the United States. To this end, they performed a longitudinal study with six waves of data from the Berkeley Interpersonal Contact Survey (BICS). They worked with social contact information collected between March 2020 and February 2021 in six metropolitan areas in the United States. Other studies regarding the action of subjects during a pandemic involve the measurement of mobility during COVID-19, related to the dynamics of subjects over spatial structures, whereas in this case we are interested in the patterns of contacts, especially the frequency of contacts between individuals belonging to different age groups. The structures of the patterns of mobility describe the dynamics of the epidemic on a mesoscopic scale, whereas with the contact process we see the phenomenon at the microscopic scale. The former approach has been preferred in several studies, one suggested by the anonymous referee (OLIVEIRA et al., 2021), which uses Google COVID-19 Community Mobility data. All these studies have contributed to filling the gap in the production of empirical data about social contacts relevant for modeling the dynamics of infectious disease transmission. In the same direction, the present work, a result of the research "Covid-19: epidemiological model that incorporates structures of social contacts" (funded by the Ministry of Health of Brazil, public notice MCTIC/CNPq/FNDCT/MS/SCTIE/Decit no 07/2020), seeks to advance the field of social contact research, and includes the main recommendations pointed out by Hoang et al.
(2019) regarding sampling, instruments and data collection methods, especially with the elaboration of a complex sample design, taking into account, on the one hand, the relevant socio-demographic parameters and, on the other hand, the structural parameters of an unobserved network of contacts. --- Sampling design and data collection --- Data collection For the estimation of social contacts, we implemented a survey in an impoverished sector of Belo Horizonte, capital of one of the largest federated states of Brazil. Located in the center-south region of the city, the Aglomerado da Serra comprises a contiguous space of eight villages, located on the slopes of the Serra do Curral, an old urban occupation with a complex environmental degradation situation. It is an area occupied on the fringes of public planning, with a low-income population, in which the public power recognizes the need to organize the occupation through housing programs, urbanization interventions and land regularization actions. This population was chosen with the aim of observing the rates of social contact in a high-vulnerability area. In this region, with a high population density, people share reduced housing units, rendering social distancing impractical. In addition, many houses present characteristics of precariousness and insalubrity, such as lack of adequate ventilation, poor sunlight and excess humidity. These factors increase housing insalubrity and reflect on people's health, especially children and the elderly (SILVEIRA, 2015). The sample size calculation of the survey considered socio-demographic and network structure parameters, which, due to network sampling, resulted in a larger sample size than would be necessary for a conventional survey. This presented several challenges. (A study on social contact rates relevant for the spread of infectious diseases in a Brazilian slum. Higgins, S.S.S. et al. R. bras. Est. Pop., v. 40, 1-20, e0241, 2023.) Firstly, the unavailability of an updated demographic census (IBGE), as the last one available dates from 2010. According to this census, the total population of Aglomerado da Serra was 38,405 inhabitants living in 10,900 households. Secondly, the estimation of network parameters, for which we assume as a sampling target a large, complete, but unobserved, network for this universe of people. To calculate the minimum number of nodes needed to obtain credible estimates for the network, we simulated 500 networks of 5,000 and 10,000 nodes, using a statistical power of 80% and a significance level of 5%, which allowed us to estimate the average number of cliques (groups of people who are all in contact at the same time) of sizes 2, 3, 4 and 5, giving us a sample size of at least 1,000 nodes. We used the ergm and graphlets packages for R, both part of R's statistical network analysis libraries (YAVEROGLU et al., 2014; GJOKA et al., 2014; HUNTER et al., 2008). Given the total number of individuals in the sample and the plausible frequency of cliques, we designed a three-step stratified sample. First, the sample was stratified following a proportional estimate of households according to the neighborhood where they are located and the number of residents. In a second stage, the households were randomly selected in each neighborhood, according to a previously established systematic agenda. In the third stage, a respondent was drawn, at random, from each household. This provided a unit of information collection, which we call the observation unit.
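The network-simulation step above can be sketched in miniature: generate many random networks and measure how variable the counts of small cliques are across them. The study used R's ergm and graphlets packages on 500 networks of 5,000 and 10,000 nodes; the much smaller sizes and the edge probability below are illustrative assumptions so the sketch runs quickly.

```python
import random
from itertools import combinations
import statistics

random.seed(1)

def simulate(n_nodes=60, p_edge=0.08, n_networks=40):
    """Count cliques of sizes 2 and 3 across simulated random networks."""
    pair_counts, triangle_counts = [], []
    for _ in range(n_networks):
        # Erdos-Renyi random graph: each pair of nodes is linked with p_edge
        edges = {frozenset(e) for e in combinations(range(n_nodes), 2)
                 if random.random() < p_edge}
        # cliques of size 2 are simply the edges
        pair_counts.append(len(edges))
        # cliques of size 3: triples whose three pairs are all edges
        tri = sum(1 for a, b, c in combinations(range(n_nodes), 3)
                  if {frozenset((a, b)), frozenset((a, c)),
                      frozenset((b, c))} <= edges)
        triangle_counts.append(tri)
    return (statistics.mean(pair_counts), statistics.stdev(pair_counts),
            statistics.mean(triangle_counts))

mean_pairs, sd_pairs, mean_triangles = simulate()
print(mean_pairs, sd_pairs, mean_triangles)
```

The spread of the counts across replicates is what informs the minimum number of nodes needed for credible clique estimates.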
It consists of the individual drawn within the household, as well as several units of analysis that feed the explanatory models, some at the individual level, such as the contact rates aggregated by age groups, and others at the collective level, such as the size of the household, measured in number of inhabitants, and the social circles where relationships are held, inside and outside the home. Based on a confidence level of 95% and a sampling error of 2%, we determined a first sample of 1,000 households to apply the instrument. To correct for the demographic census lag and the availability bias at the time of collection, we returned to the field and collected a second sample, following the stratified design of the first survey, with 450 households. With this, we subsequently calibrated the data taking into account two basic variables: sex and age group. To obtain the values of the standard deviation and confidence intervals for the estimators, a simple resampling procedure of 100 copies of the original database was applied, with replacement, plus a column with the calibrated weights for each resampled observation (CHEN; SHEN, 2019). An epidemiological diary (HOANG et al., 2019), adapted from the instrument used by POLYMOD in the United Kingdom (MOSSONG et al., 2008), was applied. Due to application time and costs, the interview method was chosen. A self-administered questionnaire kept over several days of a week, as applied in Europe, would require a great follow-up effort, given the social conditions of the target population, while potentially compromising the response rate necessary to attain the optimal sample size. The questionnaire was designed in three blocks of questions. The first identified the number of people living in the house and their sex.
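The resampling procedure described above can be sketched as a weighted bootstrap: 100 copies of the database drawn with replacement, each observation carrying its calibrated weight. The contact counts and weights below are synthetic stand-ins, not the survey's data.

```python
import random
import statistics

random.seed(42)

# Synthetic stand-ins for the survey data: reported daily contacts and
# calibrated post-stratification weights (both assumed for illustration)
contacts = [random.randint(0, 10) for _ in range(400)]
weights = [random.uniform(0.5, 2.0) for _ in range(400)]

def weighted_mean(values, w):
    return sum(v * wi for v, wi in zip(values, w)) / sum(w)

# 100 bootstrap copies of the database, resampled with replacement,
# each observation keeping its calibrated weight
replicates = []
for _ in range(100):
    idx = [random.randrange(len(contacts)) for _ in range(len(contacts))]
    replicates.append(weighted_mean([contacts[i] for i in idx],
                                    [weights[i] for i in idx]))

point = weighted_mean(contacts, weights)
se = statistics.stdev(replicates)        # bootstrap standard error
ci = (point - 1.96 * se, point + 1.96 * se)
print(point, se, ci)
```

The standard deviation of the replicate estimates plays the role of the estimator's standard error, from which the confidence interval follows.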
With this information the respondent was drawn by lot, and it was then decided whether the drawn respondent was qualified to give the information, asking if in the last twenty-four hours he/she had talked face to face or had any physical contact (i.e. handshake, hugs, kisses, contact while doing sports) with one or more people at the same time. The second block asked about socio-demographic characteristics of respondents (age, sex, race, work condition, income and educational level). The third inquired, in detail, about the social circle where the contacts took place - in house, outside, neighborhood, church, work, school -, the number of people grouped together - which for the purpose of the analysis we call cliques -, the sex and age of the alteri, the duration and the frequency, among other characteristics. 3 This research considered as qualified respondents adults aged 18 or over, as well as children and young people who gave their consent under the guidance of a responsible adult, regarding the consent that the epidemiological diary assumes. The study was approved by the Research Ethics Committee of the Federal University of Minas Gerais (UFMG). --- Results --- Socio-demographic data It was found that the adult population predominates in Aglomerado da Serra, in the range of 20 to 59 years (61.6%), with a significant presence of children and adolescents from 0 to 14 years (18.7%) and the elderly (12.1%), who comprise the so-called dependent population. The presence of young people between 15 and 19 years old (7.61%) is not very expressive. As expected for the Brazilian case, the vast majority of the population declared to identify as black and brown (80.8%); this is due to the fact that poverty mainly affects these population segments (OSÓRIO, 2019).
In Aglomerado da Serra, the data indicate a greater presence of households with up to 3 residents (65.6% of households), within the limit of the average household size projected for Brazil in 2020 - 3.0 residents (GIVISIEZ, 2018). Another 19.8% of households have up to 4 people and only 14.7% have more than 4 people, confirming the trend towards smaller households, as a result of demographic changes that have taken place in recent decades. Household income in Aglomerado da Serra is low: 76.0% of households have a monthly family income of up to 2 minimum wages, and of these, almost 39.0% live with up to 1 minimum wage. However, only 36.9% of households received emergency aid in 2021, confirming the limited scope of social protection measures to reduce impact in times of health crisis. --- Social contact rates and their characteristics From this sample, it was possible to identify the social contact rate by age group, as well as the distribution of these contacts by meeting place (or social circle) and their duration. It was found that children and adolescents, from 0 to 14 years old, reported a higher average number of contacts compared to other age groups. Young people and adults between 20 and 34 years old were the second group to report more contacts, whose average also exceeds that of the other age groups. The age group of 60 years and over showed the lowest rate. As for contacts through social circles, 62.3% of the reported contacts took place at home, followed by contacts made in the neighborhood (19.3%). Contacts in other circles were greatly reduced. Only 0.82% of contacts were made in school environments.
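Computing the contact rate by age group amounts to averaging reported contacts within each group. A minimal sketch with toy diary records (illustrative values, not the survey's data):

```python
from collections import defaultdict

# Toy diary records (age_group, reported_contacts); illustrative only
records = [
    ("0-14", 5), ("0-14", 7), ("15-19", 3),
    ("20-34", 6), ("20-34", 4), ("35-59", 3), ("60+", 1), ("60+", 2),
]

# Average number of reported contacts per age group (the contact rate,
# when the contact process is treated as a Poisson process)
totals = defaultdict(lambda: [0, 0])   # group -> [sum of contacts, count]
for group, n in records:
    totals[group][0] += n
    totals[group][1] += 1

rates = {g: s / c for g, (s, c) in totals.items()}
print(rates)
```

With real diary data the same aggregation would be run on the calibrated (weighted) records.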
The data are consistent with the period in which the research was conducted (end of the 1st semester of 2021), when Belo Horizonte had measures restricting circulation and banning school attendance. The data also showed differences in the duration of each contact by social circle. Contacts were reported to last longer (more than 4 hours) at home (70.7%), at work (50.3%) and in leisure (46.5%). Shorter contacts (less than 5 min and between 6 and 15 min) occurred mainly in commerce/services (38.3%) and the neighborhood (33.8%). It is important to note that in addition to the average number of reported contacts, location and duration, the dynamics of contagion also depend on the interaction between different age groups, more specifically, on knowing which age groups interact with each other, which corresponds to the intra- and inter-group contact and transmission rates. When we see the contact process as a Poisson process, the contact rate of the process is estimated by the average number of contacts. The contact rates, or average numbers of contacts, between age groups can be seen in Figure 1. --- Figure 1. Source: Research data. Note: The matrix is read in the direction of the line to the column. For example, if we want to know the observed rate of contacts between people aged 0-14 and those aged 35-59, we look for the respective vertex, which indicates 0.99 contacts per person per day. In other words, on average, each child or adolescent reported one contact with an adult in the age group in question. We then looked at the line for the 35 to 59 age group to see how many contacts they indicated with a child or adolescent, and found that it was 0.5 on average. The rates do not match because, within the sample, the alteri (indicated) do not necessarily coincide with those indicating (ego). This makes it necessary to symmetrize the matrix, using the arithmetic mean, to include it in the SIR models.
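The symmetrization step can be sketched directly: average the matrix with its transpose. Only the 0.99/0.50 pair comes from the text; the diagonal entries below are illustrative assumptions.

```python
import numpy as np

# Example from the text: group A reports 0.99 contacts/day with group B,
# while group B reports only 0.50 with group A.  Symmetrizing with the
# arithmetic mean reconciles the two before use in the SIR model.
M = np.array([[2.10, 0.99],
              [0.50, 1.30]])   # diagonal values are assumed, off-diagonals from the text

M_sym = (M + M.T) / 2          # arithmetic mean of ego's and alter's reports
print(M_sym)
```

After this step, both off-diagonal entries equal (0.99 + 0.50) / 2 = 0.745, so the rate from A to B matches the rate from B to A.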
--- SIR model by ages using contact rates To simulate pandemic behavior, incorporating the effect of the structure of social contacts, a SIR epidemiological compartment model was applied - (S) susceptible, (I) infected and (R) recovered - by age (KEELING; ROHANI, 2008). Five age groups were used, corresponding to individuals with ages in the ranges, in complete years, from 0 to 14 (group 1), 15 to 19 (group 2), 20 to 34 (group 3), 35 to 59 (group 4) and 60 years or older (group 5). This set of intervals was obtained by grouping adjacent age groups in the contact rate matrix, until it was reduced to five age groups. The system of differential equations describing the evolution of the number of individuals in each compartment, in the aged SIR model, is: dS_i(t)/dt = -S_i(t) Σ_j β τ_{i,j} I_j(t)/N (1); dI_i(t)/dt = S_i(t) Σ_j β τ_{i,j} I_j(t)/N - (1/γ) I_i(t) (2); dR_i(t)/dt = (1/γ) I_i(t) (3). Here β is the infection rate and 1/γ is the recovery rate; S_i(t) is the number of susceptible, I_i(t) the number of infected and R_i(t) the number of recovered individuals at time t. If N is the total population being considered, then Σ_i [S_i(t) + I_i(t) + R_i(t)] = N. Since the formulas consider age groups, the subscripts i and j designate the ith and jth age groups, respectively; thus, τ_{i,j} means the contact rate between the i age group and the j age group. The COVID-19 infection rate at the time of the survey was β_0 = 0.05 (YANG et al., 2020; ZHOU et al., 2020) and, for the calibration of the model, we adjusted this rate taking into account the average of the contacts, which is also the average of the rates, of the contact rate matrix: a reference average was obtained from the matrix estimated for Brazil in POLYMOD and then, to simulate the results in Belo Horizonte, β_0 was rescaled by the ratio between the average of the contact matrix estimated for Aglomerado da Serra and that reference; this corrects the effect of contacts in the estimation of the rate β_0.
The recovery rate 1/γ = 1/7 corresponds to the inverse of the average recovery time for COVID-19. Three values were used for the rates, corresponding to the estimated average rates and the upper and lower limits of the confidence intervals (of 95%) for the rates. Rate matrices were symmetrized to reduce bias (HAMILTON et al., 2022). 4 The implementation of the system of equations was done using the EpiModel package for R (JENNESS et al., 2018). To assess the consistency of the model proposed here, two simulations were carried out, using the epidemiological parameters of the reference week of data collection. One used the contact rate data adjusted for Brazil, derived from POLYMOD (PREM et al., 2021; MOSSONG et al., 2008), and the other the empirical rates of Aglomerado da Serra/BH, both with a projection horizon of one week. 5 The results are shown in Figure 2, where we plot the simulated and observed proportions of infected by age group. 4 The non-symmetry present in contact matrices has been observed in several studies and was evaluated recently in the work by Hamilton (2022). This study compared symmetric versus non-symmetric contact matrices, via simulation of SIR-type models using POLYMOD estimates and comparing also with observed data. According to the study, models with non-symmetric matrices "underestimated the basic reproduction number, had delayed timing of peak infection incidence, and underestimated the magnitude of peak infection incidence". Non-symmetric matrices also "influenced cumulative infections observed per age group, and the projected impact of age-specific vaccination strategies". 5 Prem et al. (2014) performs sophisticated demographic projection work to find social contact rates by age group in three social circles (home, school, and work) in 152 countries covering 95.9% of the world's population. They use three data sources: POLYMOD, the Demographic and Health Survey and national data from different countries.
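The aged SIR system (equations 1-3) can be sketched with a simple Euler integration over the one-week projection horizon. The contact matrix and the population split by age group below are illustrative stand-ins, not the study's calibrated values; β₀ = 0.05, the 7-day recovery time and the total population of 38,405 follow the text.

```python
import numpy as np

# Illustrative symmetrized contact rates between the five age groups (assumed)
tau = np.full((5, 5), 1.0)
beta = 0.05                   # infection rate beta_0
recovery_rate = 1.0 / 7.0     # recovery rate 1/gamma

N = 38405.0                   # total population of Aglomerado da Serra
S = np.array([7181.0, 2923.0, 11521.0, 12135.0, 4645.0])  # assumed age split
I = np.ones(5)                # one initial infected per group (assumed)
S = S - I
R = np.zeros(5)

dt = 0.1
for _ in range(int(7 / dt)):              # one-week projection horizon
    force = beta * tau @ (I / N)          # per-group force of infection
    new_inf = S * force * dt              # equation (1), Euler step
    new_rec = recovery_rate * I * dt      # recovery term of equations (2)-(3)
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(I.sum(), (S + I + R).sum())         # total population is conserved
```

Replacing `tau` with the symmetrized empirical matrix and the Euler loop with an ODE solver (as EpiModel does in R) gives the study's setup.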
The projection process starts with a hierarchical Bayesian model that estimates contact rates by age and social circles in each of the eight European countries covered by POLYMOD and for the whole set. This first exercise allows for the construction of three matrices with contact rates by age groups, one for each social circle. Subsequently, the contact rates of each matrix are projected for the countries that were not part of POLYMOD considering the following demographic parameters available in national databases: (a) population profile by age groups, (b) labor force participation, (c) student-teacher ratio, (d) school enrollment rates. The estimates produced by the model proposed in this research, from the survey of contact rates by age groups, offer a better approximation between predictions and observations than the approximation that uses estimated rates of the POLYMOD (Figure 2). In both simulations, the average contact rates for Brazil from POLYMOD were used as a correction factor. --- Contact rates and their conditions: a clique approach Next, we present the main determinants of contact rates collected in Aglomerado da Serra. Since this is the most epidemiologically relevant data from the perspective of contact structures, it is pertinent to explore it from the perspective of some socio-demographic factors that were raised at the time of collection. First, it should be explained that we approach contact numbers using a proxy variable: the cliques or groupings declared by respondents. Each respondent was asked about the contacts he had in the last 24 hours, according to the specific place where they happened (house, neighborhood, business, etc.), but also asked to indicate how many other people they had been in contact with simultaneously, as well as the age and sex of these alteri. 
Given that we conducted a basic SIR model for diseases that are transmitted person-to-person, such as respiratory diseases, it is useful to understand which covariates are associated with these agglomerations, of variable size, where the contagions happen. To this end, we must highlight that we chose to name the variable of interest "clique density", due to the sociometric concept that defines a clique as a group where all its members are adjacent to each other, that is, where all are in contact with each other. We assume that a clique is a cluster with k(k-1) contacts, where k is the number of vertices, which in this case corresponds to the number of people in contact. Respondents declared cliques with a minimum size of 2 and a maximum of 11 people.6 At this point, two clarifications are necessary. First, we use the concept of clique in the mathematical-formal sense that it has in graph theory, i.e., a grouping where all nodes are adjacent to each other. It does not have the substantive sense of a group made cohesive by a common identity recognized by its members. Secondly, the contact rate we have discussed is nothing but an average of the relationships considering all the cliques in which a person takes part. For example, if a respondent declared that, in the last twenty-four hours, he or she was grouped into three cliques of sizes 3, 5 and 7 respectively, then his or her average number of contacts is 2Σ(k-1)/Σk, which corresponds to 1.6. The frequency distribution of the clique size variable presents a concentration in the smaller cliques. Only contacts declared in the first contact situation were used for the purpose of this analysis, as memory bias delivered a decreasing frequency of valid data: 99.4% in the first situation, 30.5% in the second situation and 2.5% in the third situation.
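The worked example above can be checked directly:

```python
# Worked example from the text: a respondent grouped into cliques of
# sizes 3, 5 and 7 over the last twenty-four hours
sizes = [3, 5, 7]

# Oriented contacts in a clique of size k number k*(k-1); the respondent's
# average number of contacts is 2 * sum(k - 1) / sum(k)
avg_contacts = 2 * sum(k - 1 for k in sizes) / sum(sizes)
print(avg_contacts)  # -> 1.6
```

Here 2 × (2 + 4 + 6) / (3 + 5 + 7) = 24 / 15 = 1.6, matching the figure in the text.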
Since the number of contacts in cliques of size k follows a geometric progression, the natural logarithm of this progression was used as the scale of our dependent variable: clique density.7 In this way, we tested, using a log-lin model, the associations between the response variable and its determinants with explanatory power. We interrogated our data using two "log-lin" models, following the expression used by Gujarati and Porter (2012). The general equation is: lnY = β_0 + β_1 x_1 + ... + ε (4) In both models, the response variable is clique density. We use two treatment criteria, social circle and age group, to see how a set of determinants impacts the fact that an individual clusters in cliques with different numbers of social contacts. These are the main independent variables. Both variables were transformed from categorical into a set of binary ones, where each category corresponds to a new variable. In the first model, only the social circle home was included, while the last age group (> 60) was used as the omitted reference category. In the second model, home is the omitted reference category, keeping all other variables, including age group, in the same way. We should remember that at the time of our data collection, the Brazilian federal government had already implemented economic aid for low-income families to allow them to subsist during the social isolation measures. The models assume the point of view of the interviewee. 6 The number of contacts in a clique depends, on the one hand, on the number of members (size k) and, on the other hand, on the orientation of the relationship. The latter means that we can consider, or not, the direction of the relationship. For example, if two people are married, their relationship is not oriented; it does not make sense to say that A is married to B, but that B is not married to A. In the case of contagious relationships, we must consider that A can infect B, but that B does not necessarily infect A, or vice versa.
If the non-orientation perspective of the relationship is adopted, the number of contacts will be k(k-1)/2, but if orientation is adopted, as we have done in this article, the number of contacts will be k(k-1). 7 Considering k(k-1) as the number of contacts in a clique of size k, we have a variable with a geometric progression and a nonlinear distribution. That is, in a clique of size 3 we have 6 contacts; if the size is 4, we have 12 contacts, and so on up to a clique of size 11, which has 110 contacts. Since the scale of the dependent variable is the natural logarithm, the coefficients are percentage proportions of how much each explanatory variable increases the response variable (clique density). In the first model, where the contact situation is the house (Table 1), one more resident in the household increases the clique size by 28.4%. Gender has no significant effect. The contact situation within the household reduces clique density by 60%. In the opposite case, when we invert the binarization, recording the situation outside the home, there is a 60% increase in clique density. Neither the emergency aid nor the fact of using public transport had significant effects. Children aged 0 to 14 and young adults aged 20 to 34 had, respectively, 41% and 39% more contacts in the clique group than people aged 60 and over (≥ 60). The age groups from 15 to 19 and 35 to 59 did not present a significant coefficient at the conventional value of 5%. In the second model, there are several contact situations that correspond to forms of socialization that take place outside the home. The number of residents maintains its aggregation effect on clique density. Gender, emergency aid and public transport, as in the first model, do not present a statistically significant effect. Among the contact situations, two were not significant, namely commerce and school.
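The percentage reading of log-lin coefficients can be sketched as follows. The coefficient value 0.25 is an illustrative assumption, back-derived from the ~28.4% household-size effect reported in the text, and the fitted data are synthetic and noise-free.

```python
import math
import numpy as np

# In a log-lin model, a coefficient b on a continuous regressor implies a
# 100 * (exp(b) - 1) percent change in the response per unit increase.
# b = 0.25 is an assumed value chosen to reproduce the ~28.4% effect.
b = 0.25
pct = (math.exp(b) - 1) * 100
print(round(pct, 1))  # -> 28.4

# Noise-free sketch of fitting equation (4) by ordinary least squares on ln(Y)
x = np.arange(1, 9, dtype=float)             # e.g., number of residents
y = np.exp(1.0 + b * x)                      # synthetic clique densities
X = np.column_stack([np.ones_like(x), x])    # intercept + regressor
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
print(coef)                                  # recovers [1.0, 0.25]
```

For small coefficients the raw coefficient approximates the percentage effect directly, but exp(b) - 1 is the exact transformation.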
It is important to remember that on the date of data collection, the Belo Horizonte school system had suspended activities due to the pandemic. It is possible that the contacts declared at school correspond to occasional visits to pick up some study material to work on at home. The other contact locations show important impacts on the size of the cliques compared to the home. Groupings at church generate 186% more contacts than those at home, followed by leisure (87%), the workplace (65.5%) and the neighborhood (58%). The only age groups with predictive power over clique size correspond to the same age groups as in the first model: children and young adults. Taken together, the previous results demonstrate that the size of households, in terms of number of residents, is an important determinant in the formation of epidemiologically relevant clusters. However, the formation of cliques within the household does not mean that they took place in the respondent's household. In fact, cross-table analyses show only a 65% correspondence between the number of residents and the size of the declared cliques, a percentage obtained by dividing the total diagonal values - between values 2 and 10 for both the number of household members and the size of cliques - by the total of reported cliques. In the case of cliques located outside the home, the correspondence is only 10%. The formation of intra-household cliques appears as a phenomenon formed by children and caregiver-parents between 20 and 34 years old, something to be expected when the school system was closed. Figure 3 shows the distribution of clique size according to age groups and contact location. In the latter, greater variability in clique size can be observed in the situation outside the home, as evidenced by the interquartile range and the amplitude of the clique size. This is consistent with the results of the second model, which showed how church, leisure, work and neighborhood are spaces that encourage people to meet.
This greater out-of-home sociability is something we can expect if: (1) people are outside the more controlled environment of the home; (2) physical spaces contain vital socialization circles for people; and (3) we assume a margin of randomness in social encounters. --- Discussion Since the outbreak of the current pandemic, triggered by SARS-CoV-2, a wide range of work has undertaken the challenge of monitoring the expanding course of the pandemic. Some global initiatives turned to technological devices to identify the mobility of human populations almost in real time. The Google platform provided data on the mobility of its users by making use of smartphones' geolocation. To this end, the Google COVID-19 Community Mobility Report (GCCMR) was made available with data from 131 countries. This initiative, with specific research purposes, ended on October 15, 2022. Oliveira et al. (2021) used the GCCMR data in ten Latin American countries to associate mobility indexes with the COVID-19 stringency index from Oxford. Without undervaluing the advantages of this analysis strategy, it is important to point out several limitations imposed by the baseline data when working with conventional SIR models. As the authors themselves rightly acknowledge (OLIVEIRA et al., 2021), the mobility data provided by the GCCMR, in various social circles (parks, work, commerce, among others), constitute a digital proxy for face-to-face human interactions. The attribution of an individual to a place depends on whether the user has activated their phone's location history. Furthermore, Google reserves the right to withhold data about social circles where there is a low frequency of visits, as providing them could compromise the anonymity of the information. However, the two most serious limitations, marking a substantial difference with our strategy, are the absence of relational information between human beings and the lack of disaggregation by age group.
We know nothing about physical contacts, nor about one of the most important behavioral determinants: age. In summary, refining the parameters of mathematical SIR models requires the rigorous collection of primary data through surveys that provide social contact rates. When comparing our work with other studies inspired by the POLYMOD strategy and carried out in developing countries, we found some important convergences and differences. Johnstone-Robertson et al. (2011) conducted a survey of social contacts in a township of just under 20,000 inhabitants near Cape Town, a rural population with a well-defined census. This allowed for random sampling of individuals by age group. The results accurately demonstrated that the young population between 5 and 19 years of age was at the highest risk of infection by respiratory diseases endemic to that community (tuberculosis and influenza), thus confirming that, by disaggregating the data by age group, an epidemiologically relevant determinant of social behavior is identified. In turn, the work of Grijalba et al. (2015) highlights the difficulties of collecting data on social contacts in several population universes at the same time. Wanting to cover 54 rural communities in Peru, they had to give up probabilistic sampling and settle for convenience samples in which the members of at least two households per community were interviewed. Costs and logistics make probability sampling plans less feasible. Furthermore, when estimating infection rates of pathogens, the incorporation of social context is extremely important to determine the evolution of the epidemic, especially in the case of airborne viruses such as SARS (YANG et al., 2020; ZHOU et al., 2020; LIU et al., 2020).
As already pointed out by other research (ZHOU et al., 2020; PREM et al., 2017; BARMPARIS; TSIRONIS, 2020), factors such as household size and age are intrinsically linked to SARS-like virus infection rates, and these factors are linked to socioeconomic conditions that need to be evaluated in situ for a more realistic determination of what infection rates mean. In turn, the exploration of the socio-demographic conditions of contact rates in this study, through two log-lin models, showed that research of this type is also useful for the determination of covariates associated with the formation of small agglomerations that result in epidemiologically relevant contacts. In this case, we saw how the demographic size of the household is a fundamental covariate when planning mitigation or epidemic control scenarios, as it increases the density of groupings among people. Understanding social circles is important for grasping how forms of socialization increase the risk of contagion. In popular communities, places of worship, neighborhoods, and places of leisure, among others, are favorable scenarios for forms of socialization that substantially increase contact rates. The low coverage of emergency assistance provided by the Brazilian federal government was not associated with a reduction of social contacts.
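As a rough illustration of the log-lin specification used in the two models above, the sketch below regresses log clique size on a single, hypothetical contact-location dummy. The data and effect size are simulated, not the survey's estimates; in a log-lin model, exp(β) − 1 is the proportional effect of a covariate, which is how figures such as the 186% church effect can be read:

```python
import numpy as np

# Minimal log-lin regression sketch on synthetic data: regress log(clique
# size) on a dummy for contact at church vs. at home. All values illustrative.
rng = np.random.default_rng(42)
n = 500
church = rng.integers(0, 2, size=n)          # 1 = contact occurred at church
beta0, beta1 = np.log(3.0), np.log(2.86)     # exp(beta1) - 1 ~ 186% effect
log_clique = beta0 + beta1 * church + rng.normal(0, 0.1, size=n)

X = np.column_stack([np.ones(n), church])    # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, log_clique, rcond=None)

# exp(b) - 1 converts a log-lin coefficient to a percentage effect
pct_effect = np.exp(coef[1]) - 1
print(f"church effect on clique size: {pct_effect:+.0%}")
```

The ordinary-least-squares fit on the log outcome recovers the planted proportional effect, mirroring how the paper's percentage impacts by contact location are obtained.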
When we interpret the finding on emergency assistance together with the statistically significant effect of the work circle, we can infer that government aid, in the case of the SARS-CoV-2 pandemic in urban areas, was useful for the survival of families but less effective for pandemic control. Families from popular sectors that survive on up to two minimum wages have no option other than to seek their livelihood in a job market with a high rate of informality. Beyond methodological divergences, this finding has been reinforced by research that digitally captured social mobility indexes (OLIVEIRA et al., 2021). This study also indicated, as already demonstrated in the specialized literature, that compartmental epidemiological models combined with social contact rates have a greater ability to describe epidemiological dynamics because they incorporate interaction between age groups (GJOKA et al., 2014; CHIN et al., 2021; PREM et al., 2017). In this regard, we observed that the social contact rates collected in Aglomerado da Serra provided a better fit in the SIR model relative to the demographic projection made by Prem (PREM et al., 2017) for Brazil. Therefore, this study reveals the importance of investing in epidemiological diary research that provides information on the covariates associated with the formation of epidemiologically relevant clusters and that better informs compartmental models, improving their fit and allowing the effects of mitigation processes, such as vaccines or isolation (KEELING; ROHANI, 2008; RAM; SCHAPOSNIK, 2021; COLOMBO; GARAVELLO, 2020), to be projected for different age groups, which increases the relevance of their use. --- Conclusions The crisis triggered by SARS-CoV-2 was a significant opportunity to adapt the technique of the epidemiological diary in the context of health surveillance in Brazil. This study demonstrates how the empirical, in situ, estimation of social contact rates improves the descriptive power of compartmental models widely used in epidemiology.
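A minimal sketch of how age-specific contact rates enter a compartmental model: the two-group SIR below uses an illustrative contact matrix, transmission probability and recovery rate (assumed values, not this study's fitted parameters), integrated over the eight-day horizon mentioned above:

```python
import numpy as np

# Two-age-group SIR driven by a contact matrix. All parameters illustrative.
C = np.array([[10.0, 3.0],   # C[i, j]: daily contacts of group i with group j
              [3.0, 6.0]])   # e.g. group 0 = children, group 1 = adults
N = np.array([0.3, 0.7])     # population shares of the two age groups
q, gamma = 0.05, 1 / 7       # per-contact transmission prob., recovery rate

S = N - 1e-4                 # initial susceptibles
I = np.array([1e-4, 1e-4])   # initial infectious seeds
R = np.zeros(2)

dt, days = 0.1, 8            # eight-day simulation with Euler steps
for _ in range(int(days / dt)):
    # force of infection on group i: q * sum_j C[i, j] * I_j / N_j
    lam = q * C @ (I / N)
    new_inf = lam * S * dt
    S, I, R = S - new_inf, I + new_inf - gamma * I * dt, R + gamma * I * dt

print("infectious share by age group after 8 days:", I)
```

Replacing the assumed matrix `C` with empirically observed age-specific contact rates is what gives the survey-informed model its better fit relative to projected rates.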
In general, compartmental models work with average contact rates, disregarding the heterogeneity of contacts between social groups. In this work, we estimated social contact rates by age, and the results are more sensitive to the reality of the pandemic. The technique of epidemiological diaries, adapted as an interview, makes it possible to gather information on social contact rates as well as on the factors of the socio-demographic structure that affect them. With greater clarity about, on the one hand, the morphological factors of social life, such as the demographic size of households and the age composition of the social universe, and, on the other hand, the circles of socialization, we can broaden our comprehension of infectious processes in terms of the different structures of interaction between human beings. --- Abstract A study on social contact rates relevant to the diffusion of infectious diseases in a Brazilian informal settlement (aglomerado) Inspired by the POLYMOD study, an epidemiological survey was conducted in June 2021 in one of the most densely populated and socially vulnerable sectors of Belo Horizonte (Brazil). A sample of 1,000 households made it possible to identify, within a 24-hour period, social contact rates by age group, the size and frequency of the cliques in which respondents participated, as well as other associated sociodemographic factors (number of household residents, location of contact, use of public transportation, among others). The data were analyzed in two phases. In the first, the results of two SIR models that simulated an eight-day pandemic process were compared: one included parameters adjusted from the observed contact rates; the other operated with parameters adjusted from rates projected for Brazil. In the second phase, by means of a log-lin regression, we modeled the main social determinants of contact rates, using clique density as a proxy variable. The data analysis showed that family size, age, and social circles are the main covariates influencing the formation of cliques. It also demonstrated that compartmental epidemiological models, combined with social contact rates, have a better capacity to describe epidemiological dynamics, providing a better basis for mitigation and control measures against diseases that cause acute respiratory syndromes. Keywords: Epidemiological survey. POLYMOD. Social contact rate. Cliques. Received for publication on 02/12/2022 Approved for publication on 19/04/2023 --- About the authors
INTRODUCTION Physical inactivity is increasing across Europe, threatening human health and costing the European economy over €80 billion per year (International Sport and Culture Association, 2015). Raising the physical activity levels of the less active members of the population is a public health priority, and promoting walking is potentially the most effective means of achieving this (Ogilvie et al., 2007), in part because it is low cost and even a normal walking pace can be health-enhancing (Rowe et al., 2013). Importantly, natural environments appear to be locations which may be effective at encouraging health-enhancing bouts of walking. Survey research, for instance, suggests that individuals tend to spontaneously engage in longer episodes of physical activity, including walking, in natural rather than urban settings, and thus expend more energy on visits to these environments (Elliott et al., 2015); experimental research, meanwhile, has demonstrated that people are more likely to conduct uninterrupted bouts of brisk walking in natural environments than in urban locations (Sellers et al., 2012). Further, walking in natural environments can heighten the affective benefits of walking compared to walking in urban settings (Thompson Coon et al., 2011), leading to a greater likelihood that the activity will be repeated (Rhodes and Kates, 2015). Combined, these findings suggest that greater systematic efforts to promote walking in natural settings may play an important role in achieving sustainable reductions in physical inactivity. This is certainly the perspective of the UK's National Institute for Health and Care Excellence (NICE, 2012), which identified the need for public health professionals to collaborate with colleagues in countryside management and park services in promoting walking among inactive individuals.
Importantly, NICE also specified that there was a need to: "ensure programmes are based on an understanding of...factors influencing people's behaviour such as their attitudes, existing habits, what motivates them and their barriers to change" (NICE, 2012, p.14), and, "develop walking programmes for adults who are not active enough, based on an accepted theoretical framework for behaviour change" (NICE, 2012, p.18). In commenting on how these programmes should be promoted, NICE stated that programme directors should, "ensure programmes include communications strategies to publicize the available facilities (such as walking or cycling routes) and to motivate people to use them" (NICE, 2012, p.14). The aim of the current research was to investigate the extent to which a sample of brochures promoting specific walks in natural environments in England, contain the kind of theoretically derived messages to motivate walking in natural settings that NICE recommends. In the UK, brochures advertising recreational walking (i.e. walking during free time for the purposes of enjoyment, Hurd and Anderson, 2011), in natural environments, are commonly produced by local authorities, councils, charities, and tourism organisations, and are aimed at both local residents and tourists/visitors (Hayes and MacLeod, 2007). Although we recognize that such walking leaflets may not have been produced as 'health promotion materials' per se, it is nonetheless informative to investigate whether they already contain many of the techniques suggested by theory, and whether such an investigation could provide insights into how future leaflets could be developed to include more theory-based techniques, in line with NICE's recommendations, to motivate people to undertake more walks in the future. 
An examination of walking brochures, in particular, makes sense because written materials are a widely used medium for communicating persuasive messages (Brito and Pratas, 2015), and promoting behaviour change (Bull et al., 2001). They have also been found to be among the most effective tools in promoting walking programmes (Hunter et al., 2015). Nevertheless, there is evidence that written materials advertising physical activity more generally are not always informed by behaviour change theory. For instance, one content analysis of 22 physical activity brochures identified a lack of messages relating to goal-setting, planning and affective benefits of physical activity (Gainforth et al., 2011). The omission of such messages may mean these materials only motivate active people and may deter inactive people from taking up physical activity, which may be especially important in the case of walking, a relatively simple and cost-effective way to become less sedentary. Nevertheless, to our knowledge, no content analysis of persuasive messages in recreational walking brochures has yet been undertaken. Consequently, our main task was to develop a relevant taxonomy of potentially persuasive message categories that could feasibly be contained within such brochures and then to identify their prevalence among a selected sample. To do this we adapted a pre-existing taxonomy developed for the analysis of health promotion materials. Our two main research questions were: a) Can the content of recreational walking brochures be reliably categorized?; and b) If so, what persuasive messages tend to be included in recreational walking brochures? --- METHODS Specifically, we used the Content Analysis Approach to Theory-Specified Persuasive Educational Communication (CAATSPEC) (Abraham et al., 2007) to inform the development of our coding taxonomy. 
CAATSPEC is an approach to quantitative content analysis of persuasive texts and can be used to outline messages used in health promotion materials. It uses mutually exclusive coding categories to classify content and was suited to this study as recreational walking brochures are (potentially) persuasive texts that promote a change in a health-related behaviour (the uptake of a specific walk). This is the first known application of CAATSPEC to materials in which health promotion may not necessarily have been the primary aim. --- Sampling Brochures were collected from July to December 2013 in the county of Devon, UK. Convenience sampling was employed, sourcing brochures from councils, holiday parks, visitor information centres and supermarkets. This involved visiting as many of these places as was feasible in three principal holiday destinations (Exmouth, Dawlish and the north Devon coast) and one major city (Exeter). The following inclusion criteria were applied: a) the brochures existed in printed and digital form and advertised recreational walking in natural environments, including mixtures of urban and natural environments; and b) brochures had to be available free of charge to ensure they could have the widest readership. While convenience sampling results in an unrepresentative sample, it is justified here as: (i) all possible printed recreational walking brochures in the county were difficult to obtain; (ii) it would have been extremely labour-intensive to have even attempted to do so; and (iii) the current selection of brochures is still useful for generating hypotheses about the effectiveness of content in recreational walking brochures; three conditions necessary for selecting convenience sampling for quantitative content analysis (Riffe et al., 2014, pp. 75-76). In total, twenty-six brochures were collected (see details in the Supplementary Material, S1). Brochures had a range of 54 to 712 paragraphs and 524 to 17,126 words (M = 3,539).
They were associated with 29 different organisations and printed by nine different production companies. Two pages from a specific brochure are displayed in the supplementary materials for illustrative purposes (Supplementary Material, Figure S2). --- Taxonomy Following initial readings, message categories corresponding to specific messages included in the brochures were devised. All categories were arranged under five superordinate headings which encompassed the key components of behaviour change in a variety of evidence-based theories, namely: providing information, highlighting potential consequences and opportunities, establishing normative beliefs, promoting intentions and planning, and enhancing self-efficacy (Albarracín et al., 2005; Fisher and Fisher, 1992). In a previous application of CAATSPEC, the latter two superordinate headings were collapsed (Abraham et al., 2007), but are separated here to highlight their exclusivity in conceptions of behaviour change (e.g. in the theory of planned behaviour). The final taxonomy had three further levels of specificity arranged hierarchically and can be viewed in Figure 1. We attempted to categorize brochure text into message categories using established taxonomies of behaviour change techniques (Abraham and Michie, 2008; Michie et al., 2013). A taxonomy emerged where each category represented a distinct potentially persuasive message. However, categories warranted greater specificity than techniques defined in general taxonomies. To take an example, Abraham and Michie identified the general change technique "provide information on consequences" as derived from explanatory theories (Abraham and Michie, 2008). The authors defined the technique as "information about the benefits and costs of action or inaction, focusing on what will happen if the person does/does not perform the behaviour" (p.382).
This technique was rendered domain-specific by Michie and colleagues (Michie et al., 2013) who identified the technique as comprising health, social, environmental, and emotional consequences (p.92). In the present study, we further adapted the technique to better represent persuasive messages found in recreational walking brochures. Specifically, consequences of recreational walking in the present taxonomy comprised health, social, environmental, financial, heritage, aesthetic, and recreational consequences (see definitions below). In a similar way to previous applications of CAATSPEC (Gainforth et al., 2011), categories were created to classify pictures of people walking (modelling behaviour) and graphics of maps (aids to planning). Listed below are details of categories under each superordinate from the finalized taxonomy. The full coding manual can be viewed in the supplementary materials (Supplementary Material, S3). --- Providing information Category 1 reflected information on PA recommendations or the prevalence of PA or walking in a population. Categories 2-7 detailed characteristics of the route such as the terrain or distance. Categories 8-11 concerned amenities such as public transport or refreshments on the route. --- Highlighting potential consequences and opportunities Categories 12-17 concerned general consequences of PA or walking including: financial (e.g. saving money over car trips); environmental (e.g. sustainable travel mode); physical and mental health (e.g. improving cardiovascular health; feeling happier); and social (e.g. family enjoyment). Categories 18-26 described opportunities on the advertised route such as heritage (e.g. historical sites); aesthetics (e.g. wildlife, scenery); social (e.g. opportunities for children's enjoyment); and recreation (e.g. leisure opportunities). 
--- Establishing normative beliefs Categories 27-34 outlined normative information about PA or walking, or the consequences of these including: expert recommendations on PA, and financial, environmental, health, and social consequences. In a similar way to highlighting potential consequences and opportunities, categories 35-43 detailed normative information about opportunities related to walking the advertised route. --- Promoting intentions and planning Categories 44-47 prompted behaviours related to PA or walking including: setting goals based on distances (e.g. decide how far you will walk); or times (e.g. consider freeing up some time for walking); reducing barriers (e.g. think what would make being active easier for you); or prompting activity maintenance (e.g. make sure to 'keep up' your walking once you have started). Categories 48-57 identified messages specific to the advertised route such as prompting goals based on distance (e.g. try breaking up the route into segments); attending to signage (e.g. use the waymarkers); or managing the terrain (e.g. be careful of the busy road). --- Enhancing self-efficacy Following CAATSPEC, most categories under this superordinate were dichotomized as encouraging or guiding behaviour. Encouragement categories conveyed that behaviour was easy to execute, and guidance categories instructed on how to execute behaviour. Categories 58-68 related to building confidence for PA or walking in general and included: guidance on reducing barriers to activity, for example not knowing where to walk (e.g. go to a website and you can find guided walks in your area); encouraging setting walking goals based on time (e.g. it is easy to find everyday opportunities to go walking); or modelling walking pictorially. Categories 69-87 related to building confidence for completing the advertised route and included: guidance on maintaining recreational walking behaviours (e.g. 
purchase more outdoor walking brochures from the visitor information kiosk in the city centre); encouraging the use of appropriate equipment (e.g. it is simple to get walking boots from your local outdoors shop); or guidance on direction taking (e.g. turn left at the end of the road). As can be imagined, this last category was likely to be central to recreational walking brochures. --- Coding procedures A pilot coding manual was tested by two coders but demonstrated insufficient reliability. To improve the manual, categories were added and deleted, definitions were revised, and coding procedures were modified. With the revised manual, and in accordance with previous research (Gainforth et al., 2011), a line-by-line coding procedure was utilized to facilitate inter-coder reliability testing. Sentences acted as 'units of analysis' and coders were instructed on how to detect semantic changes within and across sentences, and how to code these. Categories were exclusive; text could only be coded under one category. The manual also provided guidance on distinguishing semantically similar categories. For example, some messages prompted behaviours whilst others provided guidance on the same behaviours e.g. category 53 refers to messages suggesting ways to deal with the terrain on the advertised route whereas category 79 refers to messages explicitly providing guidance on how to deal with these. Coders were instructed that any category prompting behaviour will refer to specific behaviour (e.g. be careful climbing the muddy hill) but any category guiding behaviour will inform them on how to execute that behaviour (e.g. taking shorter strides will ensure you do not slip up on the muddy hill). Coding instructions can be seen in the supplementary materials (Supplementary Material, Figure S3). Coding a brochure took approximately 90 minutes. --- Reliability Inter-coder reliability was assessed using the AC1 statistic (Gwet, 2002). 
The prevalence of some categories was very small and AC1 adjusts reliability accordingly where alternatives such as Cohen's kappa (Cohen, 1960) would not. The protocol for reliability testing was as follows: two brochures were selected by the first author on the basis that they varied in style, length and publisher, thus potentially encompassing the broadest range of categories. Two coders (including the first author) would code the brochures, line-by-line, as described above. If reliability was established at all hierarchical levels (AC1 ≥ 0.7, p < 0.05), testing would stop, provided that individual categories demonstrated reasonable reliability too (AC1 ≥ 0.6; p < 0.2). This generous alpha level was selected so that categories with only one agreed instance (identified by both coders) were judged reliable despite the lack of further instances to determine reliability at conventional alpha levels. This is because two coders selecting one piece of text and identifying it as the same category out of a possible 87 was unlikely to be due to chance. If any individual categories did not meet this criterion, consensus would be sought using an independent coder (the second author) and the category removed if agreement on disagreed instances was not reached. If any level of the hierarchy demonstrated unsatisfactory reliability, the manual would be revised and testing repeated with two further brochures. If the p-value for any individual category's AC1 exceeded the alpha level (p > 0.2), or if no instances of a category were found, the category was deemed a 'potential category of persuasive message', but with insufficient data to determine reliability. --- Analysis strategy To examine frequently employed persuasive messages, only categories which appeared in more than three brochures were included in the main analysis.
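For readers unfamiliar with the statistic, Gwet's AC1 for two coders on nominal data can be computed as below. The category labels and codes are invented for illustration and are not drawn from the actual coding manual:

```python
from collections import Counter

def gwet_ac1(coder_a, coder_b):
    """Gwet's AC1 chance-corrected agreement for two coders (nominal data).
    Its chance-agreement term stays stable when some categories are rare,
    which is why it suits sparse coding taxonomies better than Cohen's kappa."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    cats = sorted(set(coder_a) | set(coder_b))
    q = len(cats)
    # observed agreement
    p_a = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # chance agreement from average category prevalences
    count_a, count_b = Counter(coder_a), Counter(coder_b)
    pi = {c: (count_a[c] + count_b[c]) / (2 * n) for c in cats}
    p_e = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (p_a - p_e) / (1 - p_e)

# Illustrative line-by-line codes from two coders (labels are made up)
a = ["info", "info", "efficacy", "planning", "info", "efficacy"]
b = ["info", "info", "efficacy", "info",     "info", "efficacy"]
print(f"AC1 = {gwet_ac1(a, b):.2f}")  # prints: AC1 = 0.77
```

With five agreements out of six units, the chance-corrected value falls just above the 0.7 threshold used in the protocol, illustrating how AC1 relates to raw percentage agreement.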
Categories which appeared in more than three brochures but had insufficient data to determine reliability in the testing phase were noted as requiring further reliability testing. We examined frequencies and proportions of content firstly across and then within superordinate categories. --- RESULTS --- Reliability Consult the supplementary materials (Supplementary Material, S4) for reliability statistics. 476 category instances (9.3% of all content) were double-coded. Coders agreed on the same categories for 363 (76.26%) of these. Satisfactory reliability was achieved at all levels of the hierarchy (superordinate level: AC1 = 0.77, 95% CI 0.73, 0.82; individual category level: AC1 = 0.76, 95% CI 0.72, 0.80). There were only 35 categories (including an 'uncoded text' category) that contained enough instances to confirm reliability with a statistically significant AC1 value. We believe this reflects the lack of diverse persuasive messages used in brochures and not inadequate sampling. The number of additional categories for which reliability could have been established through double-coding more brochures did not justify the labour involved in further line-by-line double-coding. There were six categories that did not meet our reliability criteria (AC1 ≥ 0.6; p < 0.2). All instances coded under these categories were discussed between the first and second author, and categorisations were agreed for all, so no categories were removed. Afterwards, 448 of the 476 category instances were agreed upon and the reliability of all levels of the hierarchy had improved markedly (superordinate level: AC1 = 0.96, 95% CI 0.94, 0.98; individual category level: AC1 = 0.94, 95% CI 0.92, 0.96). As a consequence of this resolution phase, two further categories did not meet our reliability criteria (category 53: prompting ways to overcome difficulties with the terrain on the advertised route; category 73: encouraging attention to signage on the advertised route).
In total these categories comprised only five disagreements, so in line with previous content analyses (Abraham et al., 2007), the decisions of the first author were accepted as they had the benefit of coding all brochures in the sample. --- Content analysis All percentages reported reflect subordinate categories which were included in more than three brochures in the sample. Using this criterion, 33 of the original 87 categories formed a useful taxonomy of potentially persuasive messages frequently used in recreational walking brochures. Descriptive statistics for these 33 categories are displayed in Table 1; the supplementary materials (Supplementary Material, S5) contain descriptive statistics for all categories. Of these 33, seven had insufficient data in the reliability phase to determine reliability (categories 3, 18, 49, 55, 70, 77, and 81) and another was category 53, which, as discussed earlier, did not meet the 0.6 AC1 threshold after the resolution phase. Interpretations of all of these categories should therefore be treated cautiously. Of the 25 with sufficient data in the reliability phase, AC1 values ranged from 0.69 to 1.00, so good reliability can be assumed for the rest of the categories included here. There were 4,800 instances of coded text within these 33 categories (94% of all content). Messages providing information accounted for 30.92% of all coded content (M = 57 instances per brochure). Messages highlighting consequences accounted for 26.94% (M = 50 instances). Messages promoting intentions and planning accounted for 5.58% (M = 10 instances). Messages enhancing self-efficacy accounted for 36.56% (M = 68 instances). No categories pertaining to messages establishing normative beliefs appeared in more than three brochures.
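The inclusion criterion (categories appearing in more than three brochures) and the two proportion measures used in the analysis can be sketched as follows. The brochure data and instance counts here are purely illustrative; only the category numbers are borrowed from the text.

```python
from collections import defaultdict

# Hypothetical coded data: brochure -> list of (superordinate, category) instances.
coded = {
    "brochure_1": [("information", 6)] * 5 + [("consequences", 19)] * 4,
    "brochure_2": [("information", 6)] * 3 + [("consequences", 19)] * 2
                  + [("self-efficacy", 85)] * 7,
    "brochure_3": [("consequences", 19)] * 2 + [("self-efficacy", 85)] * 6,
    "brochure_4": [("information", 6)] * 2 + [("consequences", 19)] * 3,
    "brochure_5": [("information", 6)] * 4,
}

# Keep only categories that appear in more than three brochures.
brochures_per_cat = defaultdict(set)
for brochure, instances in coded.items():
    for _, cat in instances:
        brochures_per_cat[cat].add(brochure)
kept = {cat for cat, brs in brochures_per_cat.items() if len(brs) > 3}

# Tally instances overall, per superordinate, and per category.
total = 0
by_super = defaultdict(int)
by_cat = defaultdict(int)
for instances in coded.values():
    for sup, cat in instances:
        if cat in kept:
            total += 1
            by_super[sup] += 1
            by_cat[(sup, cat)] += 1

# '% of all content' and '% of superordinate' for each retained category.
for (sup, cat), n in sorted(by_cat.items()):
    print(f"category {cat}: {n} instances, "
          f"{100 * n / total:.1f}% of all content, "
          f"{100 * n / by_super[sup]:.1f}% of superordinate '{sup}'")
```

In this toy data only categories 6 and 19 clear the more-than-three-brochures bar, so category 85 is excluded from the tallies just as infrequent categories were excluded from the main analysis.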
--- Messages providing information The most prevalent messages under this superordinate were those categorized as information about the overall course of the advertised route (category 6), accounting for 26.48% of all content which provided information and 8.19% of content overall. (In Table 1, '% all content' refers to the percentage of all content encompassed by these 33 categories which is accounted for by the category or superordinate content area; '% of superordinate' refers to the percentage of superordinate content which is accounted for by the category.) This included summaries of where the route would take the reader e.g. 'this walk explores an inland section of the Bude Canal on the Devon-Cornwall border' or information on the location e.g. 'Exmouth is a gateway town'. Other widely used categories included information about public transport options related to the advertised route (category 8) e.g. 'many of the trails have convenient parallel public transport routes - bus or train', information about the terrain of the advertised route (category 4) e.g. 'mostly level and easy although there is one steep climb on an inclined plane', and information about the distance of the advertised route (category 2) e.g. 'a 13km/8 mile circuit'. --- Messages highlighting potential consequences and opportunities The most frequently occurring types of messages were those categorized as viewing historical points of interest as consequences of walking the advertised route (category 19), accounting for 51.28% of content which highlighted consequences and 13.81% of content overall. This was also the only category to appear in every brochure. This incorporated descriptions of geology e.g. 'celebrating 95 miles of internationally important rocks displaying 185 million years of the Earth's history, the Jurassic Coast is a geological walk through time'. It also detailed historical facts about the advertised route e.g.
'in 1861, the arrival of the railway, linking the town with Exeter, brought with it a dramatic population explosion'. Other common categories included viewing scenery as a consequence of walking the advertised route (category 21) e.g. 'the South West Coast Path is a superb way to experience a range of fine Devon scenery, from cliff tops to wide estuaries, sandy bays to wooded valleys', and leisure opportunities as consequences of walking the advertised route (category 26) e.g. 'the estuary is a hub of activity for recreational activities; such as sailing, canoeing, windsurfing, fishing and scuba diving'. --- Messages promoting intentions and planning Prompting repeated recreational walking similar to the advertised route (category 52) was the most utilized message category, responsible for 39.18% of content promoting intentions and planning and 2.19% of content overall. This included the promotion of related brochures without instruction on how to obtain these e.g. 'an introductory leaflet and a detailed route book on the Tarka Trail are both available'. It also included ways to enjoy the advertised walk, again without instruction on how to do so e.g. 'why not try your hand at Geocaching when on the trail?' It further included contact details for guided walks e.g. 'why not join one of a number of free guided tours?' Another often used category was prompting ways to overcome difficulties with the terrain on the advertised route (category 53). This included directions to 'be aware' or 'take care' e.g. 'care should be taken at all times when walking on roads', or, 'take care crossing the Exe river over Bickleigh Bridge'. Another common category was prompting barrier reduction on the advertised route (category 57) e.g. 'you can pick up short sections of the trail from a number of easily accessible points'. --- Messages enhancing self-efficacy The most often used category was guidance for direction taking on the advertised walk (category 85).
This category was present in 23 of the brochures and accounted for 90.20% of all self-efficacy content, and 32.98% of content overall. It embodies the nature of walking brochures: instructing the reader on how to progress through a route. This is different from the provision of route information as it builds confidence for wayfinding. Examples include, 'just before you reach a cattle grid turn left alongside a bank', or, 'go through the gate at the top left corner of the next field, to the road'. As with messages promoting intentions and planning, other common categories included guidance on repeated recreational walks similar to the advertised route (category 76). This is different from the promotion of repeated recreational walks as it provides means by which the reader can access further walking information. For example, 'free booklets about Devon's coast and countryside including walking trails, cycling, horse riding and wildlife can be ordered through the Devon County Council website at www.devon.gov.uk', or, 'leaflets on all of these walks are available from Exeter City Council and the Visitor Information Centre'. Other frequently used message categories were guidance on ways to overcome difficulties with the terrain on the advertised route (category 79) e.g. 'this route is closed during the shooting season from 1st October to 1st February, and walkers should follow the alternative route along the quiet road instead at that time', or, 'aim to walk this part of the route within two hours of low tide (see local press or visit www.teignestuary.org)', and modelling walking on the advertised route pictorially (category 77). --- Uncategorized content 4.04% of all content could not be categorized under any of the 87 categories. This equated to 206 instances of uncategorized text compared to 4,893 instances of categorized text. The proportion of text which went uncategorized per brochure ranged from 0% to 10.71%.
Examination of this text revealed no systematic exclusion of content related to recreational walking. The majority of this text related to authorship credits, website addresses unrelated to walking, and advertisements for holiday attractions. The only recurring behavioural message types that went uncategorized concerned the advertisement of cycle routes and the prompting or instructing of environmental behaviours e.g. 'support local shops and services' or, 'take your litter home and recycle it where possible'. --- DISCUSSION This is the first known study to develop a specific coding taxonomy for, and conduct a content analysis of, recreational walking brochures. Acceptable reliability of this taxonomy was established at each hierarchical level and for most frequently occurring categories. The content analysis suggested that brochures promoted walking in natural environments through messages which provided information on the route, highlighted potential consequences and offered wayfinding guidance. However, they lacked variety in message types, frequently omitting information which could raise normative beliefs, promote intentions, or enhance self-efficacy for walking. --- How do brochures encourage recreational walking in natural environments? Brochures often provided information that aimed to facilitate easier access to a walking route, as opposed to information about PA more generally. They also provided information on the course, distance, duration, and terrain of a route, seemingly in order to detail the amount of time and level of expertise required to undertake the walk. In contrast to traditional PA promotion, messages highlighting consequences often framed scenic features as reasons to walk rather than potential health gains.
Importantly, previous research has demonstrated that for people who visit natural environments infrequently, subjective qualities like these are more important motivators for visiting than the achievement of physical fitness (Dallimer et al., 2014). Thus, highlighting these may persuade less frequent visitors, who are also more likely to be less active (Coombes et al., 2010), to visit natural environments. Promoting intentions and enhancing self-efficacy in the brochures mainly drew the reader's attention to other recreational walking materials and how to access them. This could support walking maintenance behaviours, but the aim of those messaging strategies may have been simply to drive further interest in a destination or organisation. --- Do brochures conform to NICE guidance on walking promotion? A public health priority is to encourage those who are least motivated to engage in recreational walking (Ogilvie et al., 2007), and natural environments could support this. Considerable investment has been directed towards improving environments and opening walking routes (Hunter et al., 2015) but little is known about how to sell these opportunities through printed media to those who are less motivated to walk. In the present study, walking brochures lacked general and normative information about PA for health, behavioural prompts and efficacy information (especially content encouraging general walking behaviours). Messages containing such information can be effective in motivating inactive people to set better plans to undertake PA (Sweet et al., 2014). Most brochures, and much of the content therein, were therefore, whether intentionally or not, aimed at people who already walk recreationally in the natural environment. This is at odds with guidance on walking promotion (NICE, 2012).
While further research is needed to explore which messages are most effective, there appears to be more scope in the brochures to change cognitions about recreational walking (e.g. build confidence to complete walks, raise descriptive norms about outdoor walking), and encourage behavioural strategies (e.g. provide walking goals in terms of distance or time). Doing so would help meet NICE's recommendation that local authorities "develop walking programmes for adults who are not active enough, based on an accepted theoretical framework for behaviour change" (NICE, 2012, p.18). An example of how to achieve this is illustrated in one of the brochures in the sample. Exeter Walking Map stood out as the brochure having both the highest category-to-instance ratio (24 categories featured comprising 51 textual instances) and the most even distribution of categories across superordinate content areas. This brochure was also devoted to the promotion of walking more generally as opposed to its related recreational walking routes (around the city of Exeter, UK). For example, it outlined physical health consequences (category 14) e.g. 'walking can help you live longer, helps protect you from heart disease, diabetes, cancer, osteoporosis and much more' and included four references to mental health consequences (category 15) e.g. 'walking can activate the happy hormone which makes you feel good, improves your mood and reduces stress'. It contained normative information on benefits to children (category 34) e.g. 'children like to walk to school so they can chat to their friends.' Furthermore, it included text reducing general barriers to walking (category 46) e.g. 'walking need not require any special equipment', and provided guidance on walking goals based on time management (category 63) e.g. 'by walking to work, school, the shops or the station you can get your daily exercise as part of your normal routine'. 
It was also one of only two brochures in the sample to state PA guidelines; in this case providing guidance on how someone could achieve them (category 59): 'Doing 10,000 steps per day will contribute to the recommendation of moderate-intensity physical activity for at least 30 minutes on 5 or more days per week'. This brochure demonstrates how a variety of theory-derived persuasive messages can be incorporated into a recreational walking brochure. Naturally, many more considerations are involved in creating a brochure. The overall layout, typesetting, language style and numerous other features are important in attracting or deterring a potential reader from picking up a brochure or persuading them to change their behaviour (Abraham and Kools, 2012). Nonetheless, the selection of appropriate behavioural antecedents to write into messages remains important (Brawley and Latimer, 2007). --- Strengths, limitations, and future research The main strength of this study is that it produced a flexible taxonomy for analysing materials that advertise recreational PA in a variety of different communication channels such as websites or mobile applications. Furthermore, it has identified for the first time the range of messages used in walking brochures which attempt to attract people to recreate in certain landscapes. The coding taxonomy was designed to facilitate easier analysis of other recreational PA materials by maintaining stable superordinate content areas within which users could define individual categories to suit different environments, PA conventions and cultures. Notwithstanding their geographical specificity, the sample of brochures did nonetheless cover a variety of environments (coastal, rural, city) near smaller and larger conurbations. Some of the brochures detailed long-distance trails. 
While long-distance trails traverse many settlements, they tend not to pass near larger conurbations, meaning that they may not facilitate everyday recreational walking for populations such as those living in urban areas of high deprivation who experience a greater burden of inactivity-related poor health (Ball, 2015). Focusing on how best to promote shorter-distance recreational walking in urban green spaces may be more effective in ameliorating the relative lack of greenspace use by these populations (Jones et al., 2009). While convenience sampling was employed to generate hypotheses about the effectiveness of brochure content, future content analyses of recreational walking materials may, where feasible, wish to employ probability sampling methods to ensure better representativeness. Although the taxonomy was reliable at all levels of the hierarchy, reliability for eight frequently occurring categories could not be established. While this may suggest inadequate sampling, no one of these categories alone accounted for more than 1% of all content, suggesting that further reliability testing may still not have yielded enough instances for confident reliability assessments. Perhaps in the future a combination of traditional presence-or-absence methods (e.g. Abraham et al., 2007) supplemented by line-by-line procedures (e.g. Gainforth et al., 2011) could improve reliability protocols in comparable content analyses. Nevertheless, categories may need to be omitted or revised in any future applications of the taxonomy should they fail to meet acceptable reliability criteria. Developing the categories in the present taxonomy was achieved in part by expanding behaviour change techniques from other taxonomies (Abraham and Michie, 2008; Michie et al., 2013).
This suggests that in any context-specific content analysis, especially those examining materials where health promotion is perhaps a secondary aim, such taxonomies may serve only as a starting point for deriving more relevant message categories. Even with the present taxonomy, categories such as mental health consequences of walking (category 15) could be subdivided into affective benefits, restorative benefits, and spiritual benefits, for instance. Each may be differently persuasive for different readers. In future work, researchers must weigh the competing strengths of comprehensiveness and parsimony when deciding upon message categories. Controlled trials could also use the taxonomy prospectively as a guide to creating intervention materials that target different antecedents of behaviour change, and test with more precision which 'ingredients' are most effective and appealing to different groups (e.g. urban vs. rural dwellers, tourists vs. home-based, disadvantaged vs. affluent communities). Future research might also wish to test different types of brochure in terms of their ability to alter attitudes towards walking or intentions to walk. For example, controlled studies could administer brochures which were identical in style but varied in terms of the type of message employed. This would allow researchers to test how original vs. tailored information could be differently persuasive and thus inform guidelines on how to produce recreational walking brochures. --- CONCLUSION Content in recreational walking brochures sampled from Devon, UK, was coded for the presence of potentially persuasive messages using the coding taxonomy developed here. These brochures' principal persuasive strategies are to guide wayfinding, provide information on amenities and access, and enhance the appeal of various properties of natural environments.
Whilst highlighting attractive properties could motivate inactive people, omitting messages related to the promotion of intentions or self-efficacy and failing to raise normative beliefs may fail to encourage inactive people to engage in recreational walking in natural environments. In future, brochures could utilize a wider variety of message strategies in their text in order to engage such populations. Public health bodies could support the creation of recreational walking brochures to achieve this. --- SUPPLEMENTARY MATERIAL Supplementary material is available at Health Promotion International online. | Although walking for leisure can support health, there has been little systematic attempt to consider how recreational walking is best promoted. In the UK, local authorities create promotional materials for walking networks, but little is known about whether they effectively encourage walking through persuasive messaging. Many of these materials pertain to walks in natural environments which evidence suggests are generally visited less frequently by physically inactive individuals. Consequently the present study explores whether and how recreational walking brochures use persuasive messages in their promotion of walks in natural environments. A coding taxonomy was developed to classify text in recreational walking brochures according to five behavioural content areas and 87 categories of potentially persuasive messages. Reliability of the taxonomy was ascertained and a quantitative content analysis was applied to 26 brochures collected from Devon, UK. Brochures often provided information about an advertised route, highlighted cultural and aesthetic points of interest, and provided directions. Brochures did not use many potentially effective messages. Text seldom prompted behaviour change or built confidence for walking. 
Social norm related information was rarely provided and there was a general lack of information on physical activity and its benefits for health and well-being. The limited range of message strategies used in recreational walking brochures may not optimally facilitate walking in natural environments for inactive people. Future research should examine the effects of theory-informed brochures on walking intentions and behaviour. The taxonomy could be adapted to suit different media and practices surrounding physical activity in natural environments. |
INTRODUCTION Several studies suggest that individuals with more social connections tend to live longer and healthier lives than those with fewer social connections [1][2][3][4]. Several plausible pathways link social relations to health [5]. For example, supportive social relations may buffer the impact of stress by promoting less threatening interpretations of adverse events and providing cues for better coping strategies and emotional and instrumental social support [6]. Moreover, it has been suggested that social relations affect physiological outcomes, such as resting blood pressure, heart rate, stress hormone levels, and immune function [7]. Social relations may also affect health risk behaviours, such as heavy alcohol use, smoking and low physical activity [8,9]. An individual's personal social network may affect their health behaviour by shaping norms and enforcing patterns of social control, by providing health-related information, and by improving an individual's sense of responsibility for their own, as well as others', health and well-being [8]. Although not all social relations are beneficial, and some can even lead to risky health behaviour, larger social networks may, compared to small ones, have the potential to offer more diverse social relations with relatively more positive influences on health behaviours [10,11]. Previous studies among American middle-aged and older adults, for example, have found that a higher number of social ties, being married [12,13] and participation in religious activities [14] are all associated with healthier lifestyles, such as higher levels of physical activity, non-smoking and low levels of alcohol use. A cross-sectional study among patients in cardiac rehabilitation showed a positive association between the number of most important members in a social network and a healthy lifestyle as well as coping efficacy [15].
Similarly, cross-sectional studies among low-income adults [16] and adults at higher risk of diabetes and cardiovascular disease [17] have found larger social networks to be positively associated with physical activity. However, prospective evidence on the role of social network size in predicting long-term health behaviour among adult populations remains scarce. Thus, little is known about how persistent the associations between social network size and health risk behaviours are. In the present study, based on two occupational cohorts and one population-based cohort of working-age adults, we used repeated measurements of health risk behaviours over a 15-20-year follow-up to examine whether the size of the social network at baseline was associated with persistent differences in health risk behaviours over time. We hypothesized that compared to participants with large social networks, those with smaller social networks would be more likely to have unfavourable patterns of health behaviours over time, as indicated by heavy alcohol use, smoking, and low physical activity. We also hypothesized that health risk behaviours would accumulate among those with a small social network. Since sociodemographic factors are also associated with both social networks and health behaviour, we also examined the association of network size with health risk behaviours by gender, age group and educational level [5,[18][19][20]. --- METHODS --- Participants We used data from three cohort studies: the Raisio-Turku cohort and the Hospital cohort from the Finnish Public Sector study (FPS) [21] and the Health and Social Support Study (HeSSup) [22]. The Raisio-Turku cohort was established in 1990. The studies were conducted according to the principles of the Declaration of Helsinki.
The Raisio-Turku and the HeSSup studies were approved by the Turku University Hospital Ethics Committee and the Hospital study by the ethics committee of the Finnish Institute of Occupational Health. --- Measurement of social network size Social network size was assessed in all cohorts at baseline using the social convoy model described by Antonucci [23]. Participants were asked to write the initials of their social network members on three concentric circles. The people who were closest and most important to the respondent, without whom life would be hard to imagine, were placed in the innermost circle. The people who were not quite that close but still important were placed in the middle circle, and those not already mentioned, but who were close and important enough to belong to their personal network, were placed in the outer circle. The total number of members in these circles was calculated and classified into three categories based on the data distribution: 0-10 (corresponding to the threshold at the lowest quartile), 11-20, and at least 21 members (corresponding to the threshold at the highest quartile). A similar categorization of social network size has been used previously [24]. The convoy model has been used successfully among people of different age ranges and from different countries [18], and has been shown to have relatively good test-retest reliability over time [24]. --- Measurement of health risk behaviours Baseline and follow-up information on health risk behaviours (heavy alcohol consumption, smoking and low physical activity) was drawn from the questionnaires. Three dichotomous variables of health risk behaviours were created on the basis of similar questions used in all cohorts and over time. Alcohol use, expressed as absolute ethanol in grams/week, was estimated on the basis of the reported average consumption of beer, wine and/or spirits.
The cut-off point of heavy alcohol use was set at 288g/week for men and 192g/week for women as proposed by the Finnish guidelines [25]. These limits also correspond with the medium risk levels of daily consumption presented by the World Health Organization [26]. Smoking status was categorized into non-smokers (including former smokers) and current smokers. Information regarding average time spent in physical activities with different intensities was used to estimate average metabolic equivalent (MET) hours/week [27]. Participants whose physical activity corresponded to less than 14 MET hours/week were regarded as having a low level of physical activity [27]. In addition, a summary variable (overall unhealthy lifestyle score) was created at each wave by summing up the total number of each participant's health risk behaviours (heavy alcohol use, smoking and low physical activity) into a measure of none to three risk behaviours. --- Measurement of potential confounders Age, gender, education and chronic conditions at baseline were selected as potential confounders on the basis of an a priori assumption that these factors are associated with both social relations and health behaviours [5,[18][19][20]28]. Information on education was based on the highest self-reported vocational education classified into three categories: basic, intermediate and high. Information regarding chronic conditions at baseline (diabetes, rheumatoid arthritis, asthma, coronary heart disease) was obtained from the National Drug Reimbursement Register and diagnosis of cancer (within five years) from the Finnish Cancer Registry. The total number of these conditions was calculated and classified into no chronic conditions and at least one chronic condition. --- Statistical analyses Descriptive statistics were calculated to evaluate baseline characteristics of all study participants in each cohort, and by social network size. 
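A minimal sketch of the measurement coding described in this section, assuming the convoy-circle counts and questionnaire-derived quantities are already available. The function and variable names are hypothetical; the cut-offs are taken from the text, although whether consumption exactly at the alcohol cut-off counts as 'heavy' is an assumption here.

```python
def network_size_category(inner, middle, outer):
    """Convoy-model network size, banded as in the text: 0-10, 11-20, 21+."""
    total = inner + middle + outer
    if total <= 10:
        return "0-10"
    if total <= 20:
        return "11-20"
    return "21+"

def risk_behaviours(sex, alcohol_g_per_week, current_smoker, met_hours_per_week):
    """Dichotomize the three health risk behaviours and sum the 0-3 score.

    Cut-offs from the text: heavy alcohol use at 288 g/week (men) or
    192 g/week (women); current smoking; low physical activity below
    14 MET hours/week. Treating the cut-off itself as heavy use (>=)
    is an assumption.
    """
    flags = {
        "heavy_alcohol": alcohol_g_per_week >= (288 if sex == "male" else 192),
        "smoking": bool(current_smoker),
        "low_physical_activity": met_hours_per_week < 14,
    }
    # Overall unhealthy lifestyle score: number of risk behaviours (0-3).
    flags["lifestyle_score"] = sum(flags.values())
    return flags

print(network_size_category(3, 4, 2))  # → "0-10"
print(risk_behaviours("female", 200, current_smoker=True, met_hours_per_week=20))
```

The second call flags heavy alcohol use (200 g/week exceeds the 192 g/week cut-off for women) and smoking but not low physical activity, giving a lifestyle score of 2.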
Differences in these characteristics by social network size were assessed using the Kruskal-Wallis test for continuous variables and the chi-square test for categorical variables. Relative risks (RR) with 95% confidence intervals (CI) of health risk behaviours across the follow-up periods were calculated in each cohort by means of repeated-measures log-binomial regression analysis using the generalized estimating equations (GEE) method [29]. The GEE method enables the analysis of correlated data arising from a longitudinal study with repeated measurements on the same subject. Those with at least 21 members in their social network at baseline were used as the reference group. Three types of models were fitted in each cohort: 1) models adjusted for age, gender and survey year, with each health risk behaviour (heavy alcohol use, smoking and low physical activity) as a dependent variable; 2) models further adjusted for education and chronic conditions; and 3) cumulative logistic regression models with the total number of health risk behaviours (overall unhealthy lifestyle score ranging between 0 and 3) as the dependent variable, adjusted for age, gender, survey year, education and chronic conditions. Trends in health risk behaviours according to baseline social network size were examined over the 10-year period, treating year as a continuous variable, to assess whether the potential changes in risk differed between the groups. After separate analyses in each cohort, fixed-effects meta-analysis [30] was used to pool the cohort-specific results into summary estimates. Fixed-effect analysis was chosen because the number of studies was small, which results in poor precision of the between-studies variance estimate; in such cases, the random-effects model may not be applied correctly [31]. However, random-effects models were also fitted in order to verify the consistency of the results between the two methods.
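The inverse-variance fixed-effect pooling step can be sketched in a few lines. This is an illustrative implementation, not the authors' code, and the cohort-specific RRs and confidence intervals in the usage line are made-up numbers:

```python
import math

def pool_fixed_effect(estimates):
    """Pool (RR, ci_low, ci_high) triples by inverse-variance weighting on the log scale."""
    weights, weighted = [], []
    for rr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log(RR) recovered from the 95% CI
        w = 1.0 / se ** 2
        weights.append(w)
        weighted.append(w * math.log(rr))
    log_rr = sum(weighted) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(log_rr),
            math.exp(log_rr - 1.96 * se_pooled),
            math.exp(log_rr + 1.96 * se_pooled))

# Hypothetical cohort-specific RRs for one behaviour (not the study's values):
pooled, lo, hi = pool_fixed_effect([(1.10, 0.95, 1.27), (1.20, 1.05, 1.37), (1.18, 1.08, 1.29)])
```

Pooling three cohorts with identical estimates returns the same RR with a narrower confidence interval, which is a useful sanity check on any implementation of this kind.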
Finally, stratified analyses of the associations between baseline social network size and health risk behaviours over time were performed by gender, age group (<50 vs. ≥50 years) and education (basic and intermediate vs. high). In order to test whether selective drop-out during the follow-up affected the results, we performed a sensitivity analysis including only those participants who had responded to both the first and the last questionnaires. Statistical analyses were performed with the use of SAS software, version 9.4 (SAS Institute Inc., Cary NC) and the R statistical package (R version 3.2.3). --- RESULTS Table 1 shows the baseline characteristics of the three cohorts (for descriptive statistics according to social network size, see Appendix A, Tables A.1-A.3). The follow-up period extended up to 20 years, including, on average, 3-5 repeat measurements depending on the cohort (range 2 to 6). Figure 1 shows the results from meta-analyses of each health risk behaviour separately and of the summary variable of overall unhealthy lifestyle score (total number of health risk behaviours ranging between 0 and 3), with summary estimates for the pooled results of the three cohorts adjusted for age, gender, survey year, chronic conditions, and education. Compared with participants with at least 21 network members, those with 0-10 members in their social network were at a significantly higher risk of heavy alcohol use (RR=1.15, 95% CI: 1.06, 1.24), smoking (RR=1.19, 95% CI: 1.12, 1.27) and low physical activity (RR=1.25, 95% CI: 1.21, 1.29).
a heavy alcohol use defined as weekly consumption of absolute ethanol exceeding 192g among women and 288g among men b low physical activity as metabolic equivalent (MET) hours less than 14/week c cumulative odds ratio (OR) for overall unhealthy lifestyle score (total number of health risk behaviours ranging from 0 to 3) There was no clear difference in trends of health risk behaviours over time between those with 0-10 members and those with at least 21 members in their total social network (Appendix C, Figures C.1-C.3). If anything, the risk of heavy alcohol use increased slightly more among those with at least 21 members in their social network than among those with the smallest social networks over the ten-year period examined (Table 2). On the other hand, additional analyses of participants with a healthy lifestyle at baseline (none of the studied health risk behaviours) showed that health risk behaviours accumulated differently according to the size of the social network. Compared with participants with at least 21 members in their social network, those with 0-10 members were at a higher risk of an overall unhealthy lifestyle over the follow-up period (cOR=1.26, 95% CI: 1.16, 1.38) (data not shown). Table 2. Trends in health risk behaviours according to social network size examined over the 10-year period, treating year as a continuous variable. Relative risks (RR) with 95% confidence intervals (CI) are derived from repeated-measures log-binomial regression analysis using the generalized estimating equations (GEE) method. Summary estimates are pooled from the cohort-specific (Raisio-Turku, Hospital and HeSSup cohorts) results. --- DISCUSSION Our findings from two occupational cohorts and one population-based cohort from Finland suggest that smaller social networks are associated with persistently more unhealthy behaviours over the adult life course.
Compared with individuals with at least 21 members in their social network at baseline, those with up to 10 members were at a significantly higher risk of being heavy alcohol users, smokers or physically inactive over the follow-up period extending up to 15-20 years. In addition, these individuals were at a higher risk of having multiple risk factors as part of an overall unhealthy lifestyle score. Our findings are consistent with previous, mainly cross-sectional studies on the association between social networks and health risk behaviours [12,13,[15][16][17]20,[32][33][34]. Previous studies have shown, for example, that individuals who drink heavily report decreased levels of social activities, worse social anchorage and low contact frequency [32]. Our results are also in line with those reporting a significant association between smoking and social isolation, low levels of social support, participation and network heterogeneity [33,34]. It has been suggested that for some people smoking provides a means of managing negative moods and stress that might result from having inadequate social relations [35]. None of these studies, however, has addressed the question of the persistence of the associations between social network size and smoking or heavy drinking. An association with physical inactivity has previously been reported for various measures of low social engagement, such as low social integration and a small number of friends and close network members [12,13,16,17,20]. Similarly, our results highlight the importance of social network size for physical activity, which showed the strongest and most robust association in the present study. Potential mechanisms linking social networks and physical activity include the higher levels of social support offered by a larger network, the establishment of social norms, the provision of resources, and encouragement for activity [36].
On the other hand, it could be speculated that those who are more physically active obtain more social contacts through their participation in leisure activities. However, as the difference in the risk of being physically inactive according to social network size persisted over the follow-up period, it is also possible that having a larger social network promotes a physically active lifestyle over time. It is noteworthy that social relations may also discourage a healthy lifestyle. For example, those who are closely connected to smokers are more likely to smoke themselves, and conversely, a decision to quit smoking is affected by the choices made in groups of inter-connected people [34]. Drinking habits are likewise strongly influenced by the drinking habits of one's social network [37]. In the present study, no information regarding the attitudes or health risk behaviours of social network members was available. Yet, social network size at baseline was a robust predictor of these health risk behaviours over time. Women tend to have larger social networks than men, as do better-educated people compared with the less-educated and, to a lesser extent, younger adults compared with the elderly [5]. Some studies have reported the associations between social relations and health behaviour to be stronger among people in lower socio-economic positions than among those in higher ones [20]. In line with this observation, we found a tendency toward a stronger association between social network size and health risk behaviours among participants with basic or intermediate education compared with those with high education. These differences, however, did not reach statistical significance. The effects of social relations are likely to accumulate and create a growing advantage or disadvantage for health [5]. However, with respect to health risk behaviours, we found no evidence of accumulation according to social network size over time.
The change in the prevalence of the separate health risk behaviours did not differ significantly between participants with small networks and those with larger networks. It is possible that the age range of the participants in the present study (20 to 63 years) represents a phase of life that is relatively stable with respect to social relations, potentially diminishing the likelihood of clear differences in separate health risk behaviours between the groups. Follow-up periods extending over critical life transitions, such as changes in marital status or retirement, might provide more specific information regarding the contribution of social relations to trajectories of separate health risk behaviours. In addition, more detailed information on the various dimensions of social networks might be more efficient in predicting separate health risk behaviours. --- Strengths and limitations The strengths of this study were that we were able to use data from three large cohorts of working-aged adults with long follow-up periods and repeated measurements of health risk behaviours. Information regarding sociodemographic factors and chronic conditions was also readily available. However, some limitations should be considered. First, behavioural outcomes were assessed by self-report, which may be subject to bias: under-reporting in some health behaviours (e.g. smoking, alcohol use) and over-reporting in others (e.g. physical activity). The information regarding social network size was similarly based on self-report, and may thus not correspond to the actual number of members in the social network but depend on the person's willingness to provide details of their social network. On the other hand, the importance (and closeness) of social relationships is always more or less based on subjective assessment, and may be difficult to evaluate objectively.
Another limitation was that social network size was only assessed at baseline, and therefore it was not possible to evaluate how changes in network size may have contributed to changes in health risk behaviours over the follow-up period. However, previous studies have shown that social relations are relatively stable across adulthood [38], which is also likely to be the case in the working-aged study population of the present study. Selective drop-out during the follow-up was another potentially important limitation. However, our sensitivity analyses, including only those participants who provided information about their health risk behaviours in both the first and the last questionnaire, showed unchanged results compared with the whole study population. Further, although we controlled for major potential confounders, e.g. chronic conditions and education, residual confounding can never be ruled out in observational studies such as ours. Finally, clustering of participants in geographic regions could potentially affect the results if the participants remained in the same regions. However, during the two decades of follow-up of health behaviours, many cohort members moved from their baseline residential regions. The fact that the same pattern was found in the occupational cohorts and in the population cohort, which was not drawn from specific geographic regions, further suggests that clustering of participants in geographic regions is an unlikely source of major bias. --- Conclusion In conclusion, the data from three longitudinal cohort studies of working-aged adults suggest a sustained association between small social networks at baseline and an increased likelihood of persistent risky alcohol use, smoking, and low physical activity over a follow-up of up to 15-20 years, as compared with those who had large networks.
The findings of the present study may serve as a rationale for designing public health interventions that focus on strengthening social networks in order to support beneficial health behaviour patterns. However, further follow-up studies are needed to assess which specific factors of social networks (e.g. size of the total social network, closeness or other qualities of the relations) have the greatest effect, and whether changes in these factors have an impact on the trajectories of health risk behaviours and, ultimately, on health outcomes. --- Competing interests The authors have no competing interests to report. --- Abstract To determine the associations between social network size and subsequent long-term health behaviour patterns, as indicated by alcohol use, smoking, and physical activity. Repeat data from up to six surveys over a 15- or 20-year follow-up were drawn from the Finnish Public Sector study (Raisio-Turku cohort, n=986; Hospital cohort, n=7307), and the Health and Social Support study (n=20115). Social network size was determined at baseline, and health risk behaviours were assessed using repeated data from baseline and follow-up. We pooled cohort-specific results from repeated-measures log-binomial regression with the generalized estimating equations (GEE) method using fixed-effects meta-analysis. Participants with up to 10 members in their social network at baseline had an unhealthy risk factor profile throughout the follow-up. The pooled relative risks adjusted for age, gender, survey year, chronic conditions and education were 1.15 for heavy alcohol use (95% CI: 1.06-1.24), 1.19 for smoking (95% CI: 1.12-1.27), and 1.25 for low physical activity (95% CI: 1.21-1.29), as compared with those with more than 20 members in their social network. These associations appeared to be similar in subgroups stratified according to gender, age and education. Social network size predicted persistent behaviour-related health risk patterns for up to two decades.
Background Nutrition has a major impact on people's health and is closely tied to social and cognitive development, particularly in children's formative years [1,2]. In environments with low income and limited social resources, children cannot receive the full recommended age-appropriate nutrition [1]. Suboptimal infant and young child feeding (IYCF) practices remain a serious public health problem among children [3]. To address these concerns, complementary feeding should be started in children aged 6 months and above [2]. Complementary feeding is the introduction of liquids and other foods alongside breast milk for children aged 6-23 months [4]. The World Health Organization (WHO) defines the minimum acceptable diet (MAD) for children aged 6-23 months as a combination of minimum meal frequency and minimum dietary diversity, in both breastfeeding and non-breastfeeding children [5,6]. In many countries, fewer than one-quarter of children are reported to be getting the nutrition they need to grow well, particularly in the crucial first 1000 days [7,8]. Child undernutrition is a major public health problem in many resource-poor communities in the world [3]. Among children aged 6 to 23 months from low socioeconomic backgrounds, only one in five is fed the minimum recommended diverse diet, which is one component of MAD [8]. The first 2 years of a child's life provide an opportunity to ensure the growth, development and survival of the child through optimal infant and young child feeding (IYCF) practices [4]. Inappropriate IYCF practices during this period therefore pose significant threats to child health through compromised educational achievement, impaired cognitive development, and low economic productivity, which become difficult to reverse later in life [4,9,10]. Inappropriate feeding practices during the first 2 years of life are a cause of more than two-thirds of malnutrition-related child deaths [11].
Malnutrition is linked to around half of all deaths of under-five children each year [4,8]. Optimal complementary feeding practices prevent approximately one-third of child mortality [12]. Research has shown that children in sub-Saharan Africa lost up to 2.5 years of schooling when famine occurred while they were in utero or during their childhood [8]. Even though the minimum acceptable diet problem has multiple causes, it is widely agreed that inadequate IYCF due to socioeconomic inequalities is one of the most immediate determinants [1,13]. Socio-economic inequalities in child nutrition are a health-equity concern because they result from factors considered to be both avoidable and unfair [14]. The global burden of childhood undernutrition is concentrated in low-income and lower-middle-income countries and forms a vicious cycle with their economic status [15]. In countries with low socio-economic status and inadequate food and resources, children cannot reach their full growth and developmental potential [1]. Of the 385 million children living in extreme poverty around the world, about half live in sub-Saharan Africa, while over a third live in South Asia [8]. According to the Global Nutrition Report in 2020, there were inequalities in dietary diversity, meal frequency, and minimum acceptable diet: children from the richest households do far better, as do children of more educated mothers and those who live in urban areas [15,16]. There was an 11.5% wealth gap, a 4.9% location gap and a 7.7% education gap in minimum acceptable diet intake [16]. Many interventions have been undertaken to overcome these problems [16,17]. The United Nations (UN) Secretary-General launched the Zero Hunger Challenge for children, with objectives such as 100% access to adequate food all year round, zero stunted children under 2 years, and sustainability of all food systems [17].
The World Health Organization (WHO) has set out strategies for complementary feeding practice, including multiple micronutrient powders for home fortification of foods and vitamin A supplementation for children 6-23 months of age [18]. Despite the many interventions that have been undertaken, minimum acceptable diet usage is still low [8]. Therefore, identifying and reducing the avoidable socioeconomic inequalities in minimum acceptable diet intake, and the factors contributing to them, is an important issue in improving the overall health and well-being of the child [14]. There have been studies reporting the burden and determining factors of childhood MAD usage in different countries of sub-Saharan Africa, but those studies used regionally varied local food items to assess MAD intake among children, which makes it difficult to produce pooled estimates and regional comparisons. This study, by contrast, used the most recent standard DHS datasets, which were collected with a similar design and standardized parameters, making it possible to derive a pooled prevalence of MAD intake among children. Therefore, this study aimed to assess the pooled prevalence of MAD intake, the level of socio-economic inequalities in it, and the factors contributing to those inequalities among children aged 6-23 months in SSA countries.
It will be a crucial point for policymakers to know the child nutrition status in the region, so that child nutrition policy can be drafted and action taken based on the evidence. --- Methods --- Study design, setting, and period The data source for this study was the most recent standard DHS data of Sub-Saharan African countries collected within 10 years (2010-2020); the DHS is a cross-sectional survey conducted at roughly five-year intervals to generate updated health and health-related indicators. Sub-Saharan Africa is the area of the African continent that lies south of the Sahara and consists of four geographically distinct regions, namely Eastern Africa, Central Africa, Western Africa and Southern Africa [19]. Economically, according to the 2019 World Bank list of economies, these countries are classified as low income (Burundi, Comoros, Ethiopia, Malawi, Mozambique, Rwanda, Tanzania, Uganda, Zambia, Zimbabwe, Cameroon, Chad, the Democratic Republic of the Congo, Gabon, Benin, Burkina Faso, Gambia, Guinea, Liberia, Mali, Niger, Senegal, Sierra Leone, Togo), lower middle income (Kenya, Congo, Zambia, Ivory Coast, Ghana, Lesotho, and Nigeria), and upper-middle income (Angola, Namibia, and South Africa) [20]. Together they have a total population of 1.1 billion inhabitants [21]. The datasets are publicly available from the DHS website www.dhsprogram.com [19]. DHS collects data that are comparable across countries. The surveys are nationally representative of each country and population-based with large sample sizes. All surveys use a multi-stage cluster sampling method [22]. --- Population The source population was all children aged 6-23 months within the 5 years preceding the survey across 33 Sub-Saharan African countries. The study population comprised children aged 6-23 months within the 5 years preceding the survey in the selected Enumeration Areas (EAs) whose mother or caregiver was interviewed for the survey in each country.
Mothers who had more than one child within the 2 years preceding the survey were asked questions about the most recent child [23]. --- Sampling procedures and sample size A total of 47 countries are located in sub-Saharan Africa. Of these, only 41 countries had a Demographic and Health Survey report. Five countries that did not have a survey report after the 2010/2011 survey year were excluded: the Central African Republic (DHS report 1994/95), Eswatini (2006/07), Sao Tome and Principe (2008/09), Madagascar (2008/09), and Sudan (1989/90). In addition, three sub-Saharan countries (Botswana, Mauritania, and Eritrea) were excluded because their datasets were not publicly available. After excluding countries without a DHS report after 2010 and countries whose DHS dataset was not publicly available, a total of 33 countries were included in this study. Typically, DHS samples are stratified by geographic region and by urban/rural areas within each region. DHS sample designs are usually two-stage probability samples drawn from an existing sample frame. Enumeration Areas (EAs) were the sampling units for the first stage of sampling; in the selected EAs, households (HHs) comprised the second stage. Following the listing of the households, a fixed number of households was selected by equal-probability systematic sampling in each selected cluster [22]. The detailed sampling procedure is available in each DHS report from the Measure DHS website (www.dhsprogram.com) [22]. Weighted values were used to restore the representativeness of the sample data and were calculated from the children's records (KR) DHS datasets. Finally, a total weighted sample of 78,542 children aged 6-23 months from all 33 countries was included in this study [Table 1].
--- Study variables Dependent variable The outcome variable of this study was minimum acceptable diet (MAD) intake among children aged 6-23 months, defined as meeting both minimum meal frequency and minimum dietary diversity, in both breastfeeding and non-breastfeeding children. During the survey, the mother was asked about the types and frequency of food the child had consumed during the day or night before the interview [22]. A child who consumed at least four of the following seven food groups during the day or night preceding the survey was considered to have achieved minimum dietary diversity: grains, roots and tubers; legumes and nuts; dairy products (milk, yogurt, and cheese); flesh foods (meat, fish, poultry, and liver/organ meats); eggs; vitamin A-rich fruits and vegetables; and other fruits and vegetables. Minimum meal frequency is the provision of solid, semi-solid, or soft foods two or more times per day for breastfed children aged 6-8 months, three or more times for breastfed children aged 9-23 months, and four or more times for non-breastfed children. The data for these variables were collected in the same way across all SSA countries [6,22]. Since minimum meal frequency has different cut-off values for different age groups and for breastfed versus non-breastfed children, the overall meal frequency was computed for each group separately and then combined. --- Independent variables Socio-demographic factors (marital status and household family size), socioeconomic factors (educational attainment of women, occupation of women, and country income status), health behaviour factors (media exposure and breastfeeding status), and geographical factors (place of residence and subregion in SSA) were all taken into account. Country income status was categorized as low income, lower middle income, and upper-middle income based on the World Bank List of Economies classification of 2019 [20].
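Returning to the dependent variable, the MAD rules described above can be sketched as a small classifier. This is illustrative only: the field names are hypothetical, and the additional milk-feed requirement that the full WHO indicator imposes on non-breastfed children is omitted because the text defines minimum meal frequency by feeding count alone.

```python
FOOD_GROUPS = {"grains_roots_tubers", "legumes_nuts", "dairy",
               "flesh_foods", "eggs", "vitamin_a_fruits_veg", "other_fruits_veg"}

def meets_mad(age_months, breastfed, groups_eaten, feedings_per_day):
    """Minimum acceptable diet = minimum dietary diversity AND minimum meal frequency."""
    if not 6 <= age_months <= 23:
        return False                      # indicator defined for ages 6-23 months only
    # Minimum dietary diversity: at least 4 of the 7 food groups
    mdd = len(set(groups_eaten) & FOOD_GROUPS) >= 4
    # Minimum meal frequency: depends on age and breastfeeding status
    if breastfed:
        mmf = feedings_per_day >= (2 if age_months <= 8 else 3)
    else:
        mmf = feedings_per_day >= 4
    return mdd and mmf
```

For example, a breastfed 7-month-old fed from four food groups twice a day meets MAD, whereas a non-breastfed 12-month-old fed three times a day does not, however diverse the diet.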
The World Bank calculates country income based on Gross National Income (GNI) per capita, categorized as low income ($1025 or less), lower middle income ($1026-3995), upper middle income ($3996-12,375), and high income (more than $12,375) [20]. --- Data processing and analysis This study was performed using DHS data obtained from the official DHS Measure website www.measuredhs.com after permission was obtained via an online request specifying the study objectives. Data were downloaded from the DHS dataset in standard STATA format, then cleaned, integrated, transformed, and appended to produce the variables required for the analysis. Microsoft Excel and STATA 16 software were used to generate both descriptive and analytic statistics for the appended data of the 33 countries, describing the study variables with statistical measures. The variance inflation factor (VIF) was used to detect multicollinearity; all variables had VIF values less than 10, and the mean VIF of the final model was 1.57. The pooled estimate of MAD intake among children in Sub-Saharan Africa and its sub-regions was estimated using the metan STATA command. --- Model building Concentration curve and index The concentration index and concentration curve approach is used to examine socioeconomic inequalities in health outcomes [24,25]. The concentration curve is used to identify whether socioeconomic inequality in a health variable exists and whether it is more pronounced at one point of the distribution than another. It displays the share of the health variable accounted for by cumulative proportions of individuals in the population, ranked from the poorest to the richest [25,26]. The two key variables underlying the concentration curve are the health variable of interest and the variable capturing living standards, against whose distribution it is examined [27].
The concentration curve plots the cumulative percentage of MAD usage (y-axis) against the cumulative percentage of children aged 6-23 months, ranked by household living standards from the poorest to the richest (x-axis) [27]. A concentration curve lying on the 45° line running from the bottom left-hand corner to the top right-hand corner indicates the absence of inequality. A concentration curve lying above the equality line (45°) indicates that MAD intake is disproportionately concentrated among the poor, whereas a curve below the equality line indicates concentration among the rich [28]. To quantify and compare the degree of socioeconomic inequality in MAD intake, the concentration index (C) is used [26,29]; it is twice the area between the concentration curve and the line of equality and ranges from -1 to +1. The sign indicates the direction of the relationship between MAD intake and the distribution of living standards (wealth status) [25,27]. In the concentration index formula, h_i is the health outcome (MAD intake in this study), μ is the mean of h_i, and n is the number of people. R_i represents the fractional rank of individual i in the living-standards distribution (the wealth index), with i taking the value 1 for the poorest and n for the richest [27,29,30]. As a result, C > 0 indicates that MAD intake is disproportionately concentrated among the rich (pro-rich), and C < 0 indicates that it is disproportionately concentrated among the poor (pro-poor) [27,28], whereas C = 0 indicates that the distribution is proportionate. Accordingly, C = 1 indicates that all MAD intake is concentrated in the richest person, whereas C = -1 indicates that it is all concentrated in the poorest person [27,30]. However, because the outcome variable in the present study is binary (MAD taken/not taken), the bounds of C depend on the mean (μ) of the outcome variable and do not span the full range from -1 to 1.
Thus the bounds of C vary between μ-1 (lower bound) and 1-μ (upper bound), and the interval shrinks as the mean (μ) increases. As a correction, the present study applied the Wagstaff normalization, calculating the normalized concentration index by dividing C by 1 minus the mean (1-μ) [27,30]. The concentration index itself is given by

C = (2 / (nμ)) Σ(i=1..n) h_i R_i - 1

--- Wagstaff decomposition analysis Wagstaff-type decomposition analysis was performed for the variables that were statistically significant in the multi-level analysis or judged clinically significant, after the concentration index and curve had been assessed and had shown income-related inequality in the magnitude of MAD usage. The Wagstaff-type decomposition analysis quantifies the degree of income-related inequality in minimum acceptable diet intake and explains the contribution of each factor to the observed inequality [31]. The concentration index (C) is decomposed based on a regression analysis of the relationship between the outcome variable and a set of determinants. The overall concentration index can be decomposed into k social-determinant contributions, where each determinant's contribution is obtained by multiplying the sensitivity of the outcome (MAD) to that determinant by the degree of income-related inequality in that determinant [27,32]. Based on a linear additive regression model, the concentration index for minimum acceptable diet intake (y) can be expressed as C = Σ_k (β_k x̄_k / μ) C_k + GC_ε / μ, where μ is the mean of y, x̄_k is the mean of x_k, β_k is its regression coefficient, C_k is the concentration index of x_k, and GC_ε is the generalized concentration index for the error term (ε). The overall concentration index of MAD intake (y) thus includes the explained part, which is the sum of the contributions of the k determinants, and the unexplained part (the residual). Based on the Wagstaff normalization, the normalized decomposition of the concentration index is obtained by dividing the concentration index by 1-μ [30].
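The concentration index and its Wagstaff normalization described in this section can be computed directly from individual records. A minimal sketch with toy data follows; the fractional-rank convention R_i = (2i - 1)/(2n) is a common choice and an assumption here, not something the text specifies:

```python
def concentration_index(outcome, wealth):
    """Raw and Wagstaff-normalized concentration index for a binary outcome.

    outcome: 1 if the child met MAD, else 0; wealth: living-standards
    measure used to rank households from poorest to richest.
    """
    n = len(outcome)
    order = sorted(range(n), key=lambda i: wealth[i])
    rank = [0.0] * n
    for pos, i in enumerate(order):
        rank[i] = (2 * (pos + 1) - 1) / (2 * n)      # fractional rank in (0, 1)
    mu = sum(outcome) / n
    c = (2.0 / (n * mu)) * sum(h * r for h, r in zip(outcome, rank)) - 1
    return c, c / (1 - mu)                           # raw C and Wagstaff-normalized C

# Toy data: all MAD intake in the two richest of five households (strongly pro-rich);
# here c is approximately 0.6 and the normalized index approximately 1.0.
c, c_norm = concentration_index([0, 0, 0, 1, 1], [100, 200, 300, 400, 500])
```

Note how the normalization matters for a binary outcome: with mean intake of 0.4 the raw index cannot exceed 1 - 0.4 = 0.6, and dividing by that bound restores the full interpretive range.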
The absolute contribution is expressed in the same unit as C, whereas the relative contribution is the percentage of each covariate's C relative to the total observed income-related inequality in MAD.

--- Data quality control

The DHS data are comparable across countries, and missing values are clearly defined by the DHS guideline. Missing and "don't know" responses for breastfeeding were assumed to indicate not breastfeeding, but records with such responses for specific foods were excluded from further analysis [22]. The magnitude of MAD usage among children in each country was compared with the respective DHS report.

C_normalized = C / (1 - μ)

C = Σ_k (β_k x̄_k / μ) C_k + GC_ε / μ

--- Results

--- Socio-demographic characteristics of mothers or caregivers

--- The pooled magnitude of minimum acceptable diet intake among children aged 6-23 months

The overall pooled estimate of minimum acceptable diet intake among children aged 6-23 months in sub-Saharan African countries was 9.87% (95% CI: 8.57, 11.21%), with I² = 97.8%, ranging from 3.10% in Guinea to 20.40% in Kenya. Moreover, the pooled magnitude of MAD intake across country income levels was determined. The pooled estimate of MAD intake was 8.99% (95% CI: 7.39, 10.59%) in low-income countries, 11.75% (95% CI: 8.96, 14.53%) in lower-middle-income countries, and 10.96% (95% CI: 8.84, 13.04%) in upper-middle-income countries (Fig. 1).

--- Wealth-related inequality in minimum acceptable diet usage

Concentration index and curve

The concentration index is used to quantify the degree and show the direction of socio-economic-related inequality in a health variable. A negative value indicates a greater concentration of MAD intake among the poor, whereas a positive value indicates concentration among the rich. In this study, the overall Wagstaff-normalized concentration index (C) analysis of the wealth-related inequality of MAD showed a pro-rich distribution of MAD intake [C = 0.191; 95% CI: 0.189, 0.193].
This shows that MAD intake among children aged 6-23 months was disproportionately concentrated in the richer groups (pro-rich). The concentration index is twice the area between the concentration curve and the diagonal line (Fig. 2). Multiplying C by 75 (0.191 × 75 = 14.33) shows that 14.33% of MAD intake would need to be (linearly) redistributed from the richer half to the poorer half of the population to arrive at a distribution with an index value of zero (perfect equality). (Fig. 1: forest plot of the pooled magnitude of MAD intake among children aged 6-23 months in SSA by income status.) The finding from the indices agrees with the results of the concentration curves in Fig. 2. Similarly, the concentration curve showed that the graph of minimum acceptable diet usage lay below the line of equality, indicating that the distribution of children receiving a minimum acceptable diet was concentrated in rich households (pro-rich distribution) [Fig. 2]. The wealth-related inequality of MAD intake was significantly higher among urban (0.134) than rural (0.125) residents (p-value = 0.013); similarly, the concentration curve of MAD intake among children aged 6-23 months living in urban residences lay below the curve for rural residences (Fig. 3).

--- The Wagstaff decomposition analysis

After the concentration index and curve were assessed and showed income-related inequality in MAD intake, Wagstaff-type decomposition analysis was fitted for those variables that were statistically significant in the multi-level analysis and for clinically important variables for wealth-related changes. The Wagstaff-type decomposition analysis is used to decompose the overall income-related inequality in MAD intake by variable and explains the contribution of each factor to the observed inequality.
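The decomposition bookkeeping described here, elasticity β_k·x̄_k/μ, absolute contribution η_k·C_k, and percent contribution relative to the overall C, can be sketched as follows (illustrative Python with made-up inputs, not the study's actual estimates):

```python
def decompose_contributions(betas, xbars, cks, mu, c_total):
    """Wagstaff-type decomposition bookkeeping per determinant k:
    elasticity  eta_k = beta_k * xbar_k / mu
    absolute    eta_k * C_k
    percent     100 * absolute / overall C."""
    rows = []
    for beta, xbar, ck in zip(betas, xbars, cks):
        eta = beta * xbar / mu
        absolute = eta * ck
        rows.append({
            "elasticity": eta,
            "absolute": absolute,
            "percent": 100.0 * absolute / c_total,
        })
    return rows


# Hypothetical single determinant: coefficient 0.2, mean 0.5,
# concentration index 0.4, with outcome mean 0.1 and overall C of 0.2.
rows = decompose_contributions([0.2], [0.5], [0.4], mu=0.1, c_total=0.2)
```

A percent contribution above 100% simply means the determinant's contribution exceeds the overall index and is offset by negative contributions from other factors or the residual.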
Table 3 shows the Wagstaff decomposition analysis of the contribution of the various explanatory variables to wealth inequalities in MAD intake among children aged 6-23 months in sub-Saharan African countries. The table contains the coefficient, elasticity, concentration index (C), absolute contribution, and percent contribution for each variable. Elasticity is the sensitivity of MAD intake to each factor. The concentration index for each variable gives the degree and direction of socio-economic-related inequality in MAD intake corresponding to that explanatory variable; a negative C indicates a greater concentration of MAD intake among the poor, whereas a positive value indicates concentration among the rich. The absolute contribution is calculated by multiplying the elasticity by the concentration index of each factor and indicates the extent of inequality contributed by that explanatory variable, whereas the percent contribution is the contribution of each variable to the overall concentration index. In this study, more than half (55.55%) of the wealth-related inequality in children's MAD intake was explained by the combination of variables fitted in the model. Geography-related factors contributed most (35.58%) of the pro-rich wealth-related inequality in MAD usage among children. More than one-third (36.12%) of the pro-rich inequality in children's MAD intake was explained by residence. Having media exposure explained nearly one-fourth (23.93%) of the pro-rich wealth-related inequality for children who had taken MAD. A further 11.63% of the estimated pro-rich inequality in MAD usage was explained by secondary maternal educational status [Table 3].

--- Discussion

Inadequate infant and young child feeding (IYCF) practices are major determinants of undernutrition and of suboptimal growth and development, especially in the first 2 years of life, and are a major problem both globally and in developing countries [33].
Identifying and reducing avoidable socioeconomic inequalities and other determinants of malnutrition is a critical step toward improving children's overall health and well-being [14]. This study aimed to determine the pooled estimate of minimum acceptable diet intake, its socio-economic inequalities, and their contributing factors among children aged 6-23 months. The low magnitude (9.89%) of MAD intake in our study is in line with research conducted in India, which found 9% [34], but lower than the proportions of children aged 6-23 months able to access a minimum acceptable diet reported in a multi-site study conducted in America, Asia, and Africa (21%) [35], in South Asian countries [36], in Bangladesh (20%) [37], and in Indonesia (40%) [38]. The discrepancy might be due to geographical variation, population growth, and the socio-economic status of the countries [35]. Cultural beliefs and knowledge paradigms about MAD are also known to influence feeding practices [4,34]. Studies have shown that growth faltering among sub-Saharan African children becomes evident from early infancy and is sustained through the second year of life, the period with the highest reported prevalence of overall malnutrition [39]. However, our finding is higher than that of a study conducted in the Philippines, where 6.7% [40] of children aged 6-23 months could access a minimum acceptable diet. This may be because the current study included a large population from different sub-Saharan African regions with various cultures, beliefs, and traditions, which makes it a realistic estimate of the magnitude in SSA. In this study, significant variation in children's MAD usage among SSA countries was observed: Kenya (20.40%) had a significantly higher magnitude, whereas Guinea (3.10%) had the statistically significantly lowest magnitude of MAD usage among children. This is in line with studies in India [34], Indonesia [3], South Asia [36], and West African countries [41] that reported regional variation in MAD usage.
This might relate to differences in governmental action toward the implementation of national nutritional programs and in addressing cultural beliefs around complementary feeding [42]. For instance, the better magnitude of MAD intake in Kenya was achieved by implementing a health platform called the Baby-Friendly Community Initiative and by integrating WASH (Water, Sanitation and Hygiene) into complementary feeding sessions [42]. The availability and accessibility of foods in a region may also contribute: children in agrarian-dominant areas and city dwellers were more likely to have MAD [3,43,44]. For instance, Guinea is among the poorest countries in the world, ranking 179 of 187 countries, with 10% of the population food insecure [45]; its low magnitude of MAD intake might therefore be associated with this. Evidence also shows an ecological association between dietary diversity and child nutrition in SSA, due to ecology-specific crop production and livestock farming [39]. In this study, the concentration index and curve showed that MAD intake was disproportionately concentrated in rich (pro-rich) households [C = 0.191; 95% CI: 0.189, 0.193]. This is in line with studies conducted in India [46,47], South Asia [36], and Tanzania [48]. It is known that children from higher-income families can be fed diversified foods more frequently, as their families are more likely to be able to afford diversified foods, compared with children from low-income households [49]. In this study, the pro-rich inequalities in MAD intake were explained by maternal educational status, household media exposure, and living in a rural residence. The contribution of secondary and above maternal education toward explaining wealth-related inequality in MAD intake in this study was positive. This result is consistent with studies in India [46,47,50].
A study in New York also showed that the association between maternal education and child nutrition was positive in intermediate and high socioeconomic conditions [51]. The Global Nutrition Report 2020 also pointed out that the education gap contributes 7.7% of child nutrition inequalities [16]. This might be because children of educated mothers have health advantages due to their higher socioeconomic status [47]. Maternal schooling can help to foster the positive association between household wealth and child linear growth [52]. Household media usage also made a large contribution to explaining the pro-rich wealth-related inequality of MAD intake among children aged 6-23 months in SSA. This might be because media-using households are more likely to be the richest and consequently to feed MAD to their children. In this study, the contribution of rural residence to the pro-rich wealth-related inequality in MAD intake was negative. This is in line with studies conducted in India [47,53]. According to the Global Nutrition Report 2020, the location gap contributes 4.9% of child nutrition inequalities [16]. This is because the factors that determine nutritional status differ between urban and rural areas: nutrition in urban children is characterized by the circumstances of their residence, with greater dependence on cash income but lower reliance on agriculture and natural resources [54]. It is also supported by the multilevel result of this study, which showed that rural areas had a lower likelihood of MAD intake and that only 4.15% of children from rural areas belonged to the richest household wealth status, whereas two-fifths (40%) of urban children did, resulting in a negative contribution. The main strength of this study was the use of weighted, nationally representative data from each sub-Saharan African country with a large sample, which makes it representative at the sub-Saharan and regional levels.
Therefore, it has appropriate statistical power, and the estimates of minimum acceptable diet intake can be generalized to all children aged 6-23 months in the study setting during the study period. Furthermore, the concentration index and curve and the Wagstaff decomposition analysis are appropriate statistical models to show the direction and degree of socioeconomic inequality in MAD from the poorest to the richest households. Since the data were collected cross-sectionally at different points in time through self-reported interviews, they are prone to recall and social desirability bias. The drawbacks of the secondary nature of the data were inevitable, and the heterogeneity of the pooled estimate of MAD intake was not managed by further analysis.

--- Conclusion and recommendations

The proportion of minimum acceptable diet usage among children aged 6-23 months in sub-Saharan Africa was relatively low. Minimum acceptable diet intake was disproportionately concentrated in rich households (pro-rich concentration). Secondary and above maternal education, household media exposure, and rural residence were positive contributors, whereas breastfeeding was a negative contributor to the pro-rich socioeconomic inequalities in MAD intake. To increase minimum acceptable diet intake among children aged 6-23 months in sub-Saharan Africa, policymakers in nutritional projects and other stakeholders should work in an integrated approach with other sectors and give priority to modifiable socio-economic factors, such as promoting women's education and employment, increasing household wealth status and media exposure, and promoting breastfeeding behavior.
The governments of sub-Saharan African countries should plan and work in the short term through programs that endorse women's empowerment, such as income generation, cash assistance for mothers with children under 2 years, women's employment through affirmative action, and nutrition education such as media campaigns and the promotion of breastfeeding. Long-term plans are also needed for those SSA countries with lower income status, through programs to raise their economies to middle- and higher-income levels and to improve the wealth index of individual households. Interventions to improve MAD practice should not only target factors at the individual level but also be tailored to the community context. SSA, and especially the East African region, needs equity-focused interventions to curb the inequalities and low magnitude of MAD intake, not only by taking measures for economic equity but also by supporting marginalized groups such as uneducated women, households with no media usage, and rural residents.

--- Availability of data and materials

The data are publicly accessible from open databases at https://dhsprogram.com/data/dataset_admin/login_main.cfm?CFID=10818526&CFTOKEN=c131014a480fe56-4E0C6B7F-F551-E6B2-50. The authors have no financial, non-financial, or commercial competing interests.

--- Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1186/s40795-022-00521-y. Additional file 1: Ethical clearance from the International Review Board of Demographic and Health Surveys (DHS) program data archivists.

--- Authors' contributions

The conception of the work, design of the work, acquisition of data, analysis, and interpretation of data were done by DGB, AAT, and KAG.
Data curation, drafting the article, revising it critically for intellectual content, validation, and final approval of the version to be published were done by DGB, AAT, and KAG. All authors read and approved the final manuscript.

--- Declarations

Ethics approval and consent to participate

--- Consent for publication

Not applicable.

--- Competing interests

The authors declare that they have no competing interests.

--- Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | Background: Child undernutrition is a major public health problem in many resource-poor communities in the world. More than two-thirds of malnutrition-related child deaths are associated with inappropriate feeding practices during the first 2 years of life, and socioeconomic inequalities are among the most immediate determinants. Though sub-Saharan Africa (SSA) shares a huge burden of child undernutrition, to the best of our literature search there is limited evidence on the pooled magnitude and socioeconomic inequalities of minimum acceptable diet intake and its contributing factors among children aged 6 to 23 months in the region. This study aimed to assess the level of socio-economic inequalities in minimum acceptable diet intake and its contributing factors among children aged 6-23 months in SSA using recent 2010-2020 DHS data.
Methods: A total of 78,542 weighted samples from Demographic and Health Survey datasets of SSA countries were used for this study. The data were cleaned using MS Excel and extracted and analyzed using STATA V.16 software. The concentration index and curve and Wagstaff-type decomposition analysis were applied to examine wealth-related inequalities in the outcomes. A p-value < 0.05 was taken to declare statistical significance. The pooled magnitude of MAD intake among children aged 6-23 months in SSA was 9.89% [95% CI: 8.57, 11.21%], ranging from 3.10% in Guinea to 20.40% in Kenya. MAD intake in SSA was disproportionately concentrated in rich households (pro-rich) [C = 0.191; 95% CI: 0.189, 0.193]. Residence (36.17%), media exposure (23.93%), and women's education (11.63%) explained the pro-rich inequalities in MAD intake. The model explained 55.55% of the estimated socioeconomic inequality in MAD intake in SSA. Minimum acceptable diet intake in SSA is relatively low. There are moderate socioeconomic inequalities in MAD intake in SSA, mainly explained by residence, media exposure, and women's education. The government of sub-Saharan African countries should plan and work in short terms through the
Marianna Fotaki's analysis of the role of compassion in good quality healthcare is helpful and challenging. 1 It is helpful because she makes clear that promoting compassion on an individual level can never be a solution for a healthcare system that fails to be humane in the atmosphere it creates for individual caregivers and patients. Suggesting that the crisis in contemporary healthcare can be solved by blaming individual caregivers only increases the stress these people are already subjected to. Therefore, Fotaki proposes that one needs to look at both sides of the coin: the professional and the organisational. Her analysis is challenging because there are a number of causes that make it very hard to give compassion the place in healthcare it should have. In this commentary, I would like to reflect on Fotaki's contribution from a care-ethical perspective. Fotaki rightly refers to care ethics in her Editorial as a movement with feminist roots. Since Carol Gilligan's seminal work at the beginning of the 1980s, however, care ethics has developed into an interdisciplinary field of enquiry whose insights help in understanding the deeper causes of why we do not seem to manage to develop a more humane healthcare. Drawing on these insights, I would like to raise three issues that may help in understanding why changing our culture is so hard: the relation between the moral and the political; the role of Neoliberalism; and the absence of reflection on what care essentially is.

--- The Boundary Between the Moral and the Political

One of the central critical insights of care ethics, coined by Joan Tronto, is that the virtual boundary between the moral and the political in our culture has made it possible for unjust political systems to continue to exist next to highly moral individual practices.
2 This is precisely what happens when individual caregivers are urged to be more compassionate in order to hold up a healthcare system that in return is not compassionate to its workers and patients. Tronto's insight that the moral is political, and vice versa, means that we cannot consider compassion to be a feature of isolated individuals. We should look more deeply into why compassion is so hard to reinstall nowadays. When we think of the story of the Good Samaritan - the Western role model of compassion par excellence - and its still widespread use in contemporary culture, we are reminded of the fact that compassion was once one of the most important foundations of healthcare. Grit and Dolfsma, 3 e.g., analysed the different rationalities underlying the developments in healthcare during the last century in the Netherlands and list four discourses, each with its own logic, which shift from a central role for compassion to a central role for the market. According to their analysis, at the beginning of the 20th century healthcare was organised from institutions with a religious - mainly Christian - identity. Many of the religious people serving as caregivers in these institutions lived and worked in a world in which compassion was both an individual virtue, reflected in the public policy of their healthcare institution, and part of a meaning frame shared by professionals and patients alike. This unity of discourse, expressed in a continuity between the individual and the institutional, the moral and the political, changed when a new paradigm and discourse developed in the 1950s. Owing to the great advances of medical science, a medical discourse began to dominate healthcare in which an idea of professionalism was developed, replacing the central value of compassion. In the 1970s, a political discourse was introduced into healthcare in which accessibility of healthcare and participation of all citizens began to dominate.
In the 1980s, the Netherlands, like many other North Atlantic countries, was confronted with a new discourse: economics began to reign over healthcare, managers were introduced, and the market was seen as the best way to reduce costs. The role of compassion shifted from a central organizing value to a commodity used to enhance low-quality care. 4

--- Neoliberalism

In order to understand why it is so hard to change this situation, we have to dig somewhat deeper into the cultural climate change that set off in the 1980s and has had an enormous effect on every segment of society: Neoliberalism. As Wendy Brown has shown, Neoliberalism extends market values to social politics and to all the institutions that uphold our society, including healthcare. 5 The effect of this on our society can hardly be overestimated. Because it is so pervasive and omnipresent, it even influences the way we look at ourselves and the world around us. All aspects of life are subjected to an economic rationale, including the way individual subjects see themselves and organize their lives. In order to have a viable existence, citizens are forced to adopt entrepreneurial habits and be prepared to always be high performing. This creates calculating individuals, subjected to economic rationalism. The instrumental logic of Neoliberalism also transforms the way we look at care. 6 As, according to the laws of the market, all human capital must bear fruit, care is considered an activity by which human beings deploy their human capital. Taking care of oneself is seen as an individual responsibility, whereas taking care of someone else is regarded as an economic transaction. In a logic like this, human beings are not seen as the vulnerable, corporeal beings they basically are. Neoliberalism holds a reductionist view of mankind as composed of rational, self-supporting creatures that all strive for wealth and freedom. Compassion can only have a place in this logic if it is cut down to an instrumental size.
The roots of compassion as a premoral, unpredictable, and disruptive experience that opens up and connects human beings are to be avoided for their uncontrollable and irrational nature. 7 In the logic of Neoliberalism, compassion appears as a commodity, a trick to manipulate vulnerable patients at a deeper level in order to profit from them.

--- Understanding Care

One should not be romantic about restoring compassion in healthcare, nor does nostalgia bring us any further. Compassion cannot play the fundamental role it has played for centuries without the meaning frame that accompanied it in those days and the institutional and political structures that went with it. What can be done within our Neoliberal society, however, is to change the way we look at things by working on the concepts we use to organize our society. No society can do without healthcare. The more care is generally understood and agreed upon as a multidimensional human practice that intrinsically contributes to a more humane world, the less we need a concept such as compassion to provide good quality healthcare. How can this be realised? One of the most inspiring stories in 20th-century healthcare is the way Dame Cicely Saunders contributed to transforming the way we care for the dying. Denied and marginalised in a society traumatised by the Second World War and hypnotized by the promises of modern technology, care for dying people was often limited to physical support, if it was given at all. 8 By introducing the concept of 'total pain,' and by founding an institution - St Christopher's Hospice in London - that played a leading role in developing a new approach to terminal care, she helped to develop a new way of understanding what care for the dying should be like. Worldwide, palliative care is now seen as care for the whole person and his or her family: intrinsically multidimensional, including physical, psycho-social, and spiritual support, and thus essentially non-reductionist.
Although, of course, culture can never be changed by one single person, and the complexity of these changes involves a long and slow cultural process of patients and relatives learning to reorient their hopes and perspectives on living and dying, Saunders helped to influence policy-making up to the level of the World Health Organization (WHO) and changed the face of care for the dying. The lesson we can learn from Saunders is that healthcare can be changed, but only when our thinking about healthcare is changed as well. Saunders installed new practices of care - accompanied by research and education - that articulated a new way of looking at reality. And by changing the way we look at the dying person, it became impossible to accept any form of reductionism any longer. Palliative care is not only a specific practice of caring for people in a specific state, but also an approach, a philosophy, including an anthropology that sees patients as relational beings embedded in a family context and asking for support in all dimensions of human life. Just as Neoliberalism has entered our inner lives and deeply influences our perception of reality, other ways of looking at the people and the world around us may touch and motivate us to shape different practices. This asks for reflective spaces in healthcare in which daily reality is analysed and reflected upon in order to understand why healthcare itself can be so unhealthy. Most healthcare professionals are trained for many years to care for people without ever reflecting upon the question of what caring is and how it relates to a humane society. They are trained to perform actions without thinking about the systems their actions are embedded in and the degree to which these actions contribute to a society that threatens the dignity of many of its weakest members.
Good philosophical reflection on caring makes clear that this practice, in whatever context or form it is performed, is aimed at building a humane world in which people can live together in sustainable relational webs. That compassion plays a role in such a practice goes without saying. But it is neither the foundation of this practice nor the decisive element that makes the difference between good and bad quality care. The real foundation of caring is our readiness and willingness to deal with our vulnerable and mortal human condition in a humane way. The philosophy that helps to spell this out should be part of any healthcare curriculum.

--- Ethical issues

Not applicable.

--- Competing interests

The author declares that he has no competing interests.

--- Author's contribution

CL is the single author of the manuscript. | Although Marianna Fotaki's Editorial is helpful and challenging in looking at both the professional and institutional requirements for reinstalling compassion in order to aim for good quality healthcare, the causes that hinder this development remain unexamined. In this commentary, three causes are discussed: the boundary between the moral and the political; Neoliberalism; and the underdevelopment of reflection on the nature of care. A plea is made for more philosophical reflection on the nature of care and its implications for healthcare education.
Native American individuals have a higher 12-month prevalence of alcohol use disorder (AUD) relative to non-Hispanic white individuals (19.2% vs. 14.0%) and are twice as likely to meet criteria for a severe AUD (Grant et al., 2015). They also have higher rates of alcohol-related consequences and of alcohol-related morbidity and mortality, including alcohol-related motor vehicle accidents and suicides, relative to non-Hispanic whites (Center for Disease Control, 2013; Landen et al., 2014). However, it is also important to note the wide variability of alcohol consumption patterns within any ethnic minority group. While lifetime substance use is often lower among Native American groups relative to other adults (Beals et al., 2003; Spicer et al., 2003), among those Native Americans who do consume alcohol there tends to be increased frequency and severity of use (Grant et al., 2015). These variations in alcohol consumption and consequences may be associated in part with cultural drinking norms. In a landmark article, MacAndrew and Edgerton (1969) argued that culture influences how people behave during and after drinking alcohol. For example, within-group cultural differences have been found based on factors such as religious beliefs (Koenig et al., 2012) and religious commitment (Menagi et al., 2008). Individuals who identify with religions that promote abstinence generally report higher rates of abstinence; however, those who do drink alcohol have an increased risk of AUD (Luczak et al., 2014). Most Southwestern tribes promote abstinence and prohibit alcohol, such that alcohol is illegal to sell, buy, or consume on their reservation land (Kovas et al., 2008). Given the impact of cultural norms and proscriptions against drinking alcohol, examination of the cross-cultural applicability of the AUD criteria is warranted. Only one previous study has examined the construct validity of the AUD criteria in Native Americans.
Gilder and colleagues (2011) examined the validity of 10 lifetime AUD symptoms (all except legal problems) outlined in the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV-TR; American Psychiatric Association, 2000) in a Native American community sample that endorsed drinking more than 4 drinks at least once in their lifetime. They found support for a unidimensional construct in this sample, suggesting that the abuse and dependence symptoms represent a single diagnosis. "Social and interpersonal problems related to use" and "tolerance" were associated with lesser severity, whereas "physical and psychological problems related to use" and "activities given up to use" were associated with greater severity. In terms of ability to detect who meets criteria for an AUD and who does not, "social and interpersonal problems related to use" had the highest discrimination ability and "tolerance" had the lowest. Gilder and colleagues did not include the criterion of craving that was added in DSM-5. Thus, to our knowledge, no previous work has examined the construct validity of the full DSM-5 AUD criteria in a treatment-seeking sample of Native Americans. Most previous factor-analytic studies of the DSM criteria for AUD replicate the work of Gilder and colleagues (2011), demonstrating that these criteria represent a single continuous latent factor (see the review by Hasin et al., 2013). These findings have also been incorporated into the newest version of the DSM, the 5th edition (DSM-5; American Psychiatric Association, 2013), in which the alcohol abuse and dependence disorders were combined to reflect a single disorder. Yet many of the studies used to justify the transition from two disorders in DSM-IV-TR to one disorder in DSM-5 relied on data from predominantly non-Hispanic white samples that were not treatment-seeking.
Thus, we found it important to examine the unidimensional nature of the DSM-5 AUD criteria in a sample of treatment-seeking Native Americans. Our study builds on previous research in several important ways. First, we assessed the unidimensional nature of this construct using the DSM-5 rather than the DSM-IV-TR, as previous studies have done. Specifically, the current study included a measure of craving and tested the validity of this criterion in a diverse sample. Second, although Gilder and colleagues (2011) found support for a single construct in Native Americans, different clinical assessment measures were used. Gilder and colleagues (2011) used the Semi-Structured Assessment for the Genetics of Alcoholism (SSAGA; Bucholz et al., 1994) to assess AUD, whereas the current study assessed AUD using the Structured Clinical Interview for the DSM (First et al., 2002). Lastly, in contrast to Gilder and colleagues (2011), our sample was treatment-seeking, and it is currently unclear whether the DSM items are discriminative among those seeking treatment for, and meeting diagnostic criteria for, AUD as defined by the DSM-5. The current study used baseline assessment data from a randomized clinical trial examining the efficacy of a culturally adapted evidence-based substance use disorder treatment to evaluate the construct validity of the DSM-5 criteria for AUD in a sample of Native Americans seeking treatment for alcohol and drug concerns. Specifically, we sought to test the latent factor structure of the AUD diagnostic criteria and examine item characteristics of the criteria using item response theory (IRT). We hypothesized that a single continuous latent factor representing AUD severity would best fit the data in our sample. The IRT analyses were exploratory, and we did not have a priori hypotheses for the IRT models.
--- Methods --- Participants Participants (N = 79) were recruited from a community treatment center located on a Native American reservation in the southwestern United States. Inclusion criteria were 1) tribal membership, 2) residence within the reservation or immediately contiguous small settlements, 3) aged 18 or older, 4) seeking treatment for a substance use disorder, 5) meeting DSM-IV-TR criteria for substance abuse or dependence for at least one of the following: alcohol, amphetamine, cannabis, cocaine, or inhalants, and 6) willing and able to participate (in English) in the assessment and treatment procedures of the study. The majority of participants enrolled in the study were male (n = 54; 68.4%), with an average age of 32.91 years (SD = 10.134, range = 18-55). Most participants identified as being a member of the tribe (n = 78, 98.7%) and were currently living on the reservation (n = 78, 98.7%). Approximately 64.6% of the participants (n = 51) endorsed a tribal-specific religious preference, and 72.2% of the sample (n = 57) reported actively practicing this religious or spiritual preference. On average, participants had completed 11.48 years of education (SD = 0.89), and the largest percentage of participants endorsed being self-employed (43.0%; n = 34). Most participants had received previous treatment for a substance use disorder (69.6%; n = 55). --- Measures Addiction Severity Index (ASI; McLellan et al., 1990).-Demographic information was obtained using the ASI, a semi-structured interview designed to assess several domains in individuals presenting for substance use concerns. The ASI has been shown to have good reliability and validity (McLellan et al., 1985). Structured Clinical Interview for DSM-IV-TR (SCID; First et al., 2002).-Past-year alcohol abuse and dependence were assessed using the SCID alcohol use disorder module.
The SCID alcohol use disorder module is a semi-structured interview that assesses alcohol abuse and alcohol dependence corresponding to the DSM-IV-TR criteria (American Psychiatric Association, 2000). This measure has demonstrated good reliability, particularly when assessing alcohol abuse and dependence, with Kappa values ranging from 0.65 to 1.0 (Lobbestael et al., 2011; Zanarini et al., 2000). There was a change in the AUD criteria from DSM-IV-TR to DSM-5 (American Psychiatric Association, 2013), namely the removal of the legal consequences criterion and the addition of the craving criterion. Although the SCID for DSM-IV-TR was used in this study, a supplemental question addressing craving was also included: "(Did you have/have you had) a strong desire or urge to drink?" To assess the validity of the DSM-5 criteria, the legal consequences item was dropped from the analyses and the supplemental craving question was included, for a total of 11 criteria. In alignment with DSM-5, alcohol abuse and dependence disorders were combined into a single disorder. Mild (endorsing ≥ 2 criteria), moderate (endorsing ≥ 4 criteria), and severe (endorsing ≥ 6 criteria) sub-classifications also were used. --- Procedures The study was approved by the local university Institutional Review Board and the Tribal Council. Research assistants explained the nature and conditions of the study to all eligible participants, and participants signed a statement of informed consent. A federal Certificate of Confidentiality was also obtained from the National Institute on Alcohol Abuse and Alcoholism to protect participant information. As part of a larger randomized trial, participants completed baseline assessment measures before being randomized to a treatment condition.
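As a rough illustration, the severity sub-classification described above amounts to a simple thresholding rule on the number of endorsed criteria. The sketch below is not the study's analysis code, and the helper name is hypothetical:

```python
# Illustrative sketch (not the study's code): classify DSM-5 AUD severity from
# a count of endorsed criteria, using the sub-classification thresholds
# described above (mild: >= 2, moderate: >= 4, severe: >= 6 of 11 criteria).

def aud_severity(n_criteria: int) -> str:
    """Map a count of endorsed AUD criteria (0-11) to a severity label."""
    if not 0 <= n_criteria <= 11:
        raise ValueError("expected a criterion count between 0 and 11")
    if n_criteria >= 6:
        return "severe"
    if n_criteria >= 4:
        return "moderate"
    if n_criteria >= 2:
        return "mild"
    return "below diagnostic threshold"

# A participant endorsing 7 of the 11 criteria would be classified as severe.
print(aud_severity(7))  # severe
```

Because the thresholds are checked from most to least severe, each count maps to exactly one sub-classification.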
All participants received compensation for the completion of the baseline assessment measures. --- Data Analysis All analyses were performed with Mplus (version 7.3; Muthén & Muthén, 2012). Data were treated as binary indicators, such that each participant was coded as either having a criterion present or absent. The SCID (for DSM-IV-TR) is scored on a 3-point Likert scale, with a score of "one" indicating absent, "two" indicating subthreshold, and "three" indicating threshold. All absent and subthreshold indicators were coded as absent, and all threshold scores were coded as present. Recommendations for sample size in CFA are varied, but a critical sample size of at minimum five cases per parameter is needed (Kline, 2011). The sample size in the current study meets this minimum requirement, with approximately 7.18 cases per parameter. A single latent factor indicated by all 11 AUD criteria items was tested, with the latent factor mean set to 0 and variance set to 1 for model identification. The robust weighted least squares (WLSMV) estimation procedure was used to accommodate binary indicators (Li, 2016), and model fit was examined using the Comparative Fit Index (CFI; cutoff ≥ 0.90), Tucker-Lewis Index (TLI; cutoff ≥ 0.90), root-mean-square error of approximation (RMSEA; cutoff ≤ 0.08), and weighted root-mean-square residual (WRMR; cutoff ≤ 1.00; Hu & Bentler, 1999; Yu, 2002). There were missing data on three criteria (failure to fulfill major role obligations, hazardous use, and interpersonal problems) for 16 participants enrolled in the study. WLSMV utilizes pairwise deletion to handle missing data. We also estimated missing data using multiple imputation, and there were no substantive changes in the pattern of results. --- Results --- Item Descriptive Statistics Percent endorsement of each AUD criterion is presented in Table 1.
In this sample, the "repeated attempts to quit/control use" and "drinking more/longer than planned" criteria were endorsed by almost all individuals (98% and 96%, respectively). "Much time spent using" and "activities given up to use" were among the least frequently endorsed items (53% and 56%, respectively). Of the 79 participants, 98.73% (n = 78) endorsed at least two criteria. Of participants meeting the DSM-5 diagnostic threshold, 6.41% (n = 5) qualified for a mild AUD, 14.10% (n = 11) for a moderate AUD, and 79.49% (n = 62) for a severe AUD. --- Confirmatory Factor Analysis Confirmatory factor analysis was conducted using the eleven AUD criteria as indicators of a single latent construct. The model provided an adequate fit to the data [χ2(44) = 60.219, p = 0.0524; CFI = 0.954; TLI = 0.940; RMSEA = 0.068 (90% CI = 0.000-0.108), p = 0.236; WRMR = 0.908]. This provides evidence of the construct validity of the AUD diagnosis in this population and further suggests that the DSM-5 AUD criteria reflect a unidimensional construct. Standardized factor loadings for each AUD criterion are presented in Table 1. Results indicated that ten of the eleven criteria loaded strongly and significantly onto the latent factor, ranging from 0.522 ("tolerance") to 0.887 ("withdrawal"). The loading for "repeated efforts to quit/control use" was not significant (λ = 0.213, p = 0.125). --- Item Response Theory: Item Discrimination and Difficulty Given that the results from the CFA suggested that the AUD criteria reflect measurement of a single latent trait in this sample of treatment-seeking Native Americans, a two-parameter IRT model was used to further examine the relationship between each criterion and the latent trait. IRT analyses provide information on two main parameters: item discrimination and item difficulty. Discrimination scores are slope parameters.
Steeper slopes indicate that a criterion is better able to distinguish between individuals scoring low and high on the AUD latent trait continuum. Difficulty scores are location parameters that correspond to a 50% probability of endorsing a criterion. As the difficulty parameter increases, an individual needs a higher severity on the latent trait to endorse that criterion at least half of the time. Results from the IRT analyses are presented in Table 1, and item characteristic curves (ICCs) are presented in Figure 1. The largest discrimination scores in the sample were found for "withdrawal," "social/interpersonal problems related to use," and "activities given up to use." The criteria with the lowest discrimination scores were "repeated attempts to quit/control use," "tolerance," and "drinking more/longer than planned." The AUD criteria associated with the highest severity scores were "much time spent using" and "activities given up to use." The lowest severity scores in the sample were for "repeated attempts to quit/control use" and "drinking more/longer than planned." --- Discussion This study is the first to examine the validity of the DSM-5 AUD criteria in a sample of Native American participants seeking treatment for substance use problems. Confirmatory factor analysis results suggested that an 11-item, one-factor solution was a good fit to these data in this Native American treatment-seeking sample. These findings are in line with other research that has identified a one-factor model of the AUD construct across studies (Hasin et al., 2013) and in a Native American community sample (Gilder et al., 2011), albeit with 10 of the current 11 criteria. Furthermore, these findings provide cross-cultural support for the conceptualization of AUD as a single, continuous factor in a treatment-seeking sample of Native Americans. All criteria loaded significantly onto the latent construct except the "repeated attempts to quit/control use" criterion.
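The two-parameter logistic (2PL) model underlying these discrimination and difficulty parameters can be sketched numerically. The discrimination (a) and difficulty (b) values below are invented for illustration and are not the study's estimates:

```python
import math

def endorsement_probability(theta: float, a: float, b: float) -> float:
    """2PL model: probability of endorsing a criterion at latent severity theta,
    given discrimination a (slope) and difficulty b (location)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the endorsement probability is exactly 0.50, which is the sense
# in which difficulty marks the severity at which a criterion is endorsed half
# the time (illustrative values: a = 2.0, b = -1.0).
print(endorsement_probability(theta=-1.0, a=2.0, b=-1.0))  # 0.5

# A larger discrimination (steeper slope) separates individuals just below and
# just above the difficulty point more sharply.
sharp = endorsement_probability(0.0, a=2.0, b=-1.0) - endorsement_probability(-2.0, a=2.0, b=-1.0)
flat = endorsement_probability(0.0, a=0.5, b=-1.0) - endorsement_probability(-2.0, a=0.5, b=-1.0)
print(sharp > flat)  # True
```

On this reading, the uniformly negative difficulty coefficients reported below correspond to criteria that are likely to be endorsed even at below-average severity, consistent with a sample in which most participants met criteria for severe AUD.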
Other research in treatment-seeking samples has also found that this criterion did not load significantly onto a single latent factor (Murphy et al., 2014). Furthermore, this criterion was endorsed by almost all participants in the sample. It could be that this criterion is appropriate for use in discriminating between those with and without AUD but is less informative for treatment-seeking individuals. Indeed, Kessler and colleagues (2001) found that endorsing this criterion was associated with increased odds of seeking treatment for an AUD, and Preuss and colleagues (2014) suggested that the "repeated attempts to quit/control use" criterion should be considered to reflect mild AUD in their alternative classification system. Few studies have examined item difficulty and discrimination scores for DSM-5 AUD criteria in general, and none have examined these parameters in a diverse treatment-seeking sample. The current study suggested that the "withdrawal," "social/interpersonal problems related to use," and "activities given up to use" criteria had the highest discrimination scores, whereas the "repeated attempts to quit/control use," "tolerance," and "drinking more/longer than planned" criteria had the lowest discrimination scores. These results are comparable to those from IRT analyses in a non-treatment-seeking Native American sample using DSM-IV-TR criteria (Gilder et al., 2011) and in an international sample of individuals who were consuming alcohol using DSM-5 criteria (80.5% of whom met criteria for an AUD; Preuss et al., 2014). In those samples, "social and interpersonal problems related to use" (Gilder et al., 2011; Preuss et al., 2014) and "activities given up to use" (Preuss et al., 2014) had the highest discrimination scores. Similarly, the "tolerance" criterion had the lowest discrimination score (Gilder et al., 2011; Preuss et al., 2014).
In this Native American sample, less severe AUD was represented by "desire to quit/cut down," "drinking more/longer than planned," and "tolerance." These results directly reflect findings from other studies in which these items were also among the easiest to endorse (Gilder et al., 2011; Preuss et al., 2014). More severe AUD, in this Native American sample, was represented by endorsement of the "much time spent using" and "activities given up to use" criteria, which coincides with the results of Preuss and colleagues (2014). In their Native American community sample, Gilder et al. (2011) found that "withdrawal" and "activities given up to use" were associated with greater severity. All of the difficulty parameters in the current Native American sample had negative coefficients, which may reflect the fact that all participants in this sample were treatment-seeking and most had severe AUD, compared to other studies using non-treatment-seeking samples or surveys of individuals currently consuming alcohol (Gilder et al., 2011; Hagman, 2017; Preuss et al., 2014). The results from the IRT analyses suggest potentially useful treatment implications for AUD in this Native American sample. Specifically, many of the items that were most informative with respect to discrimination and severity reflected a narrowing of activities and interpersonal problems related to use. Given these findings, a treatment such as the Community Reinforcement Approach (CRA; Meyers & Smith, 1995) may be a useful intervention. CRA targets relationship happiness and focuses on increasing pleasant, reinforcing activities rather than just on stopping alcohol use.
Results from an evaluation study and a pilot study suggest the efficacy of CRA in Native American samples (Miller, Meyers, & Hiller-Sturmhöfler, 1999; Venner et al., 2016), and the results from the current study suggest a CRA framework for treatment may also be beneficial and address particularly salient consequences of problematic alcohol use in this sample. --- Limitations and Future Directions The current study had several limitations. First, the sample size was small and included Native Americans from only one tribe in the southwestern United States. Future studies should replicate these findings using larger samples and examine whether these results generalize to other Native American and Indigenous groups. Second, these data relied on self-reported recall of alcohol use and alcohol-related problems over the past year, which may be subject to recall bias. Third, this study used an adapted version of the SCID for DSM-IV-TR to assess DSM-5 criteria. Future studies should continue to assess the validity of the SCID for DSM-5 in diverse groups. However, in the current study, the wording used in the different versions of the SCID was quite similar, the craving criterion was assessed using almost identical prompts, and a comparable procedure was used in previous studies to help advise and test proposed changes made in the diagnostic criteria for AUD from DSM-IV-TR to DSM-5 (Borges et al., 2011). --- Conclusions The current study provides preliminary support for the validity of the DSM-5 AUD diagnostic criteria as a single continuum in a sample of Native Americans seeking treatment for substance use concerns. Additionally, this is the first study to examine the DSM-5 AUD criteria using IRT analyses in a diverse sample of treatment-seeking individuals.
These findings suggest that "social and interpersonal problems related to use" and "activities given up to use" may be more informative criteria for assessing AUD severity in treatment-seeking Native American samples, whereas "repeated attempts to quit/control use" and "drinking more/longer than planned" may be less informative. Future research with other Native American and Indigenous populations will shed light on the cross-cultural applicability of the DSM-5 AUD diagnostic criteria and may highlight important cultural considerations in the conceptualization, measurement, and treatment of AUD. --- Abstract Objective: Despite high rates of alcohol use disorder (AUD) and alcohol-induced deaths among Native Americans, there has been limited study of the construct validity of the AUD diagnostic criteria. The purpose of the current study was to examine the validity of the DSM-5 AUD criteria in a treatment-seeking group of Native Americans. Method: As part of a larger study, 79 Native Americans concerned about their alcohol or drug use were recruited from a substance use disorder treatment agency located on a reservation in the southwestern United States. Participants were administered the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR; SCID-IV-TR), reworded to assess eleven DSM-5 criteria for AUD. Confirmatory factor analysis (CFA) was used to test the validity of the AUD diagnostic criteria, and item response theory (IRT) was used to examine the item characteristics of the AUD diagnosis in this Native American sample. Results: CFA indicated that a one-factor model of the eleven items provided a good fit to the data. IRT parameter estimates suggested that "withdrawal," "social/interpersonal problems," and "activities given up to use" had the highest discrimination parameters.
"Much time spent using" and "activities given up to use" had the highest severity parameters. Conclusions: The current study provided support for the validity of the DSM-5 AUD criteria and a unidimensional latent construct of AUD in this sample of treatment-seeking Native Americans. The IRT analyses replicate findings from previous studies. To our knowledge, this is the first study to examine the validity of the DSM-5 AUD criteria in a treatment-seeking sample of Native Americans. Continued research in other Native American samples is needed.
providers. Long ignored, the suburban poor have recently attracted the attention of studies examining the intersection of poverty, healthcare, social services, and drug use in the suburbs (Boeri, 2013; Lawinski, 2010; Tsemberis & Stefancic, 2007). Methamphetamine (MA) is a stimulant that provides energy, produces a feeling of pleasure, and decreases appetite, which makes it an extremely desirable drug regardless of its illegal status (Lende, Sterk & Elifson, 2007). Yet there are tremendous problems associated with MA use, including stroke, cardiac arrhythmia, stomach cramps, and muscle tremor, as well as anxiety, insomnia, aggression, paranoia, and hallucinations (Barr, 2006; Shaner et al., 2006). MA use is associated with higher risks for infectious diseases; MA withdrawal can produce depression and suicidal inclination; and the MA-using social context is highly associated with injury and violence, for women in particular (Boeri, Harbry & Gibson, 2009b; Compton et al., 2005; Bourgois, Prince & Moss, 2004; Sheridan et al., 2006). In this paper we examine MA-using women living in suburban enclaves of poverty. The aim of this study is to identify how this population accesses basic resources and needed services. We use social capital to guide our analysis and direct our attention to the social determinants associated with drug use that impact social well-being and health. --- THEORETICAL FRAMEWORK: SOCIAL CAPITAL Social capital refers to social resources that are available to individuals through their social networks (Bourdieu, 1984; Coleman, 1990; Portes, 1998). Social capital has emerged in the literature as a valuable concept for understanding the inequality of status achievement based on social ties and access to resources (Lockhart, 2005; Putnam, 2000; Schuller, 2007).
In addition, the concept has been used to examine the unequal distribution of social resources within communities and across social networks that functions as a barrier to obtaining desired goals (Bourdieu, 1984; Coleman, 1990; Lin, 2001; Wuthnow, 2002). Social networks are proposed to be the main source of social capital likely to benefit individuals as adults. For example, as individuals become connected within their community, social relations within this community provide resources such as access to employment opportunities or social and health services (Lin, Ensel, & Vaughn, 1981). The relationship between the individual, community resources, and resources available from other social networks is part of what we call social capital. Social capital that results from relationships between individuals in the same community or network is called "bonding" social capital, whereas "bridging" social capital results from relationships across social divisions such as race and class (Lockhart, 2005; Schuller, 2007). A robust association between social capital and health was found in an extensive literature review of international studies on the links between increased levels of social capital and better health, an association that was particularly strong in the United States (Islam et al., 2006). The literature suggests that the social capital concept also can be used to better understand drug use dynamics (Granfield & Cloud, 2001; Laudet & White, 2008). The concept of recovery capital, based on the correlation between social capital and recovery from addiction, is used to predict cessation of drug use and sustained recovery (Granfield & Cloud, 2001; Laudet, 2008). Social capital is sometimes referred to as the nature and extent of a person's involvement in informal and formal networks (Grootaert, Narayan, Jones, & Woolcock, 2004). Previous research suggests that these networks operate in conjunction to meet the person's needs.
Informal networks are comprised of family, friends, and neighbors, while formal networks include community organizations such as schools, social service agencies, and healthcare systems (Shobe, 2009; Beggs, Haines & Hurlbert, 1996). According to Beggs, Haines and Hurlbert, "the receipt of informal support affects the receipt of formal support" (1996, p. 11). A higher level of involvement in formal and informal social activities may lead to fewer negative health behaviors such as substance use (Finch & Vega, 2003). Networks have functional and instrumental components (Vaux, 1988). Networks including relatives, kin, and friends can provide instrumental support, such as transportation, small loans, or places to stay (Briggs, 1998). Drug-using networks, though, can result in negative social capital (Wacquant, 1998). Physical and emotional risk-taking, stigmatization, and self-defeating behaviors are associated with resource allocation from drug-using networks (Rose, 1998). In addition, drug users and their social networks may utilize sub-optimal medical services, so although services may be utilized, the quality and effectiveness of use is variable (Boeri et al., 2011). Prior research suggests that financial resources are associated with health-related behaviors (DiMatteo, 2004). Quality of life measures include individuals' perceptions of health; physical, psychological, and social functioning and well-being; and position in life and expectations in the context of the culture in which they live (Laudet, 2011). The fact that people who are not well off have shorter life expectancies and more illnesses than the rich reveals that differences in health are not only a social injustice but also highlight the social determinants of health (Wilkinson & Marmot, 2003). As long as lack of education, low job skills, lack of sustainable employment, and restrictions on geographic mobility stay in place, the social mobility prospects of the poor will continue to be dim (Luck, 2004).
Difficulty arises when tearing away from the subculture community in pursuit of upward mobility. For example, facing the loss of subculture support was found to be a barrier for sex workers creating new lives for themselves, leaving them more isolated from healthy forms of support (Trulsson, 2004). Women drug users, more than men, suffer from the impact of social capital loss and negative social capital (Anderson & Levy, 2003; Bourgois, Prince, & Moss, 2004; Sterk, 1999; Wacquant, 1998). Women using drugs face double stigmatization by a society that accuses them of violating gender role expectations, especially if they are mothers (Boyd, 1999; Campbell, 2000; Dunlap & Johnson, 1996; Ettore, 1992; Sterk, 2000). Older female drug users also face narrowing social options and are more marginalized in society than their younger peers (Boeri, 2013; Anderson & Levy, 2003; Rosenbaum, 1981; Sterk, 1999). Moreover, poor substance-abusing women are found to have scarce resources within their own networks (Mulia, 2008). Marginalized female users can experience a loss of social access, which is needed to pursue new social contexts (Anderson, 1993). Ultimately, when MA-using women do not receive access to needed resources, these experiences also act as barriers to their social well-being and mobility. While studies have examined the association between social capital and well-being among disadvantaged drug-using populations (Knowlton, 2005), there is scant research on social capital dynamics among disenfranchised drug-user populations living in the suburbs. The aim of this paper is to examine how low-income MA-using females in the suburbs access needed resources. We examine need areas including housing, legal assistance, education, employment, medical care, dental care, and drug treatment. We investigate the processes involved, and specifically how social capital resources are employed, using the women's subjective accounts verified by our own ethnographic fieldwork.
This study advances our understanding of the social and contextual impact on social capital attainment and how this affects access to resources among this group of marginalized female drug users. --- METHODS Between 2009 and 2011, thirty active and former female methamphetamine users participated in this study, drawn from the suburban counties around a large metropolitan area in the southeastern USA. Participants were recruited using a combination of targeted, snowball, and theoretical sampling methods (Glaser & Strauss, 1967; Strauss & Corbin, 1998; Watters & Biernacki, 1989). The majority of the 30 participants were recruited through targeted ethnographic fieldwork. Some referred their friends to call our study number, resulting in 11 additional participants. Based on our developing theory of recovery trajectories, we also recruited three former users with recovery experiences. Active users were defined as having used methamphetamine at least one time in the past month. Former users were defined as having used the drug for at least six consecutive months in the past but having been drug-free for the last month. To be eligible, participants had to have been residing in the suburbs of the city at the time of use and be 18 years or older at the time of the interview. For this study they also had to be female. A consent form was read and agreed to before data were collected. In order to protect the anonymity of the participants, the signed consent form we collected was not linked to the study data. Only the researchers on the study knew participants' identities and contact information. Participants were reimbursed for their time and given the choice of cash or a gift certificate. Reimbursement of participants has been shown to be ethical and useful in conducting research on hidden and stigmatized behaviors (Wiebel, 1990). The researchers' university Institutional Review Board approved the study methods and design.
A screening process was used to ensure that participants met the eligibility criteria. Screening consisted of questions about age, drug use in the past 30 days, use of methamphetamine in the past six months, and the county where the potential participant resided. Interviews were conducted in a safe location agreed upon by the interviewer and participant; these included the interviewer's car, the participant's home, motel rooms, private university rooms, and library rooms. Participants were offered food during the interview, such as pizza and soda or snacks. The research team for this study included two female co-investigators, who conducted the first interviews and focus group interviews, and two female research assistants, who helped with the focus groups, a few of the follow-up interviews, finding resources the women needed, data management, and analysis. All research team members completed the NIH web-based course on Human Participant Protections Education for Research Teams. Using ethnographic methods, we conducted fieldwork that involved finding field sites, distributing fliers, and talking to anyone interested. During the day we walked the streets of suburban towns or drove through subdivisions and trailer parks located in the suburbs. In the evening and at night, we frequented bars, clubs, and 24-hour diners. We often employed a community consultant who was familiar with the drug-using networks and could introduce us to insider settings. The final sample consisted of 31 women, but only 30 were used in this data analysis, which was conducted before the last participant was interviewed. Among the 30 women in the sample, 26 were white, 2 Latina, and 1 African American; one woman reported being American Indian. The youngest woman was 19 and the oldest was 51 years old. A little over half (17) were active users of MA. All active users were low-income, and the majority were unemployed, under-employed, or employed in illegal work.
The majority of former users were unemployed or supported by relatives. We collected data at three points in time from the same participants: (1) a first face-to-face interview; (2) a follow-up in-depth interview conducted face-to-face or by phone; and (3) a focus group interview. Participants chose to join a focus group or conduct the second or third follow-up interview alone. Among the 30 women in the study used for this analysis, 5 were interviewed once; 9 were interviewed twice, with an average of 5 months between interviews; and 16 were interviewed three times, with an average of 7 months between the first and last interview. This represents less than a 17% attrition rate at the second interview point, which is typical in longitudinal studies of hidden and hard-to-reach populations (Corsi, Van Hunnik, Kwiatkowski & Booth, 2006). We had a higher attrition rate by the third interview, which was largely impacted by the number of women who became homeless and left the suburban communities in search of shelter. Sixteen women participated in one of the six focus groups. We used a longitudinal design in this study in order to examine changes over time. The first interview incorporated four data collection instruments: a life history matrix, a drug history matrix, a short risk behavior inventory, and a semi-structured, audio-recorded, in-depth interview. The life history matrix, completed in pencil by the interviewer, is a research tool designed to focus the participants on retrospective life events during the in-depth interview. Conducted at the start of the study, this matrix data collection allowed the interviewers to develop rapport and established an additional validating strategy (Bruckner & Mayer, 1998). The interviewer then collected data on a drug history matrix in pencil. The drug history included information on first use of each drug, past-six-months use, past-30-day use, and routes of administration.
The risk behavior inventory asked about drug and sexual risk behaviors such as syringe and condom use. In addition, we provided healthcare information on the health risks of drug use and a list of social service resources in the area. The first interviews lasted about two hours. Participants were reimbursed $30 for their time. For the follow-up interviews, we updated the drug use matrix and risk behavior inventory and conducted a short qualitative interview, specifically to see how they used the resource list we provided. An updated healthcare, social services and drug treatment resource list developed by the research assistants was given to each participant, targeted to her specific needs. Follow-up interviews typically took about one-half hour. Participants were given $40 in cash or a gift certificate for their time. The increased amount was to encourage them to complete the follow-up interview. Immediately after the focus group was completed, the participants met individually with the researchers or research assistants to update the matrices and were asked privately about their access to and utilization of service resources. The women were free to discuss topics of interest to them as well. Since the data used for this paper are derived primarily from the focus group interviews, we provide more details on this data collection. The focus groups consisted of two, three or four women who typically did not know each other. We used each participant's study number or a pseudonym, if she desired. We started with introductions and then conducted an ice-breaker exercise aimed at exploring awareness of, access to, and utilization of social services and healthcare providers. The exercise consisted of placing a number of cards on the table with the names of health and social services taken from the list we provided to the women in the first interview. Women were given colored cards to place on the resource. 
Each card coincided with a response as to whether the resource was needed, used, or not used. Immediately after the game we discussed why certain cards were put on each resource. For example, if a resource was needed but not used, we asked why not. If a resource was needed and used, we asked about the experience of accessing the resource. We found the card game very effective; it provided more than merely an ice-breaker. In fact, the women became so engrossed in identifying the right card that much discussion occurred during the game itself. After the first focus group we included a different colored card stating, "I would like to talk more about this service," to ensure more focused discussion. During the focus group we employed a semi-structured interview guide that served merely as a framework on which to maximize group discussion and interaction. Main questions included areas on recent health and HIV-related awareness, prevention experiences, needs assessment, use of the resource list, and experiences in gaining access to needed healthcare and desired treatment. Resources and approaches that were found not to be effective were discussed, and suggestions for better strategies were explored. Accessibility to public healthcare services and employment opportunities emerged as the major problems. Refreshments were available either before or during the activity at a time when a break was needed. The entire interview was recorded and the qualitative parts were transcribed. Participants were given $40 in cash or a gift certificate for their time. All of the women stated that the focus group helped them, and they hoped their discussion could help others who had problems similar to their own. We conducted six focus groups with sixteen women. A woman could participate in only one focus group, which could be held on the same day as either the second or third interview. 
The reasons why some women did not participate in a focus group included (in order of importance): they had moved too far away from the research study area; they had difficulty finding a two-hour slot of free time; or they preferred to conduct their follow-up interview alone. In two cases, the women were incarcerated. In five cases, our team lost contact with women who were homeless at the time of their first interview or lost their homes during the course of the study. Ethnography is a living and dynamic form of research. An unexpected aspect of our ethnographic study was the changing involvement of the researchers. Similar to "engaged ethnography" (Scheper-Hughes, 2004), while conducting our ethnographic fieldwork we recognized our privileged positions and did not ignore the cognitive dissonance we felt due to the knowledge that we had access to what our participants needed. Fortunately, engaged action was fitting with our study goal, which was to better understand the availability and accessibility of healthcare and social services in the suburbs, as well as barriers to these services. Instead of remaining distant observers of the social action we were investigating, we became engaged ethnographers by applying what we found to be beneficial for the women. Our engaged ethnography led to applied ethnography, meaning that not only did we think reflexively about what we were doing while conducting research but we also applied the "tricks of the trade" we learned while being engaged with our research participants (Becker, 1998). By applying our knowledge and better resources to help the women gain access to services they needed, we discovered further barriers we would have missed had we relied only on the women's limited resources. One example illustrates this point poignantly. 
When we learned that an initial barrier to services was lacking a phone, and therefore not having a number to leave when the ubiquitous voice mail message asked for a number to return the call, we called the service while we were with our participants and left our own study phone number. Through this engagement in the research, we subsequently learned that healthcare and social service staff typically failed to respond to our messages (thus confirming what the women who had phones told us). We continued to apply the resources we had available. In another example, when no one returned our phone call at a women's shelter, we called a professional friend who we knew supported the shelter financially and within an hour received a phone call from the shelter director. Her intervention eventually led to a bed in a shelter. However, when our study participant arrived at the shelter with her children, she learned that the house was located in a drug-dealing neighborhood with crack dealers on every corner. Reluctant to expose her children to crack dealers, she did not go to the shelter and instead accepted temporary shelter from a male friend, but not without further consequences. This obstacle of having to rely on "friends in low places" would never have been revealed nor reported had we not become engaged with overcoming initial obstacles to needed services (i.e., lack of a phone) and further applied our resources to ensure a bed in the shelter. The socio-economic-geographic barriers and the women's viewpoints on shelters located in dangerous areas were very important findings in this study. Although we did not always succeed in helping the women (helping them was not itself a goal of the research), we did achieve our research goal of understanding the complexity of challenges the women faced when trying to access needed healthcare and social services. Interviewers wrote notes on their reflections of the interviews within 48 hours. 
As the data were collected, we compared the responses across the data to gain a clearer understanding of the phenomenon and to inform the continuing data collection and analysis. The in-depth interviews were transcribed word for word. Data analysis began with the first few interviews using the constant comparison analysis common in grounded theory (Charmaz, 2005; Strauss & Corbin, 1998). The qualitative data analysis program QSR NVivo was used for data and coding management. The research team conducted the initial coding, and research assistants trained in qualitative methods helped with second and third coding. For this paper, the first coding was conducted by the first author and reaffirmed by the co-investigators. The codes focused on what the women discussed regarding barriers to needed resources and, for those who were successful, how they accessed resources. --- VALIDITY AND RELIABILITY The interviewer notes, life histories and drug histories, in-depth interviews and follow-up interviews were used to triangulate analysis of the data using the iterative model (Boeri, 2007; Nichter et al., 2004). The iterative model of triangulating data throughout the study, by comparing information collected from various sources and addressing issues of validity and reliability as the study progresses, has been shown to provide greater confidence in understanding complex information (Pach & Gorman, 2002; Rhodes & Moore, 2001). Although research shows that drug users tend to report valid information in qualitative interviews (Rosenbaum, 1981; Weatherby et al., 1994), the addition of quantitative data collected in the drug history matrix and risk behavior inventory was used as a reliability and validity check for the qualitative data (Deren et al., 2003). Any inconsistencies found were further explored through an iterative process in follow-up interviews, field observations, and focus groups. 
--- DATA ANALYSIS Based on the analysis of the interview transcripts, field and interviewer notes, and focus group transcripts, 34 initial codes were created to identify needed resources, the most problematic services, and the barriers to accessing services. Codes were associated with restrictions initiated by the services and limitations on the part of the participants, which were verified by the researchers during efforts to link participants with the services. The services were categorized by resource type: housing; medical; dental; transportation; education; legal aid; employment; and treatment. The most problematic services needed, but not accessible, were treatment, medical, dental, and housing, which is consistent with other research (Lawinski, 2010; Thompson, 1998). Among the most marginalized suburban women, problematic needs also included education, employment, legal aid, and transportation services. Barrier codes included lack of transportation, service fees, waiting lists, lack of communication, disqualifying criminal histories, service use caps, identification (ID) requirements, and fear of agency intervention with children. During the follow-up interviews and focus groups we examined how many women accessed the services they needed. Participants who did not receive the needed service in a specific category were coded "Unmet Need." If a participant accessed at least one resource in a specified need area, she was classified as having a "Need Met." Discussion of these coded areas revealed that social networks provided or facilitated the majority of links to needed resources. This was a turning point in the analysis. Whereas previously we focused on formal or informal processes to provide resources, we discovered that some of the women accessed resources indirectly through family, friends, and extended social networks. 
Two network types, formal and informal, were first identified and coded based on whether resources came directly through a formal source, the service, or through an informal source, such as their social network. A third new category captured the process of accessing resources when mediated indirectly by a formal and informal network. The three processes employed by the women attempting to obtain needed resources included: (1) FORMAL-directly from social services; (2) INFORMAL-directly from family or close social networks; (3) MEDIATED-indirectly, involving help from extended social networks or other contacts, including researchers. We then assessed whether needs were met or not met through these processes and examined the results. The findings are presented as positive or negative results for each of the three processes. Quotes are chosen that best represent the essence of what more than one woman expressed or experienced in her attempts to access needed resources. --- FINDINGS Whereas low-income women typically access needed resources through formal or informal processes, our findings show that suburban female MA users often do not have their needs met through either formal service providers or informal networks. Instead, many needs are met through a mediated process between their informal social networks and formal social service providers and staff. The women mentioned barriers and risks for each type of process employed, and we found both beneficial and negative results for each, whether or not they obtained needed resources. As is common of qualitatively defined categories, the boundaries of our positive and negative codes are porous. As we will point out, although one result might appear positive, since the goal of gaining a resource was achieved, it may have been accompanied by immediate or potential risks to the women. 
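The coding scheme above, three access processes (FORMAL, INFORMAL, MEDIATED) crossed with the two outcome codes ("Need Met" / "Unmet Need"), can be sketched as a simple tally. This is an illustrative reconstruction only, not the authors' actual NVivo coding, and the example records are hypothetical:

```python
# Illustrative sketch of the three-way access-process coding described above.
# Not the authors' actual coding procedure; example records are hypothetical.
from dataclasses import dataclass

PROCESSES = ("FORMAL", "INFORMAL", "MEDIATED")

@dataclass
class AccessAttempt:
    resource: str   # resource category, e.g. "housing", "dental", "treatment"
    process: str    # one of PROCESSES
    obtained: bool  # whether the needed resource was actually obtained

def classify(attempts):
    """Tally Need Met / Unmet Need outcomes for each access process."""
    tally = {p: {"Need Met": 0, "Unmet Need": 0} for p in PROCESSES}
    for a in attempts:
        outcome = "Need Met" if a.obtained else "Unmet Need"
        tally[a.process][outcome] += 1
    return tally

# Hypothetical example records:
attempts = [
    AccessAttempt("housing", "INFORMAL", True),
    AccessAttempt("dental", "MEDIATED", True),
    AccessAttempt("treatment", "FORMAL", False),
]
print(classify(attempts))
```

As the text notes, the authors deliberately did not quantify these categories because the processes overlap; the tally above only illustrates the category structure, not a statistical claim.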
For example, a woman might succeed in making a doctor's appointment at the cost of paying her neighbor gas money to drive her there, which further depleted her precarious financial situation, or worse, indebted her to the neighbor, who was typically a man. We placed the women's reported experiences in the context of their social environments and social capital potential. For example, when a woman obtained her needed resource although there was a potential risk, it was coded as positive unless a negative incident resulting from this process occurred during the period of the study. Due to the overlap in how women used their resources to obtain needed services, and because it is not always possible to disentangle one method of obtaining resources from another (formal, informal, mediated), we do not provide a quantitative assessment of the way women accessed services. Instead we provide an in-depth qualitative assessment using the women's stories as data. --- FORMAL POSITIVE-Positive examples of formal processes of obtaining help, while assumed to be the most straightforward since this is what social services were designed to do, were actually the most difficult to document in this study. In every instance where we heard that a participant had obtained help directly through a health provider or social service on our resource list or elsewhere, further inquiry revealed that either informal help or a mediator was used in the process. In one case, a woman had been trying to obtain Medicaid to treat a health issue during her entire participation in the study (21 months). We learned that she was finally successful three years later, which was more than a year after the study ended, so we were unable to document the details. NEGATIVE-Many of the women encountered barriers when attempting to access services directly. 
The most common barriers to accessing social services were waiting lists, service use caps, criminal histories, fees, ID restrictions, lack of transportation, lack of a communication device, and fear. Some women were referred to other services and given wrong numbers, got only voice mail messages, or were put on service waiting lists. As one participant stated: They're either not taking new patients, closed, there's so many criteria's that you have to meet if you can even get on their list. And if you get on the list, the waiting time is like six months to a year. By that time god knows where you are. You wanna give up; you wanna say to hell with it. Some of the service providers appeared not to acknowledge or care that needed services were not provided, although a few seemed to have empathy. For example, when we attempted to access a shelter bed for one of our participants in a suburban county, the social service worker told us, "The people in that county are screwed," referring to the homeless. Fear of legal intervention and loss of child custody were real barriers for women with young children, and some stories we confirmed showed that accessing services directly resulted in a negative experience for these women. For example, mothers are required to undergo drug tests before accessing some services, and this resulted in unwanted intervention by public child protection services. Researchers observed no danger to child welfare, but some women were frustrated with the public agency intrusion into their lives. They viewed the required oversight as a loss of privacy rights and felt that family service agencies generally did more harm than good in their experiences. Emotional barriers such as shame, guilt, feelings of hopelessness, and learned cultures of racism also deserved closer scrutiny. A few women in predominantly white social networks feared using a shelter because there were "too many blacks" there. 
For them, the risk of violence at the hands of a male they knew seemed preferable to the risk posed by unknown people they had been taught as children to fear. --- INFORMAL POSITIVE-The women quite often turned to male assistance for their needs. Relying on male relationships tended to leave these marginalized women even more at risk. Yet sometimes, their trust in a male in their network produced positive results. We were a little apprehensive when one of our first participants, who was homeless when we met her, started a romantic relationship with a man she met at a support group meeting. In a short time, the unmet needs we had identified in our interviews were met through the help of her new male partner. He gave her a car to drive to appointments, a phone to call out and receive calls back from the services on our resource list, and money for dental work, including full dentures. "If it weren't for him, I'd be walking around with no teeth," she told us. As more of her immediate needs were met, she was able to contact the legal resource contact we gave her to start the divorce paperwork that she had not been able to complete for years. With her boyfriend's help, this participant obtained more needed resources than any other woman in the study. Yet her complete reliance on one person left her at risk for exploitation, abuse or abandonment. Another major inhibitor was transportation. "I didn't have a ride" was a recurring reason not to make or keep an appointment at a social or healthcare service. Unless they lived on a bus line, which most did not, the women lived too far from the service to walk and needed help from someone with a car. The only hope for most of the women, who were without cars, was someone in their network who had a car. As one woman who was in treatment said: That's my main thing to get a job and get a car. I need a car. I'm just trying to get me a car because I feel so bad. My sister's taking me to meetings. 
My sister's taking me to doctor's appointments. Likewise, another participant described how a close-knit network of female users obtains food from the store or a food pantry. "We help each other; if one gets a ride, they get food for the rest of us too." NEGATIVE-Having help from someone with a car was necessary for many women to access needed resources, but what was particularly problematic for these women was when it came from potentially harmful situations, including exploitative friendships, abusive male sex partners, or broken family relations. In these cases, the woman was often in danger of being harmed or exploited, which was a chance she usually took. There were no direct harmful consequences during our study, other than a few who said that someone they had relied on to go to an appointment never showed up or came hours late. Yet we heard many stories of past negative experiences when relying on close relations, especially abusive partners. Of all the areas of need, participants were most likely to obtain housing within their network. This put them at risk for domestic violence, instability, and increased drug use. For example, one woman explained, "The power was turned off in our house, so we moved in with our drug dealer's boyfriend. We lived over there, and that's when all the shooting up [injection drug use] really began." A young homeless female who occasionally stayed with her mom provided some more insight during her interview: "She [mom] tried suicide, but it was a cry for attention. And me and her are like Thelma and Louise. We both are party animals, you know. It is better to stay away from my greatest demon." [Interviewer] "Do you have anyone who can help you?" "No, because you're a misfit. You're no good. When you got friends in low places, you stay low." Referring to herself, her mother and her friends, this woman's response was consistent with the emotional climate we found among other poor MA-using suburban females. 
They are often embedded in social networks that have very little to offer, and their communities are isolated in poor areas away from social services, forcing them to rely further on other MA-using relationships. We observed repeated exploitation of our participants. For example, one woman's former landlord was "helping" her by allowing her to stay in a trailer with broken windows, no working utilities, and dog feces inside. Researchers visited the site and found it to be completely uninhabitable. The same woman "lost" her only form of identification, a driver's license, which she was keeping at a former employer's home for safekeeping. She did not report this person because she thought she might work for her again in the future, which she did. A few participants told us about a landlord who overworked and underpaid them to clean and manage the trailer park in exchange for free rent in dwellings that were not fit for human habitation. One dwelling was a trailer in the hot sun with no shade and no air conditioning. The temperature gauge inside was reported to read up in the 90s during the summer. One of the females explained what happened if they slowed down in their work: He was angry for us not going to be a slave some more--cheap labor. He gets everybody to do it, you know. Gets that cheap labor; does it to this house--that house, everybody. We're going to have to stay there because we don't have no place right now. Many of the women in one extremely poor and disenfranchised social network remained in contact with us through the cell phone of one participant. The females in this entire network made use of a single phone, which provided some hope for return calls from potential job offers, caseworkers, and financial assistance organizations. The women relying on this phone explained: These people [social service providers and employers] want a telephone number to call you back. And you get a recording. You don't never get to talk to nobody. 
I called; I didn't have access to a phone number for them to call back. But they acted like they would help me if I had a number. The price, however, was that the woman in control of the phone often took it upon herself to engage in conversations and provide questionable information about other participants when we called. We wondered how she answered the phone when potential help from social services or employers was on the other end. --- MEDIATED POSITIVE-Although the women reported extreme difficulty attempting to obtain direct help from either formal services or informal social networks, they seemed to have greater success when formal and informal social networks mediated services. One of the most debilitating barriers was that of obtaining proper identification (ID). As voting requires citizens to present ID, we expected this to be a minor problem. Instead it was the main reason some of the women were living on the street without services for years. For example, we acted as mediators for one homeless woman to obtain an official ID. With no home address, and all of her personal documents long gone, she had to use the dental records of her recent trip to a dentist (that we arranged for her) to first obtain a birth certificate in order to receive a copy of her driver's license, which had been stolen. In this case, we provided the mediation, but it took over a week, a photocopy machine, a fax machine, one of our personal credit cards, and the use of our institution's address to acquire her birth certificate from another state. Another example of mediation came from an unlikely source. One of our participants had tried unsuccessfully to enter an inpatient treatment center and was finally accepted at the emergency room when she said she was withdrawing from alcohol and barbiturate addiction and suicidal. 
Although she was an alcoholic, she confided to us that only by saying she was suicidal was she sure they would take her for a short stay in the intensive detoxification unit. While she was there she met another patient in the unit who was empathetic to her story. In her words: I didn't have anywhere to go. I was headed for the woods. I was headed for the tent city in front of the shelter because I can't go back to the shelter for six months. Because once you are there and you leave, you can't go back for six months. That's not an option. So I called my friend that I was in the hospital with, and she got into this program. I said, 'I'm just calling to tell you that I'm not in the motel. I don't have anywhere to go. I'll be in the woods. I don't know what else to do.' She said, 'Wait a minute, let me call the director of our program. Let me see if I can get you in.' We played phone tag all afternoon, and I got in without a face-to-face interview on her word-because of my friend. I got on the bus, and she met me to take me downtown. I was really amazed that I got in off the street on the word of one of their clients. 
This service was located in the city, and the network contact was made during a hospital stay and was not a part of her close network. But because her new contact gave her a referral to this residential treatment facility for women, she was accepted. The program included long-term case management. By her last interview one year later, she was drug-free, working and planning to leave the facility and go to the next step of the program, into a rental apartment. She remained friends with the woman she had met in the hospital and who had acted as her mediator. An informal social support group acted as a mediator for another woman in our study. She lacked the fees needed for the General Educational Development (GED) registration. She told us dejectedly: God, it's the most frustrating thing I've ever tried to do because it's [GED] something that's so necessary if you want to do anything with your life. And they make it so difficult to get it. It makes me sick to my stomach; there is no help. If there is help, I can't find it. Subsequently, a contact she met through a "Twelve Step Program" offered the financial assistance she needed. This was an example of help coming from outside of one's drug-using network. NEGATIVE-Although mediated help was usually successful, problems could arise when social services were obtained through relationships outside the women's immediate social network, especially when it was from positions of power. Participants reported unethical practices of service providers who used their connections, although their intentions were often respectable. In one example of a negative mediation, a nonprofit director promised a set of dentures for one of the women, saying he would make her "a poster girl" for the reduced dental services program he ran. When she arrived for her appointment, all she obtained for free was a consultation, with a dental plan showing she would need nearly $1,000 to complete the needed dental work. 
When asked why he did not do as promised, he explained that he was "out there trolling for money every day," and donations were dwindling. In another example, a woman in the study overcame her fear of withdrawal and faced the threat of losing her children by attempting to enter a detoxification unit through the emergency room. Homeless, she finally obtained transportation from a male friend to a detoxification facility, only to be released 12 hours later. She later tried to take barbiturates to qualify for detoxification but was also denied long-term admittance. We intervened and called the emergency room, only to be told that once stabilized, uninsured patients can legally be discharged. This failure of the social service system to connect an MA user, who also used barbiturates and methadone, directly with treatment resulted in this participant resorting to taking a dangerous combination of methadone and barbiturates to self-medicate and ease her discomfort with withdrawal. --- DISCUSSION Participants in this study were low-income women who used methamphetamine, although their greatest difficulties stemmed from poverty rather than drug use. They tended to experience many psychological, social, and organizational barriers when trying to access needed healthcare and social services through formal processes. Additionally, it seemed that many social workers treated participants in inappropriate ways: hanging up on callers, failing to return their calls, displaying negative attitudes, and making disparaging remarks about them. Social research should investigate ways to resolve these negative attitudes and behaviors, particularly toward vulnerable populations. When women experienced continuous barriers to accessing resources, they tended to rely on their social network for help. Informal network access to services increased their chances of successfully attaining these services. 
But these networks often had little social capital and sometimes failed to be of assistance, or kept the women embedded in a life involving exchanging sex for services, exploitation, feelings of shame, and self-defeating behavior, all of which potentially tie them to a cycle of drug use and poverty. Relying on these tight network bonds further binds the women to their MA-using networks. And for some poor female MA users, utilizing one's social networks to gain formal resources was not an option. They often placed themselves at risk of injury, exploitation, and abuse at the whim of uncaring strangers, which reduced their chances of recovery and social mobility. There were no significant differences between women who were active or former users of methamphetamine in their experiences of barriers to services or access to resources. The only significant difference in barriers to services was found among those who had been incarcerated and had therefore lost most privileges to government services. We also found differences in access to resources among those who participated in social programs, such as 12-step or church groups, and therefore received help from people outside their immediate social networks. However, due to the lack of transportation, most of the women were unable to continue attending any type of social program outside their immediate geographic location. Except for a few of the younger women, all of the participants in the longitudinal study were in socially disenfranchised circles of family and friends who could offer little more than emotional support. As the woman who gave us the illustrative quote used in the title of this paper described so expressively, and to paraphrase here: when your friends and your family are in the same poor and marginalized situation as you, there are few resources available to help change your socio-economic status.
Our study revealed that both formal and informal processes for obtaining needed resources resulted in positive and negative outcomes. Although this was a small study and no definite conclusions can be drawn, the mediated process of achieving resources appeared to be the most successful. Whether the mediation involved close or extended informal networks with formal sources of help, the women typically achieved their goals along with increased social capital. We suggest that mediated processes be explored further among poor drug-using populations. Models similar to mediated help for achieving needed resources already exist in case management, which helps patients move smoothly through the healthcare system. Patient navigation, a form of nursing care individualized to each patient's needs, was conceptualized by Harold Freeman specifically for poor cancer patients (Freeman, 2004). The patient navigation model was used successfully among pregnant women in drug courts (Holsapple, 2011). We suggest that a similar model, one that involves women's informal social networks, is needed to mediate healthcare and social service navigation for women with few social resources. --- LIMITATIONS The limitations of this study lie in its small sample size and exploratory nature. Further research is needed on the processes we examined for obtaining needed resources. A larger study is also needed to understand how some processes might be more harmful or successful than others. For example, reliance on family members or friends often includes reciprocity that could put the women at further risk, as with the women who relied on unlicensed "rides" from neighbors with cars, who were usually men. In addition, mediation by informal groups (e.g., 12-step or church services) also demands some commitment to the group, and women who have children to care for or who lack transportation cannot always fulfill these commitments.
We found that the failure to commit to these groups placed additional stress and guilt on the women, which is another area that needs further study. The present study does not claim to fully capture the daily challenges associated with unmet resource needs. For example, although we knew that lack of transportation was a reason why some of the women stopped going to a 12-step support group or religious service, as well as a reason for missing appointments, they often did not mention this unless we asked specifically. Moreover, many of the participants did not want to discuss how many times a day they were negatively affected by other daily barriers, such as not having someone they could trust to watch a child, or not owning decent clothes to wear to church. These were sources of shame and further stigmatization that some of the women discussed and others did not, and they need to be examined further. Finally, what we consider a barrier may be accepted as part of normal life. For example, as the women became accustomed to not having certain resources, they learned to live without them and may no longer have considered having to rely on a neighbor to be a barrier to services. They also felt that, in comparison with some of the women in their social networks, they were better off. We often found that the women did not like to talk about their lack of resources and failures to access needed services because they did not want to seem as if they were complaining. Likewise, we acknowledge that our participants were perhaps less likely to mention positive outcomes of directly accessing healthcare and social services. Finally, our category of mediated processes is limited and involves primarily mediation conducted by members of the research team. However, we did not conduct the study with a mediated resource in mind; instead, this emerged as a result of the analysis.
The process of mediation applied by engaged researchers needs to be examined more thoroughly in future research. --- CONCLUSION This exploratory study found that low-income suburban female MA users are blocked from the formal social service network by long lists of bureaucratic restrictions and other limitations. Our data reveal that, time after time, many female MA users in these southern suburbs remained unable to obtain basic resources from the social service and healthcare systems designed to help them. While bureaucratic inertia and apathy towards the needs of the poor are not new findings, this is one of the few studies to have looked at how disenfranchised women living in the suburbs with few resources access needed services. What we found is that many of the public resources available to women living in the cities were unavailable to women in the suburbs. When they resorted to getting help from their social networks, they remained anchored to marginalized members of disenfranchised communities with low or negative social capital, leaving little hope for a better life. We know that social exclusion plays a large role in health disparities across the life course (Marmot, 2005), and that social services and safety nets for the poor have not kept pace with the increasing dispersion of the poor from the cities to the suburbs (Felland, Laeur & Cunningham, 2009). In a seminal article written in 1976, Syme and Berkman wrote: "rather than attempting to identify specific risk factors for specific diseases...it may be more meaningful to identify those factors that affect general susceptibility to disease. Of particular interest would be research on the ways in which social and familial support networks mediate between impact of life events and stresses of diseases outcomes (p. 27)." The medical field has made long strides since then to incorporate the social determinants of health into research and practice.
We suggest that today, rather than merely attempting to break down the barriers to accessing healthcare and social services, we instead identify the processes by which the poorest and most disenfranchised are obtaining needed resources. Mediated processes were generally successful in our small exploratory study; however, the consequences of mediation from different sources remain largely unexamined. Our findings suggest that mediated processes need to be incorporated into formal healthcare and social services, as shown in the patient navigation care model (Freeman, 2004; Holsapple, 2011), and that mediated processes using informal social networks need to be further explored. The findings highlight the value of employing a research design that actively involves the participants in the study. Similar to what has been known by various names, including "participatory research" (Cornwall & Jewkes, 1995), "rapid ethnographic assessment" (Carlson, Singer, Stephens & Sterk, 2009), and "engaged ethnography" (Scheper-Hughes, 2004), the dynamic nature of our research produced an unintended applied ethnographic design. We found that, as compassionate women studying women, we could not simply watch our participants struggle when some of the solutions to the challenges they faced were within our reach. By becoming engaged in the process, we applied our resources and found that structured mediation is needed. Our findings suggest that mediation should be incorporated more often as part of healthcare and social services. Moreover, mediation must take into account the challenges presented by suburban environments, especially for the suburban poor. --- ABSTRACT To examine access to needed resources among low-income methamphetamine-using females, we conducted interviews with 30 women living in poor suburban communities of a large southeastern metropolis.
As an invisible population in the suburbs, underserved by social services, the women remain geographically and socially anchored to their poor suburban enclaves as transit, treatment and education remain out of reach. The longitudinal study included three interviews over a two-year period. Resources needed by the women were identified in the first interview, and a list of available services was provided to them. In subsequent interviews we asked how they accessed the services or what barriers they encountered, and discussed these further in focus groups. Using a social capital framework in our qualitative analysis, we identified three processes for accessing needed resources: formal, informal and mediated. Implications for policymakers and social service providers are suggested, and models for future development proposed. Methamphetamine (MA) was proclaimed an epidemic as it crossed from the western coast of the United States, settled in the heartland and continued eastward, impacting primarily urban populations of young people and men who had sex with men (MSM), and rural populations in an increasingly poorer countryside.
Introduction --- Mortality differentials among socio-economic groups are among the most consistent findings in public health, but the magnitude of these inequalities differs substantially between countries. A recent study of inequalities in health in 22 European countries in the 1990s showed that some southern European populations have relatively small educational inequalities in mortality. 1 Smaller inequalities in mortality in Spain and Italy were also found in a previous study, 2 but have never been satisfactorily explained. We therefore conducted an in-depth study of potential explanations for the smaller inequalities in mortality in Spain. Spain is a young democracy, with an underdeveloped welfare state, important income inequalities and a universal national health service. 3 Evidence on socio-economic differentials in mortality based on individual data is relatively scarce, due to the poor quality of socio-economic information included in death certificates and to restrictive legislation with regard to linkage of the death register with census information. 4,5 The international literature has focused mainly on the city of Barcelona or the region of Madrid. [5][6][7][8][9] One factor standing out from the more detailed analyses that have been performed is smoking: inequalities in smoking are smaller in Spain and Italy than in other Western European countries, particularly among women, and this is likely to contribute to smaller inequalities in ischemic heart disease 10,11 and lung cancer. 12 Studies that have tried to explain the comparatively small inequalities in mortality in Spain are non-existent, and a comprehensive explanation has been lacking so far. The present study was based on evidence from three Spanish populations (the city of Barcelona, the region of Madrid and the Basque Country), which were compared with six other Western European populations [Finland, Sweden, Norway, Denmark, Belgium and Turin (Italy)].
Our analysis aimed at identifying the specific causes of death and some of the specific determinants which contributed to smaller inequalities in total mortality in the three Spanish populations. --- Methods --- Study population Mortality data were obtained from longitudinal mortality studies based on linkage of death registries to population censuses and consisted of deaths and exposure counts by sex, 5-year age groups, cause of death and level of education (table 1). The data covered national (Finland, Sweden, Norway, Denmark, Belgium), regional (Madrid, the Basque Country) and urban (Barcelona and Turin) populations. The linkage between census data and death registries was achieved for almost 100% in all populations except Barcelona, Madrid and the Basque Country, where linkage was obtained for only 94.5%, 70% and 94.1% of the population, respectively. To correct for the resulting underestimation of deaths, we weighted the number of deaths in the three Spanish populations with a correction factor: 1/0.945 for Barcelona, 1/0.7 for Madrid and 1/0.941 for the Basque Country. Data on determinants of mortality by socio-economic position came from nationally representative health or multipurpose surveys with a cross-sectional design (table 1). --- Measures The causes of death were classified according to the ninth and 10th revisions of the International Classification of Diseases (ICD). We analysed a few large groups of causes [cardiovascular diseases (CVD), cancer, infectious diseases, respiratory diseases, alcohol-related causes, external causes and all other causes], as well as a selection of specific causes of death. Data on determinants included smoking, obesity, sedentary lifestyle and health services utilization. Smoking status was measured as self-reported current tobacco smoking. Obesity was measured on the basis of self-reported height and weight, and defined as a body mass index >29 and ≤70.
Sedentary lifestyle was measured either by asking which option best described respondents' leisure-time activities or by asking about the frequency of respondents' physical exercises or activities. The measurement of health services utilization was based on visits to a general practitioner, to specialists and to any physician. All analyses of health services utilization were adjusted for self-assessed health. Educational level declared at the census and during the interview surveys was used as a measure of socio-economic status and classified according to the International Standard Classification of Education (ISCED) using three categories: low (primary and lower secondary education), middle (upper secondary education) and high (post-secondary or tertiary education). Persons with missing information on educational level (generally <5%) were excluded from the analysis. --- Statistical analysis Analyses were conducted separately for men and women aged 30-74 years at baseline (i.e. at the time of census). The follow-up time was 10 years for most countries, except Belgium, Denmark and the Basque Country (5 years) and Madrid (1.5 years). To obtain comparable ages at death, analyses were conducted on slightly older age groups at baseline for countries with shorter follow-up periods (35-79 years for Madrid, and 30-79 years for Belgium and the Basque Country). In Denmark, no information on socio-economic status was available for subjects aged >75 years. Further information on this adjustment procedure can be found elsewhere. 13 Mortality rates by educational attainment were age-standardized with the direct method using the European Standard Population. The contribution of a specific cause of death to inequalities in all-cause mortality between low- and high-educated people was determined as the share of the rate difference for that cause of death out of the rate difference for total mortality.
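The two steps just described, direct age-standardization and the cause-contribution share, can be sketched as follows. The age bands, standard-population weights and rates below are illustrative stand-ins, not the actual European Standard Population or study data, and only two causes stand in for the full cause-of-death list:

```python
# Direct age-standardization: weighted mean of age-specific rates using a
# fixed standard population; the contribution of a cause is its share of the
# low-vs-high-educated difference in standardized mortality.

def standardized_rate(age_specific_rates, std_weights):
    """Direct method: weighted mean of age-specific rates (per 100 000)."""
    total = sum(std_weights)
    return sum(r * w for r, w in zip(age_specific_rates, std_weights)) / total

# Illustrative standard-population weights for three broad age bands
# (30-44, 45-59, 60-74 years)
weights = [3000, 2500, 1500]

# Illustrative age-specific mortality rates per 100 000, by cause and education
low_edu = {"CVD": [40, 200, 900], "cancer": [30, 250, 700]}
high_edu = {"CVD": [20, 120, 600], "cancer": [25, 220, 650]}

gaps = {cause: standardized_rate(low_edu[cause], weights)
              - standardized_rate(high_edu[cause], weights)
        for cause in low_edu}
total_gap = sum(gaps.values())

for cause, gap in gaps.items():
    print(f"{cause}: {100 * gap / total_gap:.0f}% of the educational gap in mortality")
```

With these made-up numbers the CVD gap dominates, mirroring the kind of decomposition reported in figure 1 of the study.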
The magnitude of mortality inequalities according to educational level was summarized by relative (relative index of inequality, RII) 14 as well as absolute (slope index of inequality, SII) measures of inequality, 1 estimated using Poisson regression as appropriate for count data. Prevalence rates of determinants by educational level were also age-standardized, and inequalities in determinant prevalence were summarized by RIIs. As the prevalence of the determinants was relatively high (>10%), we used log-binomial regression. --- Results --- Mortality analyses All populations included in the analysis show a graded relationship between education and mortality, but the absolute gap in mortality between the lowest and highest educated is smaller in the three Spanish populations (supplementary figure 1). Average mortality rates are also lower in Spain than in the other Western European populations, both among men (with the exception of Barcelona) and particularly among women, where mortality in the lowest educated group is lower than that of the highest educated group in all other Western European populations. Table 2 shows relative inequalities in total and cause-specific mortality. Among men, relative inequalities in total mortality in all Spanish regions tend to be smaller than those in most other populations, although the differences are neither entirely consistent nor substantial. Among women, relative inequalities in total mortality in the three Spanish regions are substantially smaller than those in all other populations, with the exception of Turin, which has similarly small RIIs. Among men, relative inequalities in CVD mortality in the three Spanish regions are smaller than those in all other populations, but inequalities in mortality from other causes of death are similar in magnitude, or even larger than those elsewhere.
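The RIIs and SIIs discussed above rest on replacing the education categories with the midpoints of their cumulative population shares (ridit scores) before regression. A minimal sketch of that scoring step, with hypothetical population shares (the ordering convention varies; here groups run from high to low education):

```python
# Ridit scoring for the RII/SII: each education group gets the midpoint of
# its interval on the cumulative population scale [0, 1].
def ridit_scores(population_shares):
    """Return the midpoint of each group's cumulative population interval."""
    scores, cumulative = [], 0.0
    for share in population_shares:
        scores.append(cumulative + share / 2)
        cumulative += share
    return scores

# Hypothetical shares: high (20%), middle (30%), low (50%) education
print([round(s, 4) for s in ridit_scores([0.2, 0.3, 0.5])])  # [0.1, 0.35, 0.75]
```

In the regression step, the exponent of the Poisson slope on these scores (with log person-years as an offset) gives the RII, the mortality ratio between the hypothetical bottom and top of the educational hierarchy, while the SII is the corresponding absolute rate difference.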
Among women, relative inequalities in mortality from cancer are smaller in the three Spanish regions, but inequalities in mortality from other causes are not consistently smaller than those in other populations. Moreover, a reverse pattern was observed for lung cancer among women in the three Spanish populations and Turin, and for breast cancer among women in all populations except Turin and the Basque Country. The large inequalities in mortality from infectious diseases in Spain are predominantly due to AIDS mortality. More detailed data on cause-specific mortality by educational level can be found in supplementary tables 2-4. Figure 1 quantifies the contribution of specific causes of death to the difference in age-standardized mortality rates between low and high educated men and women. It shows that the smaller absolute inequalities in mortality in the three Spanish populations are partly due to smaller absolute inequalities in CVD mortality. These are negligible in Spain, but substantial in most other populations. Among men, these smaller contributions of CVD are due to both lower average rates of mortality and smaller relative inequalities in mortality (table 2). Among women, these smaller contributions of CVD are mainly due to lower average rates of mortality, and not to smaller relative inequalities in mortality. Among women, smaller or negative absolute inequalities in cancer mortality also contribute importantly to the smaller absolute inequalities in mortality in Spain (see supplementary table 5). --- Analyses of survey data Among men, inequalities in smoking are smaller in Spain than in most other populations (figure 2) because of the comparatively prevalent smoking among higher educated Spanish men (P-value < 0.0001 for the comparison between Spanish men and the rest of the countries), while among women, they are small or absent in Spain because higher educated Spanish women smoke more than the lower educated.
Similarly, the smaller inequalities in sedentary lifestyle in the Basque Country are due to the fact that the higher educated are less physically active (P-value < 0.0001). With regard to obesity, the inequalities are substantial in all countries (figure 2). After adjustment for self-assessed health, inequalities in health services utilization tended to favour the lower educated with regard to visits to a GP in most populations, including Spain. The opposite was observed for the use of specialized services, with the exception of the Basque Country. --- Discussion --- Summary of findings The Spanish populations have considerably smaller absolute inequalities in total mortality than other Western European populations. This is the result of both lower average levels of mortality and smaller relative inequalities in mortality. However, the analysis by cause of death reveals an important heterogeneity: the smaller relative inequalities in total mortality in Spain are due mainly to comparatively small inequalities in mortality from CVD (men) and cancer (women). Inequalities in mortality from most other causes are not smaller in Spain than elsewhere, and inequalities in infectious disease mortality are even substantially larger. Spain also has smaller inequalities in smoking and sedentary lifestyle, but not in health services utilization, and its inequalities in obesity among women are larger than in the other populations. On the basis of these four determinants, one cannot therefore conclude that the exposure of lower socio-economic groups to health risks is generally more favourable in Spain than elsewhere.
--- Limitations Although education as a measure of socio-economic position remains constant during adult life and old age, 15,16 reverse causation is less likely 17 and educational level is comparable across European countries when broader categories classified according to the ISCED are used, 18 the impact of education on individual overall socio-economic position may differ between countries. The comparability of the mortality rates may be compromised by differences between countries in calendar year at the start and in the duration of follow-up. While we adjusted our results for the different follow-up periods, we could not correct them for different starting years. Since starting years were mostly earlier for Northern Europe, and since inequalities in mortality have been widening in these European countries, 19 any bias due to differences in starting year would tend to lead the differences in the magnitude of mortality inequalities in Spain to be underestimated. Regarding the differences in length of follow-up, a 'sensitivity analysis' (comparison of countries with similar lengths of follow-up) gave the same results. The data available on the prevalence of determinants and the mortality follow-up applied to the same period. Data that would allow a proper time-lag between exposure and outcome to be incorporated in our analysis were not available. However, it is unlikely that the social patterning of these risk factors changes substantially within a 5- or 10-year period. We cannot exclude that some of our cause-specific results are affected by inaccuracies such as differences in certification or coding of causes of death between countries and socio-economic groups. 20 However, we believe that the results using broad cause-of-death categories are likely to be robust.
Differences in the magnitude of inequalities in mortality between Northern and Southern European populations may be biased by the fact that we compared national mortality data for Northern European countries with urban or regional mortality data for Southern European countries. Although Turin, Barcelona, Madrid and the Basque Country are relatively more prosperous than other regions in Italy and Spain, the results show that inequalities in mortality in Turin, Barcelona and Madrid (where the share of the urban population is very large) are not greater than in the Basque Country (which contains only three medium-sized cities). In addition, on the basis of national mortality data during the 1980s, Kunst et al. 21 have shown smaller inequalities in mortality in Italy and Spain as a whole. Recently, Regidor et al. 22 reported small inequalities in mortality among older people in Spain. We therefore think that the comparatively small inequalities in mortality observed in Barcelona, Madrid and the Basque Country can be generalized to Spain as a whole. --- Interpretation The smaller educational inequalities in mortality observed in Spain are likely to be an effect of the later socio-economic modernization of Spain compared with Northern Europe. Socio-economic modernization refers to the historical process of large-scale socio-economic changes in society, such as rising prosperity, industrialization, urbanization and the expansion of mass education. This may have led to smaller educational inequalities in mortality in two ways. The first is that, due to later socio-economic modernization, educational attainment may still be less important as a social stratifier in Spain than in Northern Europe. During the 1990s, the proportion of low educated people was still ~70% in Spain, against only 30-50% in Northern Europe (supplementary table 6).
Spain's very rapid economic development after the Franco dictatorship 23 may have created a mismatch between education and other status-attainment variables such as income and occupational class. This is confirmed by a review of comparative studies which found a weaker relationship between educational attainment and occupational class in Spain compared with Northern European countries 24,25 and the Netherlands. 26 The health survey data also suggested a weaker relationship between educational level and income in the Basque Country than in several Northern European countries, particularly among men (supplementary table 7). The second possible pathway is that later socio-economic development has delayed the epidemiologic transition. 27 The transition from a mortality regime dominated by infectious diseases to one dominated by CVD and cancer occurred several decades later in Spain than in Northern Europe. 28 The small absolute inequalities in CVD mortality in Spain are partly because average rates of mortality from CVD, particularly IHD, have remained low, especially among men (supplementary tables 2-4). While the increase in IHD mortality started many years later than in Northern Europe, the decline started only a few years later. 29 The decline in IHD mortality in Spain after 1975 has been ascribed to the decline in smoking (only among men) and to improvements in medical care (e.g. cardiovascular drugs and intensive care units). 29 In other words, Spain already started to benefit from advances in knowledge about risk factors for IHD and advances in medical care before the epidemic could reach a higher peak. That IHD mortality has never reached great heights in Spain is probably also due to the role of the Mediterranean diet, with its comparatively high consumption of wine, fish, fruits, vegetables and olive oil.
30 In view of the fact that partial adherence to the Mediterranean diet seems to explain the low average rates of mortality from IHD in Spain, it seems likely that adherence to this diet by lower socio-economic groups also explains part of the smaller inequalities in IHD mortality and the low rates of IHD mortality among the high educated despite their high prevalence of smoking and physical inactivity. This is confirmed by a review of inequalities in diet in different European countries, which shows that the association between education and fruit and vegetable consumption is inconsistent in Spain (and clearly positive in Northern Europe), while the higher educated in Spain consume more animal fat and fewer vegetable oils than the lower educated. 31 Not all studies, however, reach the same conclusions. 32,33 Another reason for the smaller relative inequalities in IHD mortality in Spain can probably also be found in the different timing of epidemiologic developments. Previous studies have concluded that Southern European countries tend to be at an earlier stage of the smoking epidemic, in which smoking is still more prevalent in upper socio-economic groups, especially among older people and women. 19 Regarding cancer mortality, the smaller absolute inequalities among women in the three Spanish populations were due partly to the strong reverse gradients for breast and lung cancer. Breast cancer is related to reproductive behaviour (in particular, a high age at first pregnancy), and reverse gradients of breast cancer arise because higher educated women are the first to delay pregnancy to higher ages. 34 The stronger reverse gradient in Spain may be due to the fact that this aspect of modernization started later, too. 34 Spain had very large inequalities in mortality from infectious diseases, due mainly to AIDS.
During the 1990s, large inequalities in AIDS mortality in Spain were driven by a combination of lower access and adherence to treatment and unfavourable material conditions among vulnerable groups. 35 The introduction of highly active antiretroviral therapy (HAART) has contributed importantly to narrowing absolute inequalities in AIDS mortality in Spain. 36 --- Conclusion Educational inequalities in cause-specific mortality and their determinants are not consistently smaller in Spain than in other Western European populations. Smaller absolute inequalities in total mortality in Spain reflect smaller absolute inequalities in mortality from CVD and cancer. On the other hand, Spain does not have smaller inequalities in mortality from many other causes of death, and as many of these relate to living conditions, our findings suggest that the smaller inequalities in total mortality in Spain do not reflect a generally more favourable situation with regard to social inequality. The smaller inequalities in mortality from CVD and cancer are likely to be due to Spain's later socio-economic modernization. While the Spanish example shows that inequalities in total mortality are not inevitable, the favourable situation in terms of inequalities in mortality from CVD and cancer in this country seems to be a historical coincidence rather than the outcome of deliberate policies. Unfortunately, in view of the on-going changes in social-protection policies in Spain and the changing socio-economic distribution of risk factors for mortality in the Spanish population, 37 this favourable situation is also likely to be transitory. --- Supplementary data Supplementary data are available at EURPUB online. Conflicts of interest: None declared.
--- Key points Although social inequalities in mortality and health are relatively small in southern European countries compared with the rest of Europe, the smaller size of the inequality in total mortality in Spain does not represent an unambiguously favourable situation. Smaller inequalities in mortality in Spain were only found for cardiovascular disease and cancer. Inequalities in mortality from most other causes were not smaller in Spain than elsewhere. The smaller inequalities for cardiovascular diseases and cancer did not result from lower risk factor prevalence in lower socio-economic groups but from relatively high risk factor prevalence in higher socio-economic groups. The on-going changes in social-protection policies in Spain and the changing socio-economic distribution of risk factors for mortality in the Spanish population need to be taken into account to tackle health inequalities.
Introduction The escalating environmental challenges of our time demand urgent action, placing a spotlight on critical issues such as plastic waste and consequential greenhouse gas emissions [1,2]. Recent studies underscore the alarming trajectory of plastic pollution, which is expected to inflict severe damage on natural ecosystems and compromise air and soil quality [3]. Among the industries contributing significantly to these challenges, the food service sector stands out for its notorious generation of single-use plastic waste. Food and beverage packaging alone accounts for approximately 15% of total plastics produced since the 1950s [4]. While commendable strides have been made in certain areas, e.g., the transition to digital receipts, paper straws, and alternatives to plastic packaging, as well as the emergence of environmentally related labeling on food products, the pressing need for effective, behavioral interventions remains [5,6]. Within the food service sector specifically, restaurants play a pivotal role in bridging material innovations with consumer behaviors and can act as change agents in enacting strategies [7]. However, achieving transformative change requires a deeper integration of core environmental attitudes that influence consumer behaviors. This monumental task mandates a profound introspection into what truly drives green lifestyles and a rigorous evaluation of the multitude of factors influencing eco-decisions. These factors are not limited to, but certainly encompass, socio-economic backgrounds and personal attributes, each playing its pivotal role in shaping attitudes [6,7]. Publications in the literature spanning various disciplines consistently elucidate the interplay of personality, demographics, and foundational environmental attitudes, offering pivotal insights into sustainable consumption patterns [8][9][10][11]. 
An interesting nuance emerges when one delves into the role of age: though not always a strong standalone predictor of eco-consciousness, younger digital-native cohorts seem to exude a heightened sense of responsibility towards sustainable practices [12]. Further compounding this, there is robust evidence pointing towards a correlation wherein individuals from elevated educational and economic echelons lean more towards environmentally friendly behaviors [13,14]. Intimate familiarity with environmental issues can act as a potent catalyst, triggering more aligned behaviors [15]. When we navigate the domain of gender studies, a pattern crystallizes: there seems to be a female propensity towards eco-conscious behaviors, a phenomenon shaped by an amalgamation of societal imprints, gender roles, and unique concerns spanning reproductive health and beyond [16][17][18][19]. Additionally, the nexus between vegetarianism and pro-environmental behavior is intricate, yielding mixed research outcomes. Although vegetarianism does not universally signify heightened awareness of environmental health [20], evidence indicates that adopting a vegetarian diet, compared to a meat-based one, can lead to reduced greenhouse gas emissions [21]. Additional studies have highlighted that a considerable proportion of the population exhibits hesitancy towards pro-environmental measures, often stemming from either a propensity to prioritize short-term gains or a potential lack of awareness regarding their environmental impact [22]. This is further exacerbated by a prevailing sentiment wherein individuals often deflect personal accountability towards larger institutional entities, a sentiment that becomes entangled with economic constraints that might hinder sustainable decisions [23]. There is also a bias coming from self-identification, which needs to be considered in the understanding of these trends [6]. 
As the world hurtles towards technological advancements, emerging tools and methodologies, like "nudging", present themselves as formidable allies in our journey towards sustainability [24,25]. Nudging is a tactical concept from behavioral economics that refers to making small, subtle changes to the environment or decision-making processes to encourage or "nudge" people towards making more sustainable choices [24]. It can change behavior and attitudes without limiting choices or mandating actions [26,27]. Nudging with messages about the impact of plastic waste has also been used in many contexts [28]. It has been used specifically to reduce plastic use, e.g., to reduce plastic bag usage in supermarkets [29]. Nudging has also been researched within the context of plastic pollution by referencing its detrimental effects on oceans and aquatic life [24]. However, different types of nudges may be more effective for different groups than others, particularly concerning gender [25]. Technological advancements in Virtual Reality (VR) enable realistic simulations of food-shopping scenarios, providing an accurate platform to evaluate influences on consumer choices [30]. As research in this area grows, VR is set to become a key tool in promoting sustainable food consumption, presenting rich insights for both researchers and policymakers. However, integrating VR into studies requires careful attention to avoid introducing biases from the immersive environment [31]. Factors such as the language used and the visual cues presented in VR can sway participants' perceptions, potentially affecting the study outcomes. Furthermore, personal attributes like individual past experiences, educational background, age, and familiarity with the items under study can lead to varied interpretations. This emphasizes the importance of designing VR experiences based on sound research to minimize unintended biases. 
Nonetheless, a certain level of subjectivity remains unavoidable in crafting and deploying VR scenarios, given the inherent personal touch involved in the design process. By delving into the attitudes and behaviors of consumers in a virtual restaurant scenario, this research aims to contribute valuable insights. The objective of this study is to explore potential disparities in pro-environmental behaviors among individuals based on their dietary preferences and packaging selections when they make takeaway purchases at restaurants. This investigation formulated the following three hypotheses pertaining to environmental attitudes and behavior: H1: Primarily, we posit that attitudes towards environmental action are intrinsically connected to demographic factors including age, gender, and education; H2: Secondly, the study hypothesizes that environmental attitudes correlate with choices related to diet and packaging; H3: Lastly, it is postulated that interventions geared towards heightening environmental awareness will yield a positive transformation in consumer selections towards less plastic and more sustainable menu items. To scrutinize these hypotheses, a VR restaurant scenario was orchestrated, wherein participants' selections were evaluated through choice-based tasks. Half of the respondents were presented with a warning message depicting an animal harmed by plastic exposure, while the control group did not see this cue. All participants were asked to provide responses to established scales measuring environmental literacy, responsibility, and willingness to embrace eco-friendly consumption. --- Methods This study utilized a VR experiment, simulating a takeaway restaurant environment, to explore consumer behavior and attitudes related to environmental sustainability.
In this section, we detail the experiment's design, wherein participants, divided into intervention and control groups, engaged in choice-based tasks within our immersive VR setup, followed by a survey incorporating key measures to assess the outcomes. --- Virtual Reality Experiment Incorporating the immersive potential of VR, this study unfolded within a simulated setting resembling a takeaway restaurant for order collection. The VR experience initiated with participants virtually embarking on a journey from their own homes to a modest eatery. Alongside the collection of participants' demographic particulars encompassing age, gender, and educational background, an intervention group comprising half of the participants received a cautionary infographic about the detrimental impact of plastic waste on ocean ecosystems (depicted in Figure 1). Following exposure to the intervention (where applicable) and completion of self-assessment tasks, participants immersed themselves in a VR scenario that replicated a restaurant's ordering process (illustrated in Figure 2). The virtual environment faithfully recreated the ambiance participants would encounter in an actual restaurant. In this context, a virtual waiter engaged participants in two choice-based tasks. The initial task required them to select from meal options: vegetarian, fish, or meat-based. Subsequently, the second task involved their preference for packaging materials for the takeaway meal, offering a selection between recyclable and non-recyclable plastic. --- Measurement of Attitudes After the VR simulation, in order to evaluate participants' stances on environmental concerns and sustainable practices, this study employed three scales: the perceived seriousness of environmental behavior (PS), perceived environmental responsibility (PER), and green purchase intention (GPI) scales [32]. Respondents rated a series of statements on a 7-point Likert scale, ranging from "totally disagree" to "totally agree". Higher scores indicated stronger pro-environmental inclinations. --- Benefits and Limitations This study harnessed the synergy between VR simulations and questionnaires to garner tailored data and attitudes for choice-based experiments. This approach facilitated the exploration of how demographic variables and personality traits influence consumer decisions and pro-environmental attitudes. VR simulations offered a cost-effective means of creating quasi-realistic scenarios with precise control over responses.
However, it is important to acknowledge the inherent reliance on self-assessment, the time-intensive nature of in-person procedures, and the potential for divergent choices influenced by real-world factors like price and social pressures. --- Hypothesis Tests To ascertain reliability, Cronbach's alpha was computed for the three green attitude scales. A threshold of 0.6 was selected, based on the general consensus that this is a sufficient condition [33]. Given the small sample size and the detected lack of normal distribution, the means of the attitude scales were compared with Mann-Whitney tests. This was repeated for all three scales and the subgroups (i) Gender (Female vs. Male), (ii) Education (Undergraduate vs. Graduate), (iii) Message (Intervention vs. Control), (iv) Meal Choice (Plant- vs. Animal-based), and (v) Package Choice (Recyclable vs. Non-recyclable). Two-sided hypotheses with a significance level of 0.05 were used. For the associations between Meal Choice-Gender, Meal Choice-Intervention, Package Choice-Gender, and Package Choice-Intervention, Fisher's test of independence was used, as the chi-square condition of at least five entries in every cell was not fulfilled. The descriptive and inferential analyses were carried out using R version 4.3.1, Microsoft Excel, and Google Sheets. --- Results --- Participant Profile --- Sampling In this exploratory study, we recruited 22 students from the campus and randomly assigned them to two groups: an intervention group of 11 students who received a warning message, and a control group, also comprising 11 students. The decision to use a relatively small sample size was primarily driven by the study's exploratory nature, aiming to test initial hypotheses and collect preliminary data within the context of Virtual Reality (VR) technology.
Additionally, the inherent constraints of VR environments, particularly regarding participant management and data collection, played a significant role in determining the sample size. --- Demographics The average participant age was 23.4 years (with a standard deviation of 8.9 years), comprising 73% females and 27% males. Among them, 77% were pursuing bachelor's degrees, 18% were enrolled in master's programs, and 5% were Ph.D. candidates. --- Attitude Measurement All scales demonstrated satisfactory reliability levels, as shown in Table 1. Shapiro-Wilk tests showed that the PER scale adhered to the assumption of normality (p = 0.07), while the PS (p < 0.001) and GPI (p = 0.04) scales deviated from a normal distribution. Consequently, non-parametric tests were employed for all three scales. --- Influences on Environmental Attitudes Table 2 presents confidence intervals and p-values from Mann-Whitney tests, comparing (a) gender differences, (b) individuals with varying educational levels, and (c) the intervention and control groups across the three environmental attitude scales. Notably, gender emerged as the sole factor significantly associated with pro-environmental attitudes, with females consistently scoring as more environmentally conscious. Hypothesis 1 was confirmed in part. For the intervention vs. control comparison, the differences were 0.5 (-0.1 to 1.2, p = 0.2), 0.3 (-0.3 to 1.0, p = 0.3), and 0.0 (-0.9 to 0.9, p = 0.8) across the three scales. Note: 95% confidence interval given in parentheses. **, significant at 95% confidence level; ***, significant at 99% confidence level. --- Attitude Scales and Consumption Choices The relationship between pro-environmental dimensions as measured by the three scales and the consumption choices in the trial is displayed in Table 3. Note: *, significant at 90% confidence level; **, significant at 95% confidence level. H2 was confirmed for the choice of meal, where those picking the green meal also reported as more green in their values and purchase intentions.
There was no similar pattern between packaging choices and environmental attitudes. --- Intervention Message, Packaging, and Choice of Meal The packaging preference and meal choice, considered in terms of intervention vs. control, are summarized in Table 4. For meal choice, Fisher's test gave a p-value of 0.04, indicating that the VR message had a significant influence on the meal selected. For packaging, Fisher's test gave a p-value of 1, indicating that the message had no demonstrable influence on packaging selection. The combined findings from Table 4 thus offer partial support for Hypothesis 3: the VR message significantly shifted meal choices, but it did not produce a statistically significant shift towards items with reduced plastic usage. Further analysis revealed that neither gender nor education level significantly influenced meal choices or packaging preferences (p > 0.05). While females tended to choose vegetarian meals and recyclable packaging more than males, and there were differences between undergraduates and graduates, these variations were not statistically significant according to results from Fisher's test. This suggests that gender and education may not be major factors in environmental choices regarding meals and packaging. --- Discussion This study delved into the intricate interplay between pro-environmental attitudes, dietary preferences, and packaging choices within the context of a VR restaurant scenario. The results underscored the significant influence of gender on pro-environmental attitudes, aligning with previous research highlighting women's heightened engagement in sustainable behaviors [16,17]. The potential causative links between gender roles, psychological differences, and perceptions of responsibility necessitate further exploration to comprehend the underlying dynamics.
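As an aside for readers who wish to reproduce this style of analysis: the study ran its tests in R, but the three procedures it relies on (a Cronbach's alpha reliability check, a two-sided Mann-Whitney comparison, and Fisher's exact test on a 2x2 choice table) can be sketched in pure Python. The functions and the Likert/choice data below are illustrative assumptions, not the study's actual code or dataset; the Mann-Whitney p-value uses the normal approximation without a tie correction.

```python
import math
from statistics import pvariance

def cronbach_alpha(items):
    """Scale reliability; `items` is one list of scores per item,
    with respondents in the same order (population variances throughout)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(it) for it in items) / pvariance(totals))

def _avg_ranks(values):
    """Map each distinct value to its average 1-based rank in the pooled sample."""
    s = sorted(values)
    ranks, i = {}, 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        ranks[s[i]] = (i + j + 1) / 2  # average of positions i+1 .. j
        i = j
    return ranks

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie correction in the variance; adequate for illustration)."""
    ranks = _avg_ranks(x + y)
    n1, n2 = len(x), len(y)
    u1 = sum(ranks[v] for v in x) - n1 * (n1 + 1) / 2
    z = (u1 - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u1, p

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the table [[a, b], [c, d]]:
    sum hypergeometric probabilities of all tables at least as extreme."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    def p_table(x):
        return math.comb(col1, x) * math.comb(n - col1, row1 - x) / math.comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Hypothetical 7-point Likert responses for a 3-item scale (5 respondents):
scale_items = [[7, 6, 5, 7, 4], [6, 6, 5, 7, 5], [7, 5, 4, 6, 4]]
print("alpha =", round(cronbach_alpha(scale_items), 2))

# Hypothetical per-group scale means (intervention vs. control):
u, p = mann_whitney_u([5.2, 6.1, 4.8, 5.9], [4.1, 4.6, 5.0, 3.9])
print("U =", u, "p =", round(p, 3))

# Hypothetical 2x2 table: rows = intervention/control, columns = veg/non-veg meal:
print("Fisher p =", round(fisher_exact_2x2(9, 2, 4, 7), 3))
```

With n = 11 per group, the exact test is the right choice: the chi-square approximation's requirement of at least five expected counts per cell (noted in the Hypothesis Tests section) is easily violated in tables this small.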
All three hypotheses were fulfilled, but only in part, as the main effects were: (i) females being more conscious of environmental consumption, (ii) those choosing plant-based meals having higher values on the green scales, and (iii) the warning message leading to more vegetarian meals being chosen. Hence, the investigation suggested a connection between environmental awareness interventions and positive shifts in attitudes, although the attitude differences did not reach statistical significance. This echoes the principle of "nudging" as a valuable tool to encourage sustainable behaviors [26,27]. However, the primary intended effect of reducing plastic usage by selecting recyclable packaging was not observed. The message was more effective in shifting respondents towards the meal choice without animal proteins. The link between dietary choices and environmental consciousness has been investigated extensively, with a vegetarian diet frequently associated with lower environmental impact [22]. This study's findings add to this narrative, but the nuanced nature of dietary preferences, influenced by factors beyond environmental concerns, highlights the complexity of causation. With regards to socio-demographics, the gender dimension has been underlined through multiple surveys confirming that females in general are better aligned with environmental health in their attitudes and consumption [17][18][19]. Regarding higher education, research has identified two opposing effects: on the one hand, greater knowledge of climate issues spurs more sustainable actions; on the other, higher income leads to a higher footprint, even more so for the most affluent and highest echelons of social status [34]. In terms of packaging choices, the research did not identify statistically significant relationships between gender, pro-environmental attitudes, and packaging preferences. However, the potential implications of these choices on plastic waste and environmental impact remain paramount.
Efforts to reduce single-use plastics and promote eco-friendly alternatives require sustained attention [29]. Despite its contributions, this study possesses certain limitations. The relatively small sample size and the use of a convenience sample from a single demographic could undermine the generalizability of the results. Furthermore, the potential influence of contextual factors like price and social pressure on participants' choices in the VR scenario is not fully addressed. Additionally, while the VR approach facilitates control and immersion, the experiential aspect might introduce biases related to personal interpretation. --- Conclusions This study ventured into the dynamic relationship between pro-environmental attitudes, dietary choices, and packaging preferences within a VR restaurant scenario. While the results suggested significant correlations between gender and pro-environmental attitudes and a potential connection between adopting vegetarian diets and pro-environmental attitudes, the study also highlighted the nuanced yet complex nature of these relationships. The potential impact of interventions, such as environmental awareness messages, on transforming attitudes towards sustainability was also hinted at, although not statistically proven. As behavioral interventions like nudging become increasingly common practice, this study's application of VR serves as a stepping stone towards understanding how individuals interact with choice scenarios in the quest for a greener future. It is important to reiterate that factors beyond dietary choices, such as social norms, access to resources, and education level, influence consumers' pro-environmental behavior. Therefore, while dietary choices can be essential in reducing environmental impact, they are not the only factor and should not be viewed in isolation.
One's consumption patterns, whether related to diet or materials, may reflect personal values that result from individualistic and societal conditioning. Identifying solid correlations between these factors and dietary choices may be possible, though implying causation may be misleading and merits continued avenues of exploration. Ultimately, the pursuit of sustainable behaviors and a reduction in environmental impact found in this study underscores the monumental need for continued interdisciplinary research, larger-scale studies, and strategic interventions for the food service sector. This task mandates a profound introspection into what truly drives green lifestyles and a rigorous evaluation of multifaceted factors' pivotal roles in shaping attitudes. --- Data Availability Statement: Data will be made available upon request by contacting the corresponding author. --- Author Contributions: Conceptualization, A.R.F.; methodology, A.R.F.; formal analysis, H.L.; investigation, A.R.F. and H.L.; resources, A.R.F.; writing-original draft preparation, H.L.; writing-review and editing, A.R.F., H.L., J.M.W., S.W. and J.K.; funding acquisition, A.R.F. All authors have read and agreed to the published version of the manuscript. --- Conflicts of Interest: The authors declare no conflict of interest.
Introduction As highly social animals, humans experience better mental and physical health and cope with stressors better when they have access to social support [1]. One antecedent of social support is the extent to which those in supporting roles empathise with us [2]. The term 'empathy' is often used ambiguously in the scientific literature, but for the purposes of this work we take empathy to involve emotional, cognitive, and behaviourally expressive aspects, and to entail an observer perceiving another's affect and experiencing shared feeling [3]. Companion relationships with non-human animals (hereafter 'animal/s') have evolved over 15,000 and perhaps as long as 40,000 years [4]. They are reported to be positive for our mental and physical health [5,6]. This phenomenon is known as the 'pet effect' [7]. While there is debate as to the veracity of the positive effect of companion animals due to contrasting results [8], studies that consider attachment and social support theories suggest that non-human animals fulfil human needs for emotional support [9], even acting as substitutes for reduced human support networks [10]. However, the role of animal empathy towards humans in generating this social support has not been explicitly investigated. Attributing uniquely human capacities to non-human entities is considered anthropomorphic [11]. Despite a human tendency to anthropomorphize literally anything [12], the primary target remains animals [13]. While examination of the phenomenon of anthropomorphism is accelerating [14,15], being anthropomorphic is often considered unscientific and viewed negatively by the scientific community (see [16]), though anthropomorphism can have positive impacts on human-animal relationships. As a counterpoint, anthropocentrism has been defined as the interpretation of reality according to human values, needs, and experience, due to a belief structure where humans are primary amongst all species [11].
We can perceive our companion animals and their capacities through either lens, either affording them human-like capacities, perhaps beyond their physiological and cognitive abilities, or denying them such affordances based on a bias that views humans as exceptional. While there is mounting evidence for canine empathic abilities [17], the study of feline empathy lags far behind-for example, a major review of emotional contagion research in mammals included no references to studies in cats [18]. Regardless, owners retain beliefs that both cats and dogs can empathise with us [19]. As mutual caring, reciprocal support, and empathy moderate human relationships, it is possible that these same attributes play a role in the bond humans have with their companions. Hence, examining how we perceive and construct animal empathy experiences can generate valid and important information to aid our understanding of how animals provide social support-in particular, by revealing the extent to which people use anthropomorphic explanations for experiences of empathy from canine and feline companions. This study investigates the phenomenology of animal empathy by focussing on how humans construct sense-making narratives of animal empathy experiences. We hypothesized that anthropomorphic attributions would play a key role in these constructions. To elucidate a deep understanding of each participant's experience and draw interpretative meaning from them, a qualitative approach concerned with subjective experience, and in particular emotional responses, is essential. Therefore, the current study used the qualitative methodology of interpretative phenomenological analysis to gain insight into how participants identified and constructed a lived experience of animal empathy. 
--- Materials and Methods --- Theoretical Framework As the research question centred on how participants identified and understood their experiences of animal empathy, this study utilized the qualitative approach of interpretative phenomenological analysis (IPA), which focusses on participants' lived experiences and the meaning they make of them. This method facilitates the deep examination of experiential phenomena and is particularly beneficial for understanding how participants interpret and react emotionally to the experiences of interest. IPA is an inductive method and is the product of a joint elucidatory process in which, not only does the participant interpret their lived experience, but the analyst ultimately provides their account of what they think the participant is thinking, resulting in a 'double hermeneutic' [20] (p. 80). --- Participants Upon ethical approval (ER/KH447/1, University of Sussex), the participant sample was generated purposefully via social media advertising (Facebook) and word of mouth. Eligibility criteria were deliberately wide to promote participation, meaning any adult (over 18 years) who self-identified as having lived experience of an occasion when they believed their companion animal was empathic towards them was included. All participants who came forward were female, and experiences discussed were evenly split between dogs and cats. Two participants were residents of New Zealand, the remaining four in the United Kingdom, and as such all were English speaking and derived from broadly similar western cultural backgrounds. This study followed recommendations for IPA methodology to be applied to a sample size of one to six participants [20] (p. 51). Participants gave voluntary verbal consent before interviews took place. --- Interviews Semi-structured interviews were conducted by a single interviewer (author 1) following the established IPA methodology [20] (chpt. 4). 
This paper addresses themes arising from part B, questions 4, 5, and 6 of the interview schedule (Table 1). Themes arising from parts A and C are interpreted in future work. The italicised words are replaced by the actual name of animal during the interview. The interview schedule and interview technique were piloted with two unanalysed participants, after which suitable amendments were made. Each interview lasted approximately one hour, and all were conducted online via a video conferencing platform suitable for non-sensitive data (Zoom) between March and May 2021. Recordings were immediately downloaded to a secure university server and then deleted from the online platform. Interviews were recorded and transcribed automatically by the video conferencing platform, with transcriptions later checked against audio recording and manually corrected by the interviewer to ensure verbatim accuracy. --- Data Analysis Interviews were analysed sequentially by the interviewer, and recruitment terminated when themes reached saturation. Transcripts were first read, and re-read, to ensure familiarity with content, then exploratory comments were made line by line, which were categorised as descriptive, linguistic, or conceptual [20]. An interpretation was then conducted by the systematic coding of transcripts using proprietary software (NVIVO release 1.5) followed by clustering of evolving themes. Emerging themes were examined for divergence, convergence, repetition, and nuance, and this process was repeated for each transcript to uphold a commitment to each participant's meaning-making. Reflexivity was enhanced by cycling back over previously analysed materials in an inductive cycle to move the interpretation from the individual level to a gestalt understanding of relationships between themes. 
Generated themes and coded data were discussed in tandem with psychological knowledge from other authors throughout the analysis to test and develop the plausibility and coherence of the interpretative account. --- Results Two superordinate themes were identified. The first covered the context and identification of animal empathy experiences, while the second encompassed multiple themes and sub-themes concerned with how participants constructed their experiences. Sub-themes with interpretive commentary and illustrative extracts are presented below, with those concerning how animal empathy is constructed by guardians further interpreted through anthropomorphic, mixed, or anthropocentric lenses (Figure 1). Participants are anonymised and pseudonyms are used for animal names. --- Context and Identification of Animal Empathy Participants reported a variety of contexts where self-identified experiences of animal empathy took place. Some described empathic interactions in terms of entirely emotional support such as in situations of grief, loneliness, and stress, others in terms of physical support including protection and illness, while several participants described both emotional and physical support. In this extract, Participant E describes a period of grief after the death of a close friend and the emotional support role of Barney (dog) during that time: PE: But sometimes there'll be something [ ] and it takes my breath away, and it's almost like sometimes that he (Barney) picks up on that and will just come and lean on me or will come and flop next to me or something. And I do find it really comforting. In all cases participants identified a change from their cat or dog's normal behaviour as the indicator of an empathic interaction. 
In the following extract Participant F describes the actions of Henry (cat) during a period of convalescence: PF: Me and Henry would always lie [ ] we had a particular position that we always lay in, and I was on my back and she sat right on my womb, where I'd had this horrendous operation, and just sat there. So, it was not a position she'd normally sit in at all and if I tried to move, she'd hiss at me and she never hissed at me either... In this extract the narrator uses alteration in behaviour to identify that Henry is attempting to care for them bodily. This extract also shows that, in common with all interviews, Participant F used their animal's increasing physical proximity as an identifier of an empathic interaction, as does Participant C when speaking about Tukker (dog): PC: he'd seek you out and try to initiate contact if he could see you'd had a crap day, you know, come and put his head on my lap, he'd come wriggle up to me. In this extract Participant C also attributes Tukker with the ability to identify ('he could see') their emotional need. This is illustrative of the following thematic framework, which attempts to unpick the diversity of how animal empathy experiences are constructed and understood by participants. --- Constructions of Animal Empathy How participants understood what was going on inside their animal during their experience of animal empathy varied across a spectrum from highly anthropomorphic to highly anthropocentric, with some explanations involving a mix of both (Figure 1). Multiple explanations were used by each participant, with some participants expressing conflicting constructs within their reasoning. --- Anthropomorphic Constructions Many participants provided explanations that utilised human-like capacities to construct understanding of their experiences of animal empathy. --- Cognitive Attribution The most anthropomorphic explanations provided by participants were those attributing high levels of cogitation and intention to the animals involved, as shown in this extract from Participant A: PA: Bay was thinking 'mum's in trouble' or 'mum's getting hurt and I need to do something about it', [ ] like 'I have to protect mum' [ ] Here the participant ascribes to Bay (dog) not only an understanding of the context of what was happening (an incident of domestic violence) but also conscious thought and action intention. By giving Bay an internal 'voice', this participant also assumes that Bay categorizes their relationship in a familial way and sees Participant A as 'mum'. Participant F likewise affords Henry (cat) the ability to apply human-like cognition as they recovered from painful abdominal surgery: PF: it was exactly where it was hurting me, absolutely, and it's like she completely knew, and she was just like, 'just lie the fuck down, keep still, you're not well, I want you to recover'. And she sort of looked after me all through the next week, when I was in recovery from the operation. Here Participant F apportions conscious knowing to Henry, including what was wrong (pain, not well) and what needed to happen (lie down, keep still), and, as with the previous extract, also ascribes Henry an internal 'voice'. --- Exceptionalism Several participants expressed beliefs that their animal companion's exceptionality explained how they were able to empathize with humans.
Participant B describes the exceptional abilities of their cat as even allowing her to transcend species: PB: she's very, yeah, very unique [ ] I think that just makes her, just almost makes her human, though she's not human obviously, (inaudible), but it almost makes her slightly human in what she does so I think she is very special. [ ] because she does these things like that humans would do, and I think that's probably how I feel about her, more than other cats we've had because they just acted like normal cats. Here the anthropomorphic classification is made explicit alongside the elevation of this cat above others of its species. Several participants spoke of their animals as 'unique', both as a descriptor of their identity and in terms of their capacities being an extension of what a 'normal' animal could do. Similarly, Participant G singled out one cat in their household for the ability to pre-empt and warn them of oncoming seizures. PG: I think she's highly intelligent, and has managed, because she's highly intelligent to understand what her normal senses are telling her. This participant understood this cat's ability to predict seizures in terms of its exceptional intelligence, particularly in comparison to other cats, and indeed humans. This speaks to a view that, for animals to understand our internal states and to communicate this to us, skills beyond the capability of their conspecifics are required, sometimes conferring on these animals a human-like status. Against a backdrop of social norms that view many empathic and cognitive capacities as exclusively human, guardians may feel they have to separate their animal from the norm in order to explain how these animals have acted in these experiences. --- Mixed Constructions Some constructions mixed anthropocentric and anthropomorphic interpretations, as in the following sub-themes. --- Special Senses Most participants utilized some degree of folk rationale to explain their animal's behaviour.
These explanations centred around beliefs of animal knowing and the attribution of special, non-human senses: PB: Whether animals have got another, an extra sense we don't know [ ] I don't know what it is that they feel or can sense but there's obviously something. Because they seem to try and be more of a comfort to you for that little period of time [ ] you think 'why are they doing this,' but I think they must have some sort of sense that they, that you need help. Explaining animal empathy this way suggests that the animal's actions were so inexplicable by any other means that abilities unknown or unknowable to humans must be at play. This suggests that participants were sometimes reticent to attribute human empathic capacities, perhaps due to concerns over allegations of anthropomorphism. However, the attribution of special senses was sometimes considered superior to human empathy: PF: you can trust them to sort of know, and maybe have some kind of superior knowledge in certain situations, like okay, that's what she thinks, that's what should happen [ ] so I felt that cats did do that you know, they were capable of sort of targeted comfort, like knowing when you need something. Participant F emphasized their trust in Henry (cat) and based this on a belief that Henry had access to knowledge that wasn't available to humans. Considering special senses from this perspective could also be construed as highly anthropomorphic, in that Henry is in possession of 'more-than-human' capacities. --- Surprise/Expectation Participants often expressed surprise at the empathic actions of their animal. PF: it was amazing, because yeah, she was a cat! [ ] You're like wow! [ ] yeah, I was really surprised [ ]. But yeah, it was really, I really was amazed, I was really like wow, Henry you know you're doing here.
Here, Participant F's surprise can be interpreted as anthropocentric, in that non-human capacities are generally expected to be inferior, hence any display of capacity beyond an accepted animal norm is worthy of astonishment. While the previous extract illustrates wonder at animal knowing, other participants were more anthropomorphic and held expectations of their animal companions: PA: so no, I knew in the moment what it was, and I wasn't surprised, like I wasn't surprised at all yeah [ ]. No, no, not at all, and I feel like he'd do it again. He, if it happened again with someone else I can hundred percent guarantee he would do it again yeah. PB: Because they seem to try and be more of a comfort to you for that little period of time, which is quite, you know, you think 'why are they doing this' [ ] And I just grew up thinking all cats could do that, but then people tell me, no, no, no. This final extract illustrates the contradiction within this theme by simultaneously expressing surprise (questioning why the animal is providing comfort) and an expectation that this is just something cats can do. It also illustrates a tension, expressed by some participants, between believing in their animals' capacities and living in an anthropocentric culture, as discussed in the next theme. --- Anthropocentric Constructions Three sub-themes are interpreted as displaying commonality in anthropocentricity. These sub-themes illustrate a belief structure that sees humans as primary amongst species, and the participant explanations are generated through a human-centric lens. --- Proofs All participants repeatedly stated proofs to verify their attributions of empathy. This usually took the form of detailed and persistent explanations of the identifier of animal empathy: behaviour change.
PA: I got up to just use the bathroom that night, and usually Bay couldn't care less, he'll just keep sleeping, but this night he actually got up and came to the bathroom, which is very unusual. [ ] Bay just slept there with his head on my chest. He would usually start the night sleeping on my chest anyway, but he gets really hot and then he goes to my feet. That night, he was just like on my chest, the whole night. The deliberate caregiving described can be interpreted against a human-centric cultural backdrop whereby to make assertions of animal empathic capacities requires extensive and robust proof to protect from accusation of naive anthropomorphism. Participants were thus motivated to provide multiple proofs to show that their animal's behaviour was not merely chance or being misinterpreted. --- Physiological Explanation Some participants employed concepts of normal physiological functioning to construct their experiences. These explanations used the sensitivity of animal senses (smell, hearing, etc.) to interpret human internal states. These rationales were rooted in reality as opposed to more magical abilities discussed in the special senses theme. PA: [ ] even down to my heart rate or how my physiology is changing and Bay is just sensing it better than a human would sense it. Like, he can probably tell my heart rate's gone up, that ever so slight decibel of my voice has gone up, he can probably like, smell it from my, like, hormones coming off me because I'm stressed, [ ] I feel like they know what we're thinking, [ ] like a physiological way, and then they react accordingly, PG: Scientists say that the smell, is picking up some kind of smell I give out, that I can't smell, but because she's got far more receptors than a dog has [ ] that she can pick it up easier. 
Participant A had an educational background that informed their more detailed physiological explanation, while Participant G relied on folk knowledge, but both remain based in the reality of existing senses and physical functioning. Using physiological constructs to understand animal empathy behaviours is a more parsimonious explanation than those used in anthropomorphic or mixed reasonings and shows how the participants were at times wary of over-attributing empathic capacities to their non-human companions. --- Black Box The most anthropocentric explanations are grouped into a theme labelled black box, in reference to a historical view of non-human animals as simple, stimulus-response organisms with no or limited conscious intention to their actions. This view has its roots in Cartesian thought and became validated for a time through the work of Skinner and the behaviourist tradition. PE: I think that's a higher-level thinking than I imagine Barney could have. [ ] There's no thought behind it, it's just a spontaneous emotional reaction, when a dog is happy and he wags his tail, if he's nervous his tail goes down, if he's cross he barks, if he's frightened he growls [ ] So I think he can recognize your emotions in that very basic way, but I don't think my emotions would have an impact on how he was feeling. Here, Participant E also rejects the possibility of emotional contagion (the emotional state matching of one individual to another [18]) from humans to animals, as does Participant A: PA: dogs and cats in general, are not going to feel sad just because I'm feeling sad. I think they'll react to it, that is just my thinking [ ] but I don't think that just because their human is sad that they're just going to get sad, like I don't I don't think that's how they would work. Both participants, when musing on the internal workings of their companion animals, describe the empathic behaviours as spontaneous or reactionary, in essence, incognizant.
This interpretation follows the behaviourist tradition, which afforded no internal awareness to non-human animals. While these extracts demonstrate a most parsimonious and anthropocentric interpretation, the very same participants also expressed highly anthropomorphic readings of the same experiences (e.g., the Participant A extract in the cognitive attributions theme, where they describe Bay (dog) as consciously thinking that 'mum's in trouble'). Similarly, despite Participant E describing Barney (dog) as being unable to have higher-level thinking, they did attribute some cognitive abilities: PE: I do feel that he thinks he's looking after me. That he's, in that moment, keeping me, I don't mean safe physically, but just keeping me okay [ ] he's acknowledging that perhaps there's something wrong and his closeness is perhaps a comfort. yeah. Interviewer: Do you think he's choosing to do that, like he's making a choice to fulfil that function? PE: yeah, without a doubt, I think he recognizes it and he, yeah chooses or decides to just come and sit with me at that, at that point. Interviewer: Do you think he knows you're sad? PE: Yes, I don't know why, but yes, I think he does. While demographic data of participants were not expressly obtained in this study, it was apparent during the interviews that Participants A and E had educational backgrounds that included psychology, which may have informed their reticence to express unfettered anthropomorphic explanations of their experiences, resulting in their representation in opposing thematic constructs. --- Discussion Participants were consistent in reporting changes to their animal's normal behaviour as key to the identification of animal empathy experiences, yet they were highly paradoxical in their constructions of the internal drivers within the animal.
Dichotomous explanations ranged from highly anthropomorphic, where animal companions knew what their humans were thinking, feeling, and needed, to highly anthropocentric expressions of animals as little more than stimulus-response organisms. Furthermore, there was a combination of these extremes both within individual participant narratives and within some thematic constructs. The narratives also conformed to the social support theory of human-animal relationships. Devoldre et al. [2] describe two positive forms of social support, emotional and instrumental. Emotional support is that which assists the management of emotions, which all participants' narratives contained, whereas instrumental support is characterised by more problem-orientated help. Participants A, F, and G all described specific physical instrumental support provided by both cats and dogs. In contrast to the investigation of accuracy and functionality, there has been relatively limited exploration of the psychological basis of anthropomorphism [21], and while debate continues as to the accuracy versus erroneousness of anthropomorphic attributions in companion animals, that anthropomorphism is an intrinsic aspect of human nature is less controversial. Anthropomorphic thinking varies between people [22], and previous work has shown it to be a stable trait in individuals [23]. However, the findings of the current study suggest that variability can also exist within individuals, with seemingly incompatible views being held simultaneously. This finding may relate to evidence in developmental psychology, where a body of work shows that learners hold misconceptions about phenomena based on naïve theories gained from observation of the environment during their lives and go on to use multiple and sometimes contradictory explanations based on superficial reasoning to explain an event. Furthermore, acknowledging contradictions is avoided by modifying observations to defend previously held views [24].
The range of how participants constructed and understood animal empathy experiences may represent an inherent confusion as to what is really happening within their animals during perceived empathic encounters. Epley, Waytz, and Cacioppo [14] put forward a model of anthropomorphism that combines both motivational and cognitive aspects, and this provides a framework to account for and predict this variability. This model proposes three psychological factors: accessibility and applicability of knowledge about humans (elicited agent knowledge), motivation to explain and understand the behaviour of non-humans (effectance motivation), and the desire for social contact (sociality motivation). The elicited agent knowledge factor holds that the accessibility of knowledge about us as humans plays a central role in attributions to non-humans. As we have such immediate access to rich phenomenological information about what it is like to be ourselves, this forms a rapid and automatic basis for applying that knowledge to non-human agents. That anthropomorphic explanations featured in all our participant narratives conforms to this factor. Furthermore, this factor suggests that when internal knowledge is less accessible, it is less likely to be applied. This aspect may be seen in our data, where some participants extended beyond anthropomorphic descriptions and into a more-than-human realm of magical thinking, ascribing special senses to their animal companions. Perhaps, if the internal psychological mechanisms of empathy are difficult or inaccessible knowledge for some, that knowledge then becomes difficult to apply to animals. This may explain the resulting attribution of magical capacities to account for the unintelligible. Motivational factors in the model modulate the degree of anthropomorphism used by participants.
Sociality motivation relates to the desire to establish social connections and predicts that attributions to animals are increased in the absence of connections to other humans [14]. In the context of social support and empathy, this may be particularly relevant, as evidenced by several participants describing the context of their empathy experiences as times of loneliness and loss of close human companions. Effectance motivation is the motivation to reduce discomfort associated with uncertainty over the actions of non-humans, and to improve the prediction of their future behaviour, by providing anthropomorphic explanations for animal actions. As Nagel [25] would have it, we cannot know what it is like to be a bat, or indeed our companion cats and dogs, hence there is a motivation to interpret their behaviour rather than leave it unexplained. In this study, there is the added incentive to explain the animals' behaviour anthropomorphically, because doing so increases the emotional support provided by the encounters if the animals are believed to be empathic. After rapid application of elicited agent knowledge to provide anthropomorphic explanations, the model suggests there is post hoc correction to accommodate evidential knowledge of non-human capacities. Participants who reported expertise in psychology and science appeared to conform to this aspect of the model, as they were more careful to provide highly parsimonious explanations for their experiences, perhaps due to a greater understanding of the negative view of anthropomorphism as folk or naive reasoning. However, it was also these participants who displayed the most notable dichotomy in their narratives, perhaps illustrating a greater cognitive dissonance between the internal motivation to anthropomorphise and the cognitive desire to correct in light of their knowledge.
As anthropomorphism is driven by both motivational and cognitive determinants, the mixing of interpretations both thematically and within individual participants may represent the various ways participants combined and rationalised these competing methods of constructing their experiences. The three-factor model of anthropomorphism assists us in understanding some aspects of the participant narratives, but how might we understand the more anthropocentric themes? De-mentalisation is a strategy unconsciously used by people to alleviate the cognitive dissonance produced by what is known as the 'meat paradox': the inconsistency of loving some animals and eating others [26]. For example, humans tend to deny food animals the capacity to suffer more so than they do for companion species [27]. Perhaps providing anthropocentric explanations, particularly those that deny the animals' emotional repercussions or contagion from their owner's distress, is motivated by similarly extending deniability of their capacity to be negatively affected, thus assuaging any guilt owners may feel for using their animals for social support. An important theme uncovered in this study was that of exceptionalism. That some participants viewed their animals as exceptional in comparison to others shares commonality with the concept of subtyping of stereotypes in the human prejudice literature. Subtyping refers to the separation of members of a stereotyped group into a separate category because they violate rules of the stereotype [28]. As the exceptionalism theme emerged in these data, it suggests that stereotypes about dog and cat empathic abilities exist and, as the display of animal empathy was a violation, the stereotype is likely to lean toward the anthropocentric side of the spectrum. --- Limitations In comparison to other research methodologies, the sample size used in this work may appear both small and biased.
However, the purpose of this work is not to provide statistical or population-level generalisability; instead, in approaching the research question via an IPA method, our aim was to achieve theoretical generalisability and provide novel insights into the topic, which may then be taken forward via other research methods [29]. When using IPA, it is appropriate to purposively recruit a sample that is relatively homogeneous regarding the topic of interest, and due to the level of detail required in the analysis of phenomenology, small sample sizes are advised [20] (p. 49). This resulted in participants who were homogeneous not only with regard to their experiences, but also their gender, language (English), and western cultural background. These aspects must be taken into account when considering the insights generated in this work. In particular, as the construction of human-animal relationships is a semiotic process whereby the meanings generated from signals and experiences are hugely influenced by the culture of the participants, it would be interesting to investigate constructions of animal empathy experiences in participants of different cultural backgrounds. Researcher experience with qualitative interviewing techniques can impact the quality of such work, and while the lead researcher was new to this method, thorough piloting of the interview schedule, strict adherence to substantiated IPA data analysis protocols, and clear establishment of rapport with participants leave the authors confident that the resulting richness of the interview data provides relevant and compelling findings. A further recognised limitation is that some participants may have been reticent to express anthropomorphic views to a research scientist, perhaps skewing those participants towards more anthropocentric or parsimonious explanations.
In future work it is suggested that greater demographic detail be gathered, such as the timescale of the animal relationship and participant education level and background, and that blinding participants to the interviewer's scientific background be considered. --- Conclusions Themes identified in this study provide valuable and rich insight into how humans understand their companion animals. This research demonstrates that experiences of companion animal empathy can be powerful and meaningful for humans, but the inconsistent mixture of anthropomorphic and anthropocentric reasoning illustrates the confused nature of human understanding of animals' internal states. As increasing public knowledge of the burgeoning scientific evidence of animal capacities intersects with a long history of anthropodenial [30] and aspersion of anthropomorphism, this confused state may not quickly dissipate. However, as the ascribing of internal states-particularly emotions-to animals has important implications for their moral status [31], gaining understanding and insight into how humans construct animal empathy may hold applied value. For example, this knowledge could lead to more targeted education in areas where humans use companion animals for social support, such as in animal-assisted therapy and emotional support animals. --- Data Availability Statement: The data presented in this study are available on request from the corresponding author. --- Author Contributions: Conceptualization, K.M.H., K.M. and R.B.; methodology, primary analysis and writing-original draft preparation, K.M.H.; validation, writing-review and editing, and supervision, K.M. and R.B. All authors have read and agreed to the published version of the manuscript. Funding: This research was supported by a University of Sussex, School of Psychology PhD studentship. The APC was funded by the Sussex University PGR OA fund.
--- Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of Sussex (ER/KH447/1, 13 February 2021). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. --- Conflicts of Interest: The authors declare no conflict of interest.
Nursing homes (NHs) have experienced devastating impacts of the COVID-19 pandemic. Although less than 0.5% of the U.S. population live in NHs, NH residents accounted for as much as 40% of COVID-19 deaths at the height of the pandemic (Grabowski & Mor, 2020). The most frail and vulnerable NH residents, such as those with advanced dementia, are at highest risk of acquiring and dying from COVID-19 (Panagiotou et al., 2021). There are more than 860,000 Americans living with dementia in NHs, comprising 61.4% of the NH population. Moreover, 36.6% of NH residents have advanced dementia characterized by severe cognitive impairment and functional disability (U.S. Centers for Medicare and Medicaid Services, 2016). These residents with advanced dementia are unable to advocate for themselves, cannot reliably communicate symptoms, and are completely dependent on staff for all their care needs (Mitchell et al., 2009). Thus, they may be especially negatively affected by insufficient staffing, isolation, and deficiencies in care due to the COVID-19 pandemic, particularly when their family advocates cannot visit. Throughout the pandemic, NHs struggled with the urgent need to make compassionate and effective management decisions, maintain communication with family, and protect the safety of residents and staff (Grabowski & Mor, 2020). Prominent national media reported widespread chaos and burden in the NH setting, highlighting overwhelming infection and death rates among NH staff and residents (Berger, 2020; De Freytas-Tamura, 2020; Engelhart, 2021). This crisis also highlighted racial disparities in NH care, with more Black residents suffering with the virus (Abrams et al., 2020; Gebeloff et al., 2020; White et al., 2020a). Little is known about the intersecting challenges of COVID-19, advanced dementia, and disparities in NH care.
The Assessment of Disparities and Variation for Alzheimer's disease Nursing home Care at End of life (ADVANCE) National Institute on Aging-funded study was a large qualitative study that sought to better understand the drivers of well-documented regional and racial disparities in intensity of care provided to Black and White NH residents with advanced dementia (Hendricksen et al., 2021; Lopez et al., 2021, 2022; Rogers et al., 2021). ADVANCE used nationwide databases to purposefully select 14 NHs within four hospital referral regions (HRRs) across the United States with varied intensity of advanced dementia care quantified by feeding tube and hospital transfer rates in this population. ADVANCE-C was a supplement grant that leveraged the unique cohort of diverse NHs and research infrastructure from ADVANCE. The aim of this study was to explore NH staff and proxies' experiences caring for these residents with dementia during the pandemic across multiple domains. --- Method We used a qualitative descriptive study design (Sandelowski, 2000, 2010). This study was approved by Advarra Institutional Review Board. --- Facility Recruitment The methodology of the main ADVANCE study is described elsewhere (Lopez et al., 2021). Briefly, we used the 2016-2017 Minimum Data Set aggregated to the NH level to quantify intensity of care for this population based on feeding tube and hospital transfer rates among residents with advanced dementia. High- or low-intensity HRRs were defined as above or below the national median hospital transfer and tube-feeding rates for residents with advanced dementia, respectively. High- and low-intensity facilities were defined as above or below HRR median hospital transfer and tube-feeding rates for residents with advanced dementia, respectively.
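The median-split classification described above can be pictured in a few lines of code. This is only an illustrative sketch: the region names and rates are invented, and the rule that a unit must exceed both medians to be labeled high-intensity is an assumption for illustration (the exact combination rule is detailed in Lopez et al., 2021), not the ADVANCE implementation.

```python
# Hypothetical sketch of a median-split intensity classification.
# Region names and rates are invented, not ADVANCE data.
from statistics import median

# Tube-feeding and hospital-transfer rates aggregated to the
# hospital referral region (HRR) level.
hrr_rates = {
    "HRR-A": {"tube_feeding": 12.0, "hospital_transfer": 9.5},
    "HRR-B": {"tube_feeding": 3.1, "hospital_transfer": 4.2},
    "HRR-C": {"tube_feeding": 8.4, "hospital_transfer": 7.0},
    "HRR-D": {"tube_feeding": 2.0, "hospital_transfer": 3.3},
}

def classify(rates):
    """Label a unit 'high' if both of its rates exceed the medians."""
    tube_median = median(r["tube_feeding"] for r in rates.values())
    transfer_median = median(r["hospital_transfer"] for r in rates.values())
    return {
        name: "high"
        if (r["tube_feeding"] > tube_median
            and r["hospital_transfer"] > transfer_median)
        else "low"
        for name, r in rates.items()
    }

print(classify(hrr_rates))
```

The same split can then be applied within each HRR at the facility level, replacing the national medians with the HRR medians.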
From the HRRs included in ADVANCE, we selected one high-intensity HRR located in Georgia (GA) and one low-intensity HRR located in New York (NY), and aimed to recruit two high-intensity facilities and two low-intensity facilities within each HRR, along with the same staff and proxies who participated in ADVANCE. --- Participants We aimed to conduct semistructured interviews with staff in each facility from a range of disciplines, including directors of nursing, administrators, social workers, registered nurses (RNs), licensed practical nurses (LPNs), certified nursing assistants (CNAs), and medical providers (physicians, physician assistants [PAs], and nurse practitioners [NPs]). Staff eligibility criteria were as follows: age >21 years, able to communicate in English, and having cared for residents with advanced dementia for >2 months. NH administrators identified and scheduled interviews with staff at their convenience. We also aimed to recruit one Black and one White proxy from each facility; we defined a proxy as the individual listed as the designated decision-maker for a resident with advanced dementia. Administrators reached out to eligible proxies (age >21 years, able to communicate in English) of residents with severe cognitive impairment (Cognitive Functional Scale score of four; Thomas et al., 2017) and NH stay >100 days. The research team contacted proxies who agreed to participate and arranged interviews. --- Data Collection Data collection occurred from October 2020 through March 2021 during staggered 2-week periods at each facility. The data collection team included two Masters-prepared researchers (one Black and one White) trained in qualitative methods. Semistructured, digitally recorded, qualitative interviews were conducted via Zoom or telephone. Verbal consent was obtained from participants, and they were given a $25 gift card. Interview guides for staff and proxies comprised open-ended questions focused on "a priori" domains (see Supplemental Materials).
The five staff domains were as follows: decision making; organizational resources; care processes; vaccinations; and personal impact. The five proxy domains were as follows: connecting with residents; NH response to the crisis; communicating with NH; decision making; and personal impact of the pandemic. --- Data Analysis Recorded interviews were transcribed and checked for accuracy. Data were analyzed by four investigators with formal training in qualitative analyses, including two interviewers (M. Hendricksen, H. Akunor), using framework analysis methodology (Gale et al., 2013). Transcripts were coded independently by two analysts, and interrater reliability was assessed using the coding comparison query in NVivo version 12 (QSR International Pty Ltd., 2018). Discrepancies were discussed until consensus was reached. Analysis consisted of open, thematic, and matrix coding (Miles & Huberman, 1994). In open coding, raw data were grouped to create large, discrete themes initially guided by our "a priori" domains. In thematic coding, themes were identified and refined. In matrix coding, themes were displayed on two-dimensional matrices and compared across HRRs, NHs, and proxy racial groups. Evolving themes and results were discussed with the entire research team, which included qualitative methods and nursing home experts (R. Palan Lopez, S. L. Mitchell). --- Results Of the eight facilities in the two HRRs, five facilities agreed to participate: two high-intensity facilities (NY1 and GA1) and three low-intensity facilities (NY2, GA2, GA3). Characteristics of staff and proxy participants are shown in Tables 1 and 2. Staff interviews averaged 37 min (range, 20-56 min).
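The double-coding step described above can be illustrated with a standalone Cohen's kappa computation. In the study itself, agreement was assessed with NVivo's coding comparison query, so the function, codes, and data below are an invented analogy rather than the actual procedure.

```python
# Illustrative interrater-agreement check for double-coded transcript
# segments; the codes and labels are invented, not from ADVANCE-C.
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders' labels over the same segments."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    # Observed proportion of segments where the coders agree.
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement expected from each coder's label frequencies.
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

coder1 = ["decision_making", "care_process", "personal_impact",
          "care_process", "vaccination"]
coder2 = ["decision_making", "care_process", "personal_impact",
          "decision_making", "vaccination"]
print(round(cohens_kappa(coder1, coder2), 2))
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred over simple overlap when reporting double-coding reliability.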
Of the participating staff (N = 38), 11 identified as Black and 22 as White; they were from the following disciplines: administrators (N = 5), directors of nursing (N = 5), social workers (N = 5), nurses (RN/LPN; N = 10), CNAs (N = 7), prescribing providers (physician/PA/NP; N = 4), and other (N = 2; activities director, resident care coordinator). Proxy interviews averaged 27 min (range, 18-41 min). Of the participating proxies (N = 7), 42.8% identified as Black and 57.1% as White; they included spouses (N = 2), adult children (N = 3), and other relationships (N = 2). Although ADVANCE-C aimed to explore differences in staff findings by HRR and facility intensity, and proxy findings between Black and White proxies, matrix analyses did not show discernible patterns or differences along these parameters. Thus, results are described for staff and proxies in all facilities without reference to HRR or NH intensity of care or proxy race.

Table 1. Staff participant occupations by facility:

Occupation, N               | GA1 (N = 8) | GA2 (N = 8) | GA3 (N = 8) | NY1 (N = 8) | NY2
Administrator               | 1           | 1           | 1           | 1           | 1
Director of nursing         | 1           | 1           | 1           | 1           | 1
Nurse (RN/LPN)              | 2           | 2           | 2           | 2           | 2
Certified nursing assistant | 1           | 2           | 2           | 1           | 1
Social worker               | 0           | 1           | 1           | 1           | 1
Prescribing provider        | 2           | 1           | 1           | 1           | 0
Other                       | 1           | 0           | 0           | 1           | 0

--- Staff Experiences Decision making: "The family has to be able to trust us" Because residents with advanced dementia cannot communicate for themselves, staff frequently referred to the importance of having a connection with proxies when making decisions related to hospital transfers, advance directives, and care planning. Staff also recognized the importance of having the proxies' trust. This was especially relevant in discussions around advance directives and care planning. All facilities readdressed care plans and advance directives during the pandemic. Some facilities had processes in place to readdress advance care plans regularly; others implemented processes specifically for the pandemic.
Visitors were not allowed into the facilities, so proxies had to rely on and trust staff reports of the status of residents with advanced dementia, along with video calls with the resident, to make decisions. As an administrator (NY2) said: And so it's a whole lot more communication between because if the family come in and able to see their loved one it is much easier to portray the picture, um, advanced dementia, as well as the decline in the progression of the disease process. Versus now it's basically through the little Zoom visit... the family has to be able to have that trust in us that we speak on the, uh, what is best for the resident on behalf of the resident. A nurse practitioner (GA3) further described the challenges around Zoom visits and the importance of the proxy trusting the staff reports of the resident's status while having to make decisions about care for someone with advanced dementia like this:... you know... you just, uh, give them what you see... you have that sense of trust because you've been dealing with them for a while. And so they trust your observation, they do trust the staff as well. So it makes a huge difference... you know, and they do have the Zoom calls so they can see, you know... [but] she was sleeping the whole time, I could barely get her to wake up, she was rattling during the call. Um, and then they'd make the decisions based on that... and they make the decision on, you know, on whether to transfer the patient or not. --- Organizational resources: "We all came together" The major organizational resource available to staff was their ability to pull together as a team. All facilities reported experiencing grave staffing shortages throughout the pandemic, due to staff quitting out of concerns for their safety or their family's safety, unemployment offering higher reimbursement, or staff needing to quarantine during facility outbreaks: "there's times where we had over 56 staff members out at one time" (NY1).
Despite these shortages, all staff participants discussed how their team took an "all hands on deck" approach, utilizing nonclinical staff, such as administrators and social workers, to assist residents with advanced dementia with feeding and to provide direct care when needed. Many described how this brought their team closer together. For example, one social worker (NY2) mentioned:... you had directors and management and administration that were, you know, going to the floors and... working weekends and helping with feeding when we had staff out with COVID... I think it really brought everyone closer together. Another administrator (GA1) described: I really feel like that we have come together as a team... we've gotta do whatever we've gotta do to keep these residents safe.

Table 2. Proxy participant characteristics by facility:

Proxy characteristic            | GA1 (N = 2) | GA2 (N = 1) | GA3 (N = 2) | NY1 (N = 2) | NY2 (N = 0)
Mean age, years (SD)            | 77.0 (0)    | 79.0 (0)    | 65.0 (12.7) | 66.5 (3.5)  | -
Sex, N: Male                    | 1           | 0           | 0           | 0           | -
Sex, N: Female                  | 1           | 1           | 2           | 2           | -
Race, N: Black                  | 1           | 0           | 1           | 1           | -
Race, N: White                  | 1           | 1           | 1           | 1           | -
Relationship, N: Partner/spouse | 1           | 1           | 0           | 0           | -
Relationship, N: Adult child    | 0           | 0           | 1           | 2           | -
Relationship, N: Niece/cousin   | 1           | 0           | 1           | 0           | -
Notes: GA, Georgia; NY, New York; SD = standard deviation.

--- Care processes: "You have to become even more family" Staff reported difficulties with adaptations in care that did not benefit, and were even detrimental to, residents with advanced dementia. For example, all facilities reported using video calls, phone calls, window visits, and scheduled, socially distant outdoor visits to maintain connections between residents and families. However, staff noted that although these interactions were sometimes helpful for families, they were largely ineffective for residents. Residents with advanced dementia were unable to hear their families due to social distancing, recognize their faces due to masks, or receive their physical touch, a common way for families to communicate affection for these residents who cannot understand their surroundings.
One NP described (GA2): We had a number of family members that... stopped scheduling the calls just because there was no connection there... they basically were looking at a picture of their loved one... So we did have a number of family members that had just decided... that it was doing more... emotional harm to them personally than it was good for the resident. Staff also reported SARS-CoV-2 testing as especially challenging for residents with advanced dementia. Because these residents could not understand what was happening to them during nasopharyngeal swabs, it was very difficult for staff to test for the virus. Many residents with advanced dementia had to be restrained, and others were too combative to test regularly. The residents' suffering and anguish were emotionally distressing for staff. One social worker (GA2) described:... for this person to scream bloody murder, you're sticking something in their nose and they're confused, you know, that really hurt me to the point that I was emotional. Many staff talked about the significant impact of isolation on residents with advanced dementia. Staff reported that residents lacked the normal social cues to eat during meal times, which contributed to significant weight loss. They also perceived more rapid cognitive decline than usual. One administrator (NY2) mentioned that antipsychotic use increased as staff tried to manage worsening behaviors among their residents with dementia:... we were doing really doing great on the use of antipsychotics, that definitely went up... people definitely got more medication and more depressed during the pandemic. Staff also reported difficulty connecting with residents while having to wear masks, because residents could no longer see their faces and smiles. As this NP (NY1) described:... wearing a mask and a shield going into a room, I can't interact like I would or make someone feel comfortable with my smiles...
You know, you feel a barrier just because there is a barrier. Due to these particular challenges for residents with advanced dementia, staff reported needing to "become even more family" or a type of surrogate family member. Many staff felt it was even more important throughout the pandemic to spend more time with residents with advanced dementia because families were unable to visit. A nurse (GA2) said: At that time, you got to be a family member. Not only a caregiver, but you got to be a family member. Because their family members couldn't be there. A director of nursing (GA1) described: We've had to step up more and be more of a family to these residents... but because literally their family can't come here, we I think have all taken it personal to love 'em even more, care for 'em even more, show 'em the family love more. Staff also discussed how government guidelines negatively affected their care processes. One LPN (GA1) said keeping up with the changing guidelines "was a full-time job." Others felt the guidelines were important to keeping the virus from spreading. Some staff reported extreme frustration with the restrictiveness of government guidelines. As one CNA (GA3) expressed: They've made us imprison them, take away all their rights, put them in their room, not let them have any interaction with others except for the person that they're in the room with. For months, for almost a whole year now. --- Vaccinations: "It was chaotic" At the time of data collection, vaccines were not available in two facilities (NY2, GA3). In the others (NY1, GA1, GA2), many staff reported discomfort at being among the first group to be vaccinated. Almost all administrative and leadership teams were vaccinated, but many staff reported initial reluctance about getting a vaccine. As one social worker (NY1) stated, "People wanna see... the rest of us go through it first and see how it went... we are seeing more people want it now." 
Facilities partnered with pharmacies, but administrators reported the first vaccine rounds were "chaotic," "rushed," and "unorganized." One administrator (GA2) described disappointment with how delayed vaccines coincided with a facility outbreak of COVID-19: You know, how many lives could have been saved... had the [vaccine] came to us sooner as promised, or at least initially promised... the response plan from the federal and the state level [was] so uncoordinated. --- Personal impact: "I bear an enormous burden" All staff discussed the difficulty of caring for residents through the pandemic. They described the emotional toll of caring for dying residents with dementia, the burden of shifts lasting 12 hr or longer, and symptoms of burnout. As one CNA (GA3) described:... months and months of it. And just watching the decline... and people die. It just weighs on you... some days I'll leave here and... just cry on the way home. I feel terrible about it. I bear an enormous burden and a sense of guilt over it... because, you know, I can't do anything about it. Staff in the Georgia facilities reported faith as a source of comfort to help them cope with the stress. In one NH (GA3), a CNA said "I pray, I ask God to give me strength to make it through the day," while the social worker described having "daily conversations with God about, about [the stress of caring for residents], and how it is." Although many staff did not express concerns for their personal safety, others expressed concerns about bringing COVID-19 home to their loved ones, especially to vulnerable family members (medically ill, elderly, and very young). An LPN (GA1) said, "I didn't feel safe going home [after work] and then I didn't feel safe coming back here because I have children and grandchildren."
--- Proxy Experiences Connection with resident: "We'll be there as soon as they'll let us" Proxies reported NHs trying various ways to maintain connections with residents, including video calls, outdoor visits, and very limited, socially distant indoor visits. Although some proxies reported that these visits were helpful, many said not being able to see residents in person was extremely difficult, and that they were waiting for visitor restrictions to be lifted. Some proxies mentioned that time was limited with their loved ones because they were at the end of their lives, making the lack of visiting especially difficult. One proxy (NY1) described: I have to tell her that, "We'll be there as soon as they'll let us come in"... It's very hard, you know, she doesn't understand. And it's harder, I guess, because I know she doesn't have that much more time, that she'll know us, and we wasted a year because of COVID. --- NH response: "They did the very best they could" Most proxies overwhelmingly felt NHs did the best they could under the circumstances and empathized with NH staff. One proxy (GA1) described concerns not only for their loved one, but also for the health and well-being of staff:... they've done an excellent job, because they've only had one outbreak of COVID. [Resident] did not have COVID... it's hard for me not being able to go, but I understand the situation they're in and I know how contagious this is. However, proxies in one NH (GA3) described how mistrust of the facility staff, combined with the inability of the resident with advanced dementia to advocate for themselves, contributed to worry about whether their loved one was receiving the best care possible.... are they sincerely caring for her, are they jerking her around, and just because she can't communicate... some of the... people are in the wrong calling, if that makes any sense. I think you got to have that in your heart and your mind, to be a good caregiver.
--- Communication: "They always let us know" The majority of proxies were very satisfied with the availability and frequency of communication with frontline and administrative staff. One proxy said, "I talk to them every day. And they share with me, you know, everything... my communication with them could not be any better" (GA1). Proxies reported receiving phone calls, emails, and video meetings that included updates on their resident's condition as well as facility COVID updates. One proxy (NY1) described how phone calls from NH staff made them feel like they knew what was happening during the time they were not allowed to visit the facility: I think they've done a good job of, is just keeping me abreast of what is going on with her. Um, you know, all of the times she was tested for COVID. They would call and say, you know, "She tested negative."...I don't know, they just seem to check in often-so that, while I can't see her, I know what's going on. --- Decision making: "They know our wishes" Almost all proxies reported not having to make decisions around hospital transfers, care planning, or resuscitation because the resident's and proxy's preferences were already known and documented. A proxy in the South (GA3) described how having their resident's preferences on file from admission made them feel more confident about having to make a decision around hospitalization should the decision arise: "[Resident's hospitalization preferences] is in their file. They've got all of it. They know our wishes." One proxy (NY1) did hospitalize their loved one and described the decision as traumatic because the resident could not understand what was happening or advocate for herself: They sent her to the hospital and because she can't read, she can't write, she doesn't understand, and they're asking her questions, it was very, very traumatic... and the doctor... was trying to ask her questions because he wanted to give her an MRI, and she can't answer them. So it was very traumatic.
Personal impact: "It's been a rollercoaster" All proxies described feeling stressed and very emotional, "emotionally... it's been a rollercoaster," throughout the pandemic. Unable to see their loved ones whenever they chose, and bearing the toll of separation from their family support systems, proxies described video calls with the residents as heartbreaking: I do video calls and everything, but... if I sit here and say it wasn't difficult, even with me having the best experience, it's still... difficult and I appreciate you calling because I don't think I've expressed how I felt about this. (NY1) Another proxy (GA2) described the stress of worrying about their family member getting and --- Discussion This unique qualitative study sheds light on the experience of caring for residents with advanced dementia during the COVID-19 pandemic. Our findings highlight (a) the importance of developing dementia-specific policies and procedures for future crises, (b) the critical nature of communication to both quality of care and the experiences of family of NH residents with advanced dementia, and (c) the detrimental effect of social isolation on both residents and proxies. NH staff experienced ubiquitous challenges providing care for this vulnerable population regardless of region and facility intensity. Staff reported that common adaptations made for residents during the pandemic, such as window visits and video calls, were not effective in maintaining connections for residents with advanced dementia. However, technology played a critical role in maintaining frequent communication, via phone calls, video calls, and emails, for the decision makers of NH residents with advanced dementia. Proxies of residents with advanced dementia indicated that although facilities were doing their best to maintain personal connection, they felt especially isolated from their loved ones throughout the pandemic.
Staff and proxies stressed that the separation and isolation of NH residents with advanced dementia from their families due to infection control guidelines was detrimental to not only the health of residents, but also the well-being of the proxy. Both staff and proxies emphasized that mutual trust was critical for making decisions regarding residents' care during the pandemic. This report extends prior literature regarding NH staff experiences during the pandemic by providing a deeper understanding focused on the impact of caring for residents with advanced dementia. Similar to previous research, staff reported concerns of bringing the virus home to their families, and a deep sense of empathy and concern for residents in their care (White et al., 2020b, 2021; Panagiotou et al., 2021). However, staff encountered substantial challenges specific to advanced dementia, such as testing these residents for the virus, keeping their masks on, and keeping them isolated in their rooms. The inability of residents with advanced dementia to comprehend precautions precluded successful implementation of infection control protocols. Most staff reported that isolating residents with dementia was particularly challenging and had unforeseen outcomes. Consistent with national media reports (Healy et al., 2020; Wan, 2020), NH staff perceived more rapid decline in cognitive status and weight loss among residents with dementia due to the lack of social interaction. Taken together, the staff experience underscored the need for dementia-specific considerations for future NH emergency preparedness plans. The majority of proxies expressed satisfaction with NH communication, and the critical role communication played in instilling their trust in staff.
Contrary to media reports of families kept in the dark about their loved one's status in NHs during the pandemic (Shih Bion, 2020), our findings indicate proxies felt that NHs continually updated them on residents' status and any facility changes. The unique needs of proxies of NH residents with advanced dementia should be noted: understanding that these residents have limited life expectancy, proxies consistently reported distress about missing their loved ones' remaining days, adding to the emotional burden they carried through the pandemic. Limitations of the study merit comment. We used best methodological practices (triangulation, double-coding, team consensus) to mitigate biases found in qualitative analyses. Nevertheless, these findings are limited to participating NHs and individuals who consented to be interviewed, but do include the experiences of diverse NHs, staff, and proxies. Moreover, we may not have captured prevalent racial differences in experiences between Black and White proxies due to the small number of participants. Lastly, due to restrictions, we were unable to conduct any physical observations in participating facilities; therefore, our findings rely solely on staff reports of experiences in the NHs. This study provided a unique opportunity to understand experiences of NH staff and proxies of residents with advanced dementia during the COVID-19 pandemic in facilities in different regions of the United States with differing intensity of care. Staff consistently described the heavy emotional burden of caring for residents with advanced dementia and underscored the importance of considering the psychological consequences of the trauma they experienced throughout the pandemic. Overall, the findings suggest staff and proxies felt that facilities were doing the best they could with the resources available to them, with an all-hands-on-deck approach to providing care, especially for residents with advanced dementia.
While hospital staff and other frontline healthcare workers were touted as heroes, NH staff were often vilified in the media (De Freytas-Tamura, 2020; Rabin, 2021; White et al., 2021). In the wake of the pandemic, 2,405 staff and 153,445 NH residents have died from COVID-19 to date (U.S. Centers for Medicare and Medicaid Services, 2022). The pandemic shone a harsh light on critical flaws in the U.S. NH system and further exacerbated long-standing inequities affecting its most vulnerable residents, particularly those with advanced dementia. The pandemic also renewed calls for widespread system transformation and heightened focus on emergency preparedness for future public health emergencies (e.g., pandemic influenza, bioterrorism) and natural disasters (e.g., floods, hurricanes, earthquakes, wildfires). This report emphasizes the need for dementia-specific strategies to improve NH preparedness for future crises (National Academies of Sciences, Engineering, and Medicine, 2022). It further underscores the need for increased support for NH staff from policy-makers and clinicians, a demand that will surely continue following the pandemic (Grabowski & Mor, 2020). The challenges NH staff and proxies faced throughout the COVID-19 pandemic exacerbated the burden and stress they experience and are likely to contribute to continued staff shortages and increased rates of caregiver burnout in the future. It is critical that dementia-specific strategies strive to balance best practices to mitigate future crises while maintaining family connections and person-centered care for this vulnerable population. --- Supplementary Material Supplementary data are available at The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences online. --- Author Contributions M. Hendricksen participated in all aspects of the study including conducting interviews, data analysis and interpretation, and drafting the manuscript. S. L.
Mitchell designed the study, obtained funding, and contributed to the interpretation of findings and critical revision of the manuscript. R. P. Lopez designed the study, obtained funding, supervised data analysis, and contributed to interpretation of findings and critical revision of the manuscript. A. Roach contributed to data analysis and interpretation of findings. A. H. Rogers contributed to data analysis and interpretation of findings. H. Akunor conducted interviews and contributed to data analysis and interpretation of findings. E. P. McCarthy contributed to development of interview guides, supervised data collection, and contributed to interpretation of findings and critical revision of the manuscript. --- Conflict of Interest None declared. | Objectives: Assessment of Disparities and Variation for Alzheimer's disease Nursing home Care at End of life (ADVANCE) is a multisite qualitative study of regionally diverse Nursing homes (NHs; N = 14) providing varied intensity of advanced dementia care. ADVANCE-C explored the experiences of NH staff and proxies during the COVID-19 pandemic. Methods: Data collection occurred in five of the ADVANCE facilities located in Georgia (N = 3) and New York (N = 2). Semistructured qualitative interviews with NH staff (N = 38) and proxies of advanced dementia residents (N = 7) were conducted. Framework analyses explored five staff domains: care processes, decision making, organizational resources, vaccinations, and personal experience, and five proxy domains: connecting with residents, NH response, communicating with NH, decision making, and personal impact of the pandemic. Results: Staff mentioned difficulties implementing infection control policies specifically for advanced dementia residents. Staff reported trust between the facility and proxies as critical in making decisions during the pandemic. All staff participants spoke about "coming together" to address persistent staffing shortages. 
Proxies described their role as an "emotional rollercoaster," emphasizing how hard it was being separated from their loved ones. The accommodations made for NH residents were not beneficial for those with advanced dementia. The majority of proxies felt NH staff were doing their best and expressed deep appreciation for their care. Discussion: Caring for advanced dementia residents during the COVID-19 pandemic posed unique challenges for both staff and proxies. Strategies for similar future crises should strive to balance best practices to contain the virus while maintaining family connections and person-centered care. |
The practical side of transparency How can scientists increase the transparency of their work? To begin with, they could adopt open research practices such as study preregistration and data sharing [3][4][5]. Many journals, institutions and funders now encourage or require researchers to adopt these practices. Some scientific subfields have seen broad initiatives to promote transparency standards for reporting and summarizing research findings, such as START, SPIRIT, PRISMA, STROBE and CONSORT (see https://www.equatornetwork.org). A few journals ask authors to answer checklist questions about statistical and methodological practices (e.g., the Nature Life Sciences Reporting Summary). --- Transparency Checklist We provide a consensus-based, comprehensive transparency checklist that behavioural and social science researchers can use to improve and document the transparency of their research, especially for confirmatory work. The checklist reinforces the norm of transparency by identifying concrete actions that researchers can take to enhance transparency at all the major stages of the research process. Responses to the checklist items can be submitted along with a manuscript, providing reviewers, editors and, eventually, readers with critical information about the research process necessary to evaluate the robustness of a finding. Journals could adopt this checklist as a standard part of the submission process, thereby improving documentation of the transparency of the research that they publish. We developed the checklist contents using a preregistered 'reactive-Delphi' expert consensus process (ref. 10), with the goal of ensuring that the contents cover most of the elements relevant to transparency and accountability in behavioural research. The initial set of items was evaluated by 45 behavioural and social science journal editors-in-chief and associate editors, as well as 18 open-science advocates.
The Transparency Checklist was iteratively modified by deleting, adding and rewording items until a sufficiently high level of acceptability and consensus was reached and no strong counterarguments against individual items remained (for the selection of the participants and the details of the consensus procedure, see Supplementary Information). As a result, the checklist represents a consensus among these experts. The final version of the Transparency Checklist 1.0 contains 36 items that cover four components of a study: preregistration; methods; results and discussion; and data, code and materials availability. For each item, authors select the appropriate answer from prespecified options. It is important to emphasize that none of the responses on the checklist is a priori good or bad, and the transparency report provides researchers with the opportunity to explain their choices at the end of each section. In addition to the full checklist, we provide a shortened 12-item version (Fig. 1). By reducing the demands on researchers' time to a minimum, the shortened list may facilitate broader adoption, especially among journals that intend to promote transparency but are reluctant to ask authors to complete a 36-item list. We created online applications for the two checklists that allow users to complete the form and generate a report that they can submit with their manuscript and/or post to a public repository (Box 1). The checklist is subject to continual improvement, and users can always access the most current version on the checklist website; access to previous versions will be provided on a subpage. This checklist presents a consensus-based solution to a difficult task: identifying the most important steps needed for achieving transparent research in the social and behavioural sciences.
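The online applications described above collect checklist answers and render them as a report that can accompany a manuscript. A minimal sketch of that idea in Python; the item texts, section names, and report format here are entirely hypothetical and do not reproduce the official Transparency Checklist 1.0 wording or the app's actual output:

```python
# Hypothetical checklist items grouped by the four study components
# named in the paper; the item wording below is invented for illustration.
CHECKLIST = {
    "preregistration": [
        "Prior to analyzing the data, a time-stamped preregistration was posted.",
    ],
    "methods": [
        "The manuscript fully describes the rationale for the sample size.",
    ],
    "data_code_materials": [
        "All raw data are posted in a public repository.",
    ],
}

VALID_ANSWERS = {"Yes", "No", "N/A", "No response"}

def build_report(answers):
    """Render a plain-text transparency report, one section per component.

    `answers` maps item text to one of the prespecified options;
    unanswered items are reported as "No response".
    """
    lines = []
    for section, items in CHECKLIST.items():
        lines.append(section.upper())
        for item in items:
            answer = answers.get(item, "No response")
            if answer not in VALID_ANSWERS:
                raise ValueError(f"invalid answer {answer!r} for: {item}")
            lines.append(f"  [{answer}] {item}")
    return "\n".join(lines)

report = build_report({
    "Prior to analyzing the data, a time-stamped preregistration was posted.": "Yes",
    "All raw data are posted in a public repository.": "Yes",
})
print(report)
```

The report string could then be posted to a repository alongside the manuscript, as the paper suggests for the generated checklist reports.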
Although this checklist was developed for social and behavioural researchers who conduct and report confirmatory research on primary data, other research approaches and disciplines might find value in it and adapt it to their field's needs. We believe that consensus-based solutions and user-friendly tools are necessary to achieve meaningful change in scientific practice. While important topics may remain that the current version fails to cover, we trust that this version provides a useful starting point to facilitate transparency reporting. The checklist is subject to continual improvement, and we encourage researchers, funding agencies and journals to provide feedback and recommendations. We also encourage meta-researchers to assess the use of the checklist and its impact on the transparency of research. --- Data availability All anonymized raw and processed data as well as the survey materials are publicly shared on the Open Science Framework page of the project: https://osf.io/v5p2r/. Our methodology and data-analysis plan were preregistered before the project. The preregistration document can be accessed at: https://osf.io/ v5p2r/registrations. --- Author contributions --- Competing interests S.K. is Chief Editor of the journal Nature Human Behaviour. S.K. has recused herself from any aspect of decision-making on this manuscript and played no part in the assignment of this manuscript to in-house editors or peer reviewers. She was also separated and blinded from the editorial process from submission inception to decision. The other authors declared no competing interests. --- Additional information Supplementary information is available for this paper at https://doi.org/10.1038/s41562-019-0772-6.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. | We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository. |
Introduction By 2050, the world's population is projected to increase from its present level of 6.7 billion people to 9.2 billion people1. Such a population rise will strain community infrastructure as well as food supply, medical services, and general well-being. Governments have incorporated family planning as a component of their demographic control programs in order to offset the impacts of such a massive surge in population. According to the most recent United Nations figures, India had 1,342,463,457 people as of 2017, making it the second most populous nation. Given that India's median age is 26.9 years, a significant portion of women are of reproductive age. It is not unfair to state that women continue to pass away in our nation due to conditions that are almost unknown in industrialised nations2. The Indian subcontinent is home to 17.86% of all people on Earth3. In 2019, there were 1.9 billion women globally in the reproductive age range (15-49 years), and 1.1 billion of them needed family planning4. Contraception is recommended in order to meet the vital issue of rapid population expansion as well as the demands of both men and women in terms of reproductive health5. According to research, 222 million couples do not use any kind of contraception even in this day and age. This is mostly due to a lack of awareness and resources, as well as cultural taboos and attitudes around the usage of various forms of contraception6. This is reinforced by the finding that roughly sixty percent of women with unplanned pregnancies were not using any kind of contraception. Contraception also enables spacing between children (in years)7. Induced abortions are frequently the outcome of unplanned pregnancies, which presents a serious problem for young people's reproductive health in developing nations like India8.
Therefore, the current study intended to evaluate awareness, acceptance, and adoption of contraception among married females aged 18 years and older. --- Methodology The current study was done over a six-month period using a cross-sectional design. Participants were recruited based on predetermined criteria and attended/delivered at the Saidham Hospital affiliated to Dr. Mane Medical Foundation and Research Centre (DMMFARC), Maharashtra, India. Married women of reproductive age, 18 years or older, who were willing to participate were included; women who had chosen a permanent method of contraception or who had undergone a hysterectomy were excluded from the study. The sample size was calculated using a finding from the Bamniya J et al. study9 and the formula n = Z(1 - α/2)² × p × q / d², where p = 63% (the proportion of participants not using any method of contraception), q = 100 - p = 37%, d = 5% (absolute precision), α = 5%, and Z(1 - α/2) = 1.96. Substituting these values, n = (1.96)² × 63 × 37 / (5)² ≈ 359, so the calculated sample size was 359. To examine the feasibility, applicability, and validity of the questionnaire, a pilot study was carried out. For this initial test, a pre-made questionnaire was employed. The questionnaire was improved based on the feedback we got and the difficulties we ran into during the pilot trial. The pretested final version was then used for data gathering. The questionnaire had three parts, viz. Part 1: sociodemographic information; Part 2: history of obstetrics and gynaecology; Part 3: Knowledge, Attitude, and Practice questions. The accuracy of the data tools was verified, and data entry and coding were carried out in Microsoft Excel. To highlight key aspects, the raw data were organised, categorised, and presented in tabular and graphical format.
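The sample-size calculation above can be reproduced with a short Python sketch (the function name `cochran_sample_size` is illustrative; the inputs follow the paper: p = 63%, q = 37%, absolute precision d = 5%, Z = 1.96):

```python
import math

def cochran_sample_size(p_percent, d_percent, z=1.96):
    """Sample size for estimating a proportion:
    n = Z^2 * p * q / d^2, with p, q = 100 - p, and d all in percent."""
    q = 100.0 - p_percent
    n = (z ** 2) * p_percent * q / (d_percent ** 2)
    return math.ceil(n)  # round up to the next whole participant

# p = 63% non-users (from Bamniya J et al.), d = 5%, alpha = 5% (Z = 1.96)
print(cochran_sample_size(63, 5))  # 359, matching the paper
```

Rounding up rather than to the nearest integer is the usual convention, since the computed n (here 358.19) is a minimum.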
After confirming that all of the gathered questionnaires were complete, data coding and entry were carried out in Microsoft Excel. SPSS Software 21 was used to examine the data. The analysis employed descriptive and inferential methods such as percentage, mean, standard deviation, odds ratio, and the Chi-square test. P values under 0.05 were deemed significant for statistical analysis. --- Results Table 1 shows the demographic variables of study participants. Our study revealed that out of a total of 359 study participants, the majority, 115 (32.02%), belonged to the 21-25 age group, while 197 (54.87%) were housewives, 112 (31.19%) had higher secondary education, 201 (54.98%) belonged to the lower socioeconomic class and were BPL cardholders, 191 (52.20%) belonged to rural areas, and the Hindu population contributed 182 (50.69%). Furthermore, 165 (45.96%) study participants had one child, and 21 (5.84%) had a history of one abortion. Table 4 shows associations of demographic variables with contraception usage. Our study revealed that there was no statistically significant relationship between age and contraception usage, while occupation, level of education, socioeconomic class, area of residence, and religion showed statistically significant relationships with contraception usage. Furthermore, the number of children and history of abortion did not show any statistically significant associations with contraception usage. --- Discussion Our study found that occupation, level of education, socioeconomic class, area of residence, and religion showed statistically significant relationships with contraception usage, while the number of children and history of abortion did not show any statistically significant associations. The Gothwal M et al. study showed a significant association of age, marital status, and family size with usage of contraception10.
As per the Bamniya J et al. study, education of women, education of spouse, occupation of women, parity, and live births showed statistical significance with usage of contraception9. --- Conclusion The present study concludes that the majority of women of reproductive age still do not use contraceptives and their opinion is not taken into account. More similar studies are needed to ascertain the determinants of contraceptive use, and such knowledge can be used to formulate the specific health education needed for adoption of family planning methods. --- Compliance with ethical standards --- Disclosure of conflict of interest No conflict of interest to be disclosed. --- Statement of ethical approval Institutional Ethical Committee (IEC) permission was obtained before commencing the study. --- Statement of informed consent The aim and objectives of the present study were explained to participants in the vernacular language, and informed consent was obtained from all individual participants included in the study. | Introduction: According to the WHO, family planning is an approach to thinking and living that people and couples freely adopt in order to enhance their health and welfare based on their knowledge, attitude, and responsible choices. Each year, incorrect use of contraceptives or their failure to work as intended results in around one-third of unwanted births. The obstacles that exist in poorer nations include a lack of awareness about contraceptive techniques, the availability of supplies, their cost, or their inadequate accessibility. This was a cross-sectional study conducted to assess contraception awareness, acceptance, and adoption among women of reproductive age. Women who attended/delivered at the Saidham Hospital affiliated to Dr. Mane Medical Foundation and Research Centre (DMMFARC), Maharashtra, India were included. Face-to-face interviews were conducted while data were collected using a structured questionnaire.
Result: Out of a total of 359 women, the majority, 115 (32.02%), belonged to the 21-25 age group, while 197 (54.87%) were housewives, 112 (31.19%) had higher secondary education, 201 (54.98%) belonged to the lower socioeconomic class and were BPL cardholders, 191 (52.20%) belonged to rural areas, and the Hindu population contributed 182 (50.69%). Furthermore, 165 (45.96%) study participants had one child, and 21 (5.84%) had a history of one abortion. Occupation, level of education, socioeconomic class, area of residence, and religion showed statistically significant relationships with contraception usage. The present study concludes that the majority of women of reproductive age still do not use contraceptives and their opinion is not taken into account. More similar studies are needed to ascertain the determinants of contraceptive use, and such knowledge can be used to formulate the specific health education needed for adoption of family planning methods. |
Introduction Children who grow up in families where the parents have alcohol problems are at increased risk of several negative consequences, including poor school performance, poor mental health, and early onset alcohol use [1][2][3][4][5][6]. Parentification may also occur, where children assume adult roles even though they are not developmentally or emotionally ready [7]. The consequences are often long-term [8], and they augment the likelihood of other disorders, for instance, mental disorders such as major depression [9]. Furthermore, studies have demonstrated that when parents have alcohol problems, their offspring are at increased risk of alcohol-related hospitalization and mortality, including suicide [10,11]. Currently, international studies have estimated that the prevalence of children with parents who have alcohol problems is 4-29% [12][13][14][15][16][17]. The primary reason for this broad range is that parental alcohol problems are defined and assessed differently in different studies. For instance, some studies examined hazardous drinking among parents and others examined parental alcohol use disorder. Furthermore, some studies were based on self-reports from either the children or parents, and others were based on surveys, psychiatric interviews, or registry data. Drinking patterns vary across countries, and there may also be differences in how alcohol problems are defined. In Nordic countries, only a handful of scientifically determined estimates are available, and the estimated prevalence varies. A web-based survey distributed to Swedish youth, 16-19 years old, concluded that 20.1% of the sample had at least one parent with an alcohol problem [13]. In that study, perceived alcohol problems were assessed with the short version of the Children of Alcoholics Screening Test (CAST-6) [18].
Another survey, which was distributed to a nationally representative sample of Swedish adults, 17-84 years old, assessed alcohol problems with the Mini International Psychiatric Interview, derived from the Diagnostic and Statistical Manual of Mental Disorders, fourth edition. They concluded that 3.7% of children had at least one parent with a current alcohol use disorder [12]. Another study, based on Danish registry data, concluded that 4.5% of children had parents that had been hospitalized due to an alcohol-related illness [17]. A recent Danish study, based on 75,853 high-school and vocational school students, reported that 7.3% of the surveyed students perceived that they had at least one parent with alcohol problems [19]. A Norwegian study, based on reports from parents of teenagers, found that 15.6% of fathers and 4.7% of mothers were defined as individuals that misused alcohol [20]. However, these figures may not be generalizable to parents with younger children [21]. The scarcity of data on the prevalence of children who have parents with alcohol problems in Norway calls for further studies. Early adversity may have a negative impact on many aspects of life, including socioeconomic indicators, such as education, employment, and income [22]. However, to the best of our knowledge, no studies have explicitly investigated whether there exists a social gradient connected to parental alcohol problems in non-clinical populations. Moreover, although it is important to understand how widespread parental alcohol problems are, it would be valuable to have estimates based on the perceptions of the children or adult children. Therefore, this study aimed to estimate the prevalence of parental alcohol problems during childhood in a general population of Norwegian adults, and to investigate associations between parental alcohol problems during childhood and lower socioeconomic status in adulthood. 
--- Materials and Methods This cross-sectional study included a random sample of 75,191 individuals, aged 18 years or older, that resided in the region of Agder (30 municipalities in southern Norway). The sample was drawn from the Norwegian Population Registry, and e-mails or telephone numbers were obtained from the contact registry of the Agency for Public Management and eGovernment (Difi). Individuals who had declined to participate in surveys, individuals registered as deceased, those with unverified contact information, and those with an address outside the region were removed. Thus, in 2019, 61,611 inhabitants were invited to participate in the Norwegian Counties Public Health Survey. The respondents participated by completing a questionnaire online. The questionnaire included questions related to health, well-being, childhood, living conditions, local environments, accidents, and injuries. Participants gave online consent to participate when they answered the survey questions, and provided their age and sex to confirm their identity. Of the 61,611 invited individuals, 28,047 completed the questionnaire; the response rate was 45.5%. --- Ethics Informed consent was obtained from all subjects involved in the study. All personal identification variables were removed before the researchers obtained the dataset. Data were handled in compliance with applicable personal data protection regulations. The Norwegian Institute of Public Health (Oslo) is responsible for the health survey. The survey was approved by the Norwegian Data Inspectorate, and it adhered to the regulations of the Personal Health Data Filing System Act. In addition, a Data Protection Impact Assessment was performed by the Norwegian Institute of Public Health. Ethical approval for the current study was obtained from The National Committees for Research Ethics in Norway (REK) (file number 162353), and from the Faculty Ethics Committee at the University of Agder. 
--- Measures The questions, response categories, and definitions used in the survey are shown in Table 1.

Education: ... Vocational training/middle school/upper secondary/junior college; 3. University/college ≤4 years; 4. University/college >4 years. Coding: 1 = low education; 2 = intermediate education; 3 and 4 = higher education.

Financial capability: "For one-person households, consider your total income. If you live with others, consider the total income of everyone in the household. How easy or difficult is it for you to make ends meet day to day with this income?" 1. Very difficult; 2. Difficult; 3. Relatively difficult; 4. Relatively easy; 5. Easy; 6. Very easy; 7. Do not know. Coding: 1-3 = low economic capability vs. 4-7 = middle/high economic capability.

Employment status: "What is your current status concerning employment etc.? (Select as many as applicable.)" 1. Full-time; 2. Part-time; 3. Self-employed; 4. On sick leave; 5. Unemployed; 6. Receiving disability pension/work assessment allowance; 7. Receiving social assistance benefits; 8. In retirement/early retirement; 9. Pupil/student; 10. Undertaking national/alternative civilian service; 11. Homemaker. Coding: 1 = ≥32 h/week vs. not; 2 = <32 h/week vs. not; 3 = self-employed vs. not; 4 = on sick leave vs. not; 6 and 7 = receiving welfare benefits vs. not.

The six-item CAST-6 instrument (Table 1) was used to estimate perceived parental alcohol problems [18]. Respondents could answer yes = 1 or no = 0 to each question, and the total score ranged from 0 to 6. The CAST-6 demonstrated high internal consistency (α = 0.86-0.92) and concurrent validity (r = 0.93), compared to the original 30-item CAST for adults [18,23,24]. Moreover, it showed good (r = 0.78) to excellent (r = 0.94, ICC = 0.93) test-retest reliability for both adults and adolescents [23][24][25]. In the present study, the scale showed excellent reliability (α = 0.91). Two alternative cut-off scores are commonly used with the CAST-6.
One cut-off score is more inclusive (2 points) and the other is more conservative (3 points) [18,24,26,27]. The more conservative cut-off score was used in the current study. Data on socioeconomic factors were collected with questions related to education, economic capability, employment status, and whether respondents received welfare benefits (disability pension/work assessment allowance/social assistance benefits). Participants' age and sex were provided through the national population registry. In addition, participants were asked about their marital status. --- Statistical Analysis Data were analysed with SPSS version 25 (SPSS Inc., Chicago, IL, USA). Descriptive statistics for the overall sample were estimated for key demographic and socioeconomic variables. Pearson's χ² analyses were performed to evaluate associations between the overall distribution of parental alcohol problems and the demographic and socioeconomic variables. Multivariable logistic regression was performed to investigate the association between parental alcohol problems and measures of low socioeconomic status, adjusted for age and sex. Results are expressed as odds ratios (OR) with 95% confidence intervals (95% CI). A p-value < 0.05 was considered statistically significant. --- Results Descriptive characteristics of the sample are provided in Table 2. * Multiple responses could be selected; education level and employment status are defined in Table 1. Table 3 shows that, overall, 15.6% of the respondents had experienced problematic alcohol use among their parents during childhood. This experience was significantly more prevalent among females (17.5%) than among males (13.4%; p < 0.001). The proportion of individuals who reported experiences of problematic parental alcohol use varied among different age groups. The lowest prevalence was observed for respondents aged 67 years or older.
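The CAST-6 scoring and cut-off rules described in the Measures section (six yes/no items scored 1/0, total 0-6, inclusive cut-off 2 vs. conservative cut-off 3) can be sketched in Python; the example answer patterns are hypothetical:

```python
def cast6_score(answers):
    """Total CAST-6 score: six yes/no items, yes = 1, no = 0 (range 0-6)."""
    if len(answers) != 6:
        raise ValueError("CAST-6 has exactly six items")
    return sum(1 for a in answers if a)

def parental_alcohol_problem(answers, cutoff=3):
    """Classify perceived parental alcohol problems.

    cutoff=3 is the conservative threshold used in the study;
    cutoff=2 is the more inclusive alternative."""
    return cast6_score(answers) >= cutoff

answers = [True, True, False, True, False, False]   # score = 3 (hypothetical)
print(parental_alcohol_problem(answers))            # True under the conservative cut-off
print(parental_alcohol_problem(answers, cutoff=2))  # True under the inclusive cut-off
```

A score of exactly 2 illustrates how the two thresholds diverge: it is classified as a problem under the inclusive rule but not under the conservative one.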
Moreover, this experience was less common among respondents that were married or had a registered partner, compared to those with another relationship status. We also observed a consistent social gradient in associations between parental alcohol problems and various socioeconomic variables. Parental alcohol problems were more prevalent among those with a lower education level, compared to those with intermediate or high education levels; among those with low economic capability, compared to those with middle/high economic capability; among those on sick leave, compared to those not on sick leave; and among those who received welfare benefits, compared to those who did not receive welfare benefits. Results from the multivariable logistic regression are displayed in Figure 1. Findings revealed consistent associations between parental alcohol problems and all measures of low socioeconomic status. The strongest association was found between parental alcohol problems and the need for welfare benefits (OR: 1.89, 95% CI: 1.72-2.06; p < 0.001). Other forms of marginalization within the work force, such as being on sick leave or being unemployed, were also associated with parental alcohol problems (OR: 1.42, 95% CI: 1.21-1.69; p < 0.001; and OR: 1.54, 95% CI: 1.47-1.72; p < 0.001, respectively). The experience of parental alcohol problems was also significantly associated with no college/university education (OR: 1.33, 95% CI: 1.25-1.42, p < 0.001).
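For illustration, an odds ratio with a Wald 95% confidence interval can be computed from a 2×2 table as below. Note two caveats: the ORs reported above are adjusted estimates from a multivariable logistic regression (age and sex as covariates), whereas this sketch shows the unadjusted 2×2 calculation, and the cell counts are invented for the example since the paper does not report raw counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:

                outcome+  outcome-
    exposed        a         b
    unexposed      c         d

    OR = (a*d)/(b*c); SE of ln(OR) = sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: parental alcohol problems (outcome) by welfare-benefit
# receipt (exposure); these numbers are NOT from the paper.
or_, lo, hi = odds_ratio_ci(150, 450, 100, 550)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```

In a regression setting, the same quantities come from exponentiating the fitted coefficient and the endpoints of its confidence interval, which is how adjusted ORs such as those in Figure 1 are obtained.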
--- Discussion We found that, among an adult Norwegian sample randomly drawn from the general population, 15.6% had experienced problematic parental alcohol use during childhood. To the best of our knowledge, no previous studies have estimated the prevalence of parental alcohol problems in the Nordic context based on a broad age range of adult offspring from the general population. A previous Norwegian study analysed self-reported problems with alcohol use among parents of teenagers. They found that 15.6% of the fathers had alcohol problems (scores ≥2 with the CAGE screening instrument), but the proportion of mothers in this category was significantly lower (4.7%). In our study, we did not group individuals based on parental sex; thus, our finding that 15.6% of parents had problematic drinking behaviours included fathers, mothers, or both. Other international estimates of the prevalence of parental alcohol problems have varied greatly (4-29%) [12][13][14][15][16][17]. This variation might partly be explained by differences in the samples and measures used in different studies. A Swedish study included adolescents aged 16-19 years, and also used the CAST-6. They found that the prevalence of respondents that reported perceived parental alcohol problems was 20.1% [13], which was somewhat higher than our estimate. This difference might be explained by the difference in respondents' age between studies. Our results indicated that the oldest age group (aged 67+ years) was least likely to report parental alcohol problems. This result could be explained by several factors. First, the questions were retrospective in nature, and recall bias could be a prominent issue [28]. Second, it has been shown that adverse childhood experiences, such as parental alcohol problems, were associated with impaired health [29] and elevated mortality [10]; therefore, the oldest respondents who experienced problematic parental alcohol problems could have been underrepresented.
Third, alcohol consumption among Norwegian adults increased after the second world war [30]; thus, it is plausible that the prevalence of parental alcohol problems was, in fact, relatively low during the era that the oldest participants grew up. We also found that parental alcohol problems were reported slightly more frequently by females than by males. Although this result was puzzling, other studies have shown similar findings [24,25]. Havey and Dodd [25] have suggested that females, compared to males, may be more sensitised toward substance use and related issues within the family, and that they also may be more prone to express concern about a family situation in a self-reported questionnaire. Overall, our findings showed that perceived parental alcohol problems were most prevalent among socioeconomically disadvantaged groups (i.e., individuals with low education levels, low economic capability, or a need for welfare benefits). The largest proportion of respondents that experienced parental alcohol problems comprised those who received a disability pension, work assessment allowance, or social assistance benefits (welfare benefits). In this group of respondents, 25% experienced parental alcohol problems during childhood. This finding remained significant after adjusting for age and sex in the multivariable analyses. Although alcohol consumption in Norway was found to be highest among adults with a high education level [31], we found that the respondents' childhood experiences of problematic parental alcohol use were inversely associated with the respondents' education level.
Other studies have also found socioeconomic inequalities in the distribution of individuals that experienced alcohol-related harm [32]. Although we lack studies that have specifically addressed socioeconomic differences in the distribution of individuals with parental alcohol problems, other studies have shown that adverse life experiences are socially patterned in childhood [33]. Therefore, the social gradient that we observed among our adult respondents could be related to the socioeconomic disadvantage present in childhood. However, adverse childhood experiences can also reduce educational attainment; indeed, Houtepen et al. [34] found that this relationship remained significant after controlling for family socioeconomic variables. Possible explanations of these relationships are likely complex. Exposure to chronic stress may induce changes in the developing brain and impact a range of important functions that interfere with learning and other skills needed to succeed in education or the workplace [35]. Childhood adversities such as parental alcohol problems could also increase health risk behaviours, physical and mental health problems, and developmental disruptions [36] which may also contribute to economic marginalisation. --- Study Strengths and Limitations This study has expanded existing knowledge by contributing estimates of perceived parental alcohol problems, based on reports from a large adult sample of 28,047 individuals drawn randomly from the general population. Our outcome was based on the CAST-6, which is a validated instrument [24]. Item four of the CAST-6 presumes the presence of two parents, which could influence the score for respondents who grew up in single-parent families. Sensitivity analyses excluding this item did not alter the findings significantly. Our findings shed light on the socioeconomic patterns associated with the prevalence of parental alcohol problems, which were rarely studied in previous research. 
This study was limited by its retrospective design. Moreover, responses could be prone to recall bias and the risk of measurement error [28]. The validity of retrospective assessments of childhood experiences has been debated; however, a comparison between retrospective reports and prospective results did not reveal a bias in the retrospective assessment of difficult childhood experiences [37]. Additionally, caution regarding the generalizability of the findings is necessary due to possible non-response bias. Finally, this study was based on cross-sectional data; therefore, the results should be interpreted with caution when considering causality. --- Implications for Practice The CAST-6 was not designed to identify diagnostic criteria; instead, it identifies individual perceptions of problematic parental alcohol use. Previous studies that investigated adverse outcomes related to parental non-dependent alcohol use had mainly focused on offspring substance use [38]. However, several studies have identified other negative outcomes related to parental non-dependent drinking patterns [39][40][41]. These dysfunctional patterns often continue into the next generation. To break the patterns, early support interventions should be available. In practice, however, such support is available to varying degrees. For instance, in Sweden, the vast majority of municipal social services provide support to children growing up with parental substance use problems, most often in the form of individual counselling or support groups but, at the same time, support only reaches a small proportion of the targeted children [42]. Several organizations identify children in need and offer support, including the adult substance-use treatment services, psychiatric care, and social services. However, studies have shown that, in most cases, those organizations did not determine whether the clients had children [43].
The situation in Norway appears to be similar: only about one fifth of the professionals working in substance use treatment facilities offered support to their clients' children, and about half of the professionals never assessed whether the clients had children [44]. One obvious arena to identify children in need of support is the school setting. Because these children are often overlooked, schools could adopt policy documents and action plans and inform and train their staff about this vulnerable group. Previous research has shown that policy documents increase the likelihood that school staff receive training on this issue, which in turn increases the likelihood that these vulnerable children are identified in the school setting [45]. Digital interventions represent a promising approach for increasing the availability of support. However, only a small number of digital interventions targeting this group of individuals are currently being tested [46][47][48][49]. For instance, in Sweden, an online chat group program has been developed [47], based on a Dutch program [48]. The program consists of eight weekly sessions, each 60-90 min long, focusing on themes such as 'your role in the family', 'social networks', and 'substance use, tolerance, and heredity'. Each session is moderated by a trained counsellor. The program is currently being evaluated but has the potential to reach a large number of adolescents and young adults. --- Conclusions This study showed that one in six adults reported problematic parental alcohol use and, among disadvantaged sub-groups, this prevalence increased to one in four. It is imperative to make both universal and selective prevention interventions available at an earlier age if we expect to break family patterns of problematic alcohol consumption. In addition, we need better methods for early detection, for instance by identifying burdened children when parents are in contact with general or more specialized health care [43,50].
Furthermore, we should ensure proper support and follow-up for these children and their families. --- Data Availability Statement: Restrictions apply to the availability of these data. Data were obtained from the Norwegian Institute of Public Health (NIPH) and are available at https://helsedata.no/en (accessed on 27 April 2021) with the permission of NIPH. --- Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. --- Conflicts of Interest: The authors declare no conflict of interest. --- Abstract The aim of the study presented here was to estimate the prevalence of parental alcohol problems during childhood in a general population of Norwegian adults, and to investigate associations between parental alcohol problems during childhood and lower socioeconomic status in adulthood. This cross-sectional study recruited 28,047 adults (≥18 years) to an online health survey (Norwegian Counties Public Health Surveys). We evaluated demographic and socioeconomic measures and responses to a shortened version of the Children of Alcoholics Screening Test (CAST-6) scale to assess whether respondents perceived parental alcohol consumption during childhood as problematic. Respondents reported parental alcohol problems at a rate of 15.6%, but the experience was more prevalent among adults with a low education level (20.0%), compared to those with an intermediate (16.4%) or high (13.8%) education level (χ2(2) = 87.486, p < 0.001), and it was more common among respondents with low economic capabilities (21.1%) compared to those with middle/high capabilities (14.2%, χ2(1) = 162.089, p < 0.001). Parental alcohol problems were most prevalent among respondents who received welfare benefits (24.5%). Multivariable logistic regression analyses revealed associations between parental alcohol problems and low socioeconomic status in adulthood; odds ratios (95% confidence intervals) ranged from 1.33 (1.25-1.42) to 1.89 (1.72-2.06).
From a public health perspective, children who grow up with parental alcohol problems should be reached through both universal and selective interventions.
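The education gradient in the abstract above is tested with a Pearson χ2 test of independence on a 2 × 3 table (parental alcohol problems by education level). As a minimal sketch of how such a statistic is computed, the counts below are hypothetical, chosen only to reproduce the reported prevalences (20.0%, 16.4%, 13.8%) at an assumed sample size; they are not the study's data.

```python
# Hypothetical counts (assumption: chosen only to mirror the reported prevalences).
# Rows: parental alcohol problems (yes, no); columns: education (low, intermediate, high).
observed = [[400, 820, 690],
            [1600, 4180, 4310]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# where E = row_total * col_total / n under independence.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(len(observed))
    for j in range(len(observed[0]))
)
dof = (len(observed) - 1) * (len(observed[0]) - 1)  # (2-1) * (3-1) = 2
```

With a larger or smaller assumed sample, the χ2 value changes even though the proportions do not, which is why the paper reports both the statistic and the prevalences.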
Background Breast cancer is a prevalent chronic disease affecting millions of women, with recent statistics indicating that approximately 1 in 8 women in the United States will be diagnosed in their lifetime. 1 For women with hormone receptor-positive (HR+) breast cancer, adjuvant endocrine therapies (AETs) are often needed for several years as prophylaxis against cancer recurrence following primary treatment. 2 Previous studies have mostly focused on addressing the social support needs of these women during initial diagnosis and primary treatment phases. 3 The adjuvant phase is much longer than the diagnosis/treatment phase and still understudied. Completion of primary treatment coincides with a sudden decrease in healthcare encounters, from several times per month during active treatment to once every three to six months during the adjuvant phase. 4 This decrease results in patients having fewer occasions to obtain support from their healthcare teams. 5 Moreover, patients tend to underutilize their support networks and report receiving less social support from their friends and family a year following primary treatment. 6 Thus, for many women, the start of AET marks the beginning of a decline in social support, which may create new unmet social support needs. Several studies show a positive association between social support and clinical health outcomes, such as medication adherence and mortality. [7][8][9][10][11][12][13] For women on AET, unmet social support needs are also associated with increased symptom burden, 14 higher leukocyte pro-inflammatory and pro-metastatic gene expression, 15 increased depressive symptoms, 16 and lower overall quality of life. 17 Specifically, social support from other survivors, referred to as experiential support, has been shown to improve a patient's ability to appraise her breast cancer experiences while also reducing feelings of isolation and promoting optimism for the future. 
18 While this type of support can be facilitated via formal support groups, less than 12% of women with breast cancer regularly attend formal meetings. 19 Accordingly, scholars and clinicians have called for examining new ways to improve access to social support, particularly experiential support, for women with breast cancer. 20 Understanding racial differences in social support is especially important for regions with significant disparities, such as Memphis, Tennessee. The Memphis metropolitan region has high breast cancer mortality relative to other cities of its size 21 and significant Black-White disparities. 22 Black women in Memphis are more than twice as likely to die from breast cancer as White women. 23 Still, little is known about racial differences in social support and how they may contribute to these well-documented racial disparities. We conducted four focus groups (FGs) with women diagnosed with early stage HR+ breast cancer taking AETs to explore social support needs among Black and White women with breast cancer following primary treatment. --- Methods --- Participants Participants were recruited from the West Cancer Center (WCC), a comprehensive oncology center providing a network of fully integrated cancer care that serves patients in the U.S. Mid-South. We recruited WCC patients who met the following criteria: women who were 18 years and older, diagnosed with early stage (I-III) HR+ breast cancer, and prescribed AET medication (e.g., tamoxifen or aromatase inhibitor). --- Procedures Following approval by the Institutional Review Board at the University of Tennessee Health Science Center (IRB # 17-05479-XP IAA), a WCC research nurse (TJ) reviewed electronic health records to identify women who met our eligibility criteria using purposive sampling. TJ confirmed participant eligibility and provided an overview of the study and topics to be discussed during one-time FGs. 
Four 90-minute FG interviews were conducted between December 2017 and January 2018. Before starting each group, informed consent was obtained from each participant, who also completed a survey assessing demographic and medical characteristics. 24 These groups were stratified by race (i.e., Black and White) and length of AET treatment (i.e., less vs. more than 6 months of AET use; see Appendix A). Each FG participant was compensated with a $40 gift card. Race- and gender-concordant moderators, a clinical psychologist (RK) and a health communication scholar (JNA), led the FGs. Both moderators completed formalized FG training and have extensive backgrounds in facilitating FGs. A semi-structured moderator guide containing questions and prompts was drafted by JNA, which was reviewed and edited by the study team until a consensus was reached regarding content and phrasing. For consistency across FGs, moderators asked questions in the guide word-for-word, and follow-up probes were asked and clarifications were provided as needed. For instance, some interview questions included, "What do you think the team should know about women's physical, mental, emotional, spiritual, and social support needs when taking your hormone therapy?" and "What recommendations would you make to the team?" In order to ensure the accuracy of perspectives and to increase validity, moderators employed the strategy of member checking 25 by periodically summarizing participants' comments throughout the FGs (e.g., "So what you're saying is... Is that right?"). To reduce bias toward perspectives of more loquacious participants, moderators identified participants who were contributing less frequently and encouraged them to offer their perspectives (e.g., "I feel like I haven't heard from this side of the room. Anything to add?"). Analysis-We audio-recorded FG interviews and transcribed them verbatim to obtain accurate data using a modified version of Silverman's transcription conventions.
26 First, FG interview transcripts were analyzed separately by group. Two authors (JNA and CG) conducted line-by-line coding for each transcript. These authors used the qualitative strategy of constant comparison 27,28 to identify emergent themes from the raw data. After these initial themes were identified, two authors (AP and IG) performed additional coding and analysis to assess race-based differences in social support. Inter-rater reliability between AP and IG via percent agreement was calculated to be 89.6%, with Cohen's κ = 0.87. Discrepant coding was resolved by a third party (JNA). Another author (RK) conducted an independent review of the final codebook and qualitative analyses. --- Results Table 1 in Appendix A describes FG participants' demographic and medical characteristics. Average age was 64 years, 48% were not married, and 19% had a 4-year college degree or higher. The majority of participants (86%) were prescribed the AET medication Anastrozole, and 48% reported not being fully adherent. Across the FGs, participants identified family and friends as key sources of informational and emotional support from their initial breast cancer diagnosis through the adjuvant treatment phase. Importantly, the FG modality itself served as a source of support from which FG participants drew upon to address unanswered questions and receive emotional validation. White women (FG1 and FG2) often reported having support from other survivors. However, Black women (FG3 and FG4) did not make any references to providing or receiving social support from other breast cancer survivors outside of the FGs. --- Informational and emotional support from family & friends-Participants from all FGs noted the importance of family and friends to accomplish varied instrumental support and information-seeking tasks and serve as additional listening ears during physician visits.
One participant (FG3) noted the importance of family inclusion during provider visits, which can help facilitate the acquisition of necessary informational support. She said: "And I had several questions and had my daughter, mother, my husband and my son-so it was like a family-like presentation to us because these are the people that are going to have to help you outside of the medical facility." Another participant (FG2) expressed how her daughter, a pharmacist who had also been diagnosed with breast cancer, helped her navigate her own cancer. She said: "I was blessed to have my daughter who... had been through breast cancer... to answer a lot of my questions." One participant (FG4) reported that she maximized the knowledge and skills of her network to help her find the best plastic surgeon in her area, saying, "I really literally called everyone that cares about me and I care about them, and I gave them assignments. I really truly did. You tell me you find me, your job is to find who the best plastic surgeon in [...] is." Additionally, participants underscored the value of emotional support provided by networks of family members and friends. Participants in our study readily admitted to relying on spouses, children, and in-laws to provide comfort during medical visits, especially when serious or negative news from providers was anticipated. One participant (FG1) stated: "My son-in-law was exceptional. He went with me every time I went to the doctor when my daughter couldn't go, so I feel so fortunate." Another participant (FG1) added, "...It's my husband. He has been my strength through everything." --- Race-based Differences in Support White women more likely to report having other breast cancer survivors in their social support networks-Despite having many similarities in needs, we observed some race-based differences in sources of support.
White participants frequently noted the importance of relationships they had with other breast cancer survivors who provided informational support during participants' active and adjuvant treatment phases based on first-hand knowledge and insights from their own cancer experiences. For example, one White participant (FG2) expressed gratitude for the small network of long-term breast cancer survivors with whom she was able to talk and receive reassurance during her treatment. She said: "I've got a lot of support in my work group and my sister has had breast cancer but she is 15 years older and she lives in Missouri...it's nice to be able to sit down and talk to people who have been through it recently." Conversely, there were no explicit references made by Black women in our sample to receiving or providing support from other breast cancer survivors outside of the FG. White women more likely to report addressing other breast cancer survivors' emotional needs-Unlike Black women, White women in our sample also reported finding mutual benefit in providing emotional support to other breast cancer survivors in their lives. Our participants noted the importance of having someone-even a complete stranger-minister to their emotional needs during temporal moments of fear, uncertainty or hopelessness. This was particularly the case among older White women in our study who often expressed the need for survivors to be sensitive to others' emotional needs. For instance, one participant (FG2) recounted an experience in which she was able to provide some comfort to another patient during a short elevator ride. She said: "'Oh, we all you know, didn't know how to do and what to do.' And she looked so floored that I said, 'Would a hug help?' And she said, 'I think it would,' and so I hugged her, and she said she had been going through another type of cancer for 12 years and the breast cancer stuff was new." 
Interestingly, all of the FGs created environments where participants were able to give and receive support. In fact, in every FG a spirit of sisterhood was fostered among some participants. In FGs of Black participants (FG3 and FG4), experiential support more often took the form of seeking and providing informational support, whereas in FGs of White participants it more often took the form of seeking and providing emotional support. Black women more likely to provide informational support to each other during FG interviews-Some Black participants used FG discussions to query others about tumor growth (e.g., "Do everybody think that when they are being diagnosed with breast cancer it is always a lump in their breast?" (FG4)) and genetic testing (e.g., "Now was it a hormone that was causing the tumor or the cancer to grow faster in any of you? Did you have a hormone?") and to share tips and over-the-counter products for combating AET medication side effects. One Black participant who worked in a pharmacy made a point of providing medical information she knew by virtue of her occupation to other women in the FG. Sometimes, as was the case for one exchange in FG4, participants provided informational and emotional support concurrently. For example, Black participants (FG4) acknowledged the harsh effects of active treatment while providing affirming statements to one woman who felt self-conscious about her radiation burns. White women more likely to provide emotional support to each other during FG interviews-Study participants, regardless of race, noted that sharing their personal experiences with breast cancer and subsequent treatments in safe, supportive environments with other women "who are going through the same thing" was meaningful and spiritually helpful. Several women even suggested meeting monthly for lunch. Yet, White women were more likely than Black women to explicitly provide emotional support to other FG members during the interviews.
For instance, one participant said to another (FG2): "I just met you, what an hour ago? I'd hug you because sometimes you just feel like you need that, you just need somebody to say, 'Oh, I know what you mean. I've been through that, too.'" The following conversation, sparked by one participant (FG2), about hair loss in the adjuvant phase is a telling example.

Participant A: Is anyone else losing their hair?
Participant B: My hair is coming out, and it's so thin now.
Participant C: Mine was thin before.
Participant B: But this is just from breast cancer, not the Anastrozole. Mine is coming in thicker than I had before.
Participant A: I just got a clip on (laughs). The top is mine.
Participant C: Yeah, it looks good!

Similarly, White women in our study did not pass judgment when one participant admitted to being nonadherent to their AET (FG2).

Participant D: How often I forget to take the medicine. (Women make sounds of concern.) You know maybe once a week I forget it.
Moderator: And that would be a little uncomfortable?
Participant D: Well, yeah because it's to save my life! You know, what's my problem?
Participant E: I think everyone forgets every once in a while.

White women more likely to report a desire for stage-specific support groups-White women in our study who had been newly prescribed an AET medication discussed the importance of sharing their experiences with other breast cancer survivors in similar stages of treatment. These women noted that their cancer experiences often differ from other family members who were diagnosed years prior because of new medical advancements; thus, they reported wanting opportunities to connect with women "who have been through it recently." For one participant (FG2), the absence of social support because of limited familial or friendship networks made experiential support provision from FG participants even more important.
This was reinforced by another participant (FG2) who expressed concern about other women with breast cancer who might not have an extensive social network from which to derive support, saying, "And you know you don't know who doesn't have anybody here in town. You don't know what we all are going through and how much we rely on other people or don't." Participants suggested the WCC should facilitate monthly social support groups for newly diagnosed women with breast cancer in addition to general or topic-specific support groups. --- Conclusions Our study found that women with early-stage breast cancer have a variety of informational and emotional social support needs during AET. The presence of relatives and other allies to accompany patients during medical visits was a key factor in meeting participants' emotional and informational needs. Instances of this were recounted as crucial to processing information during encounters with healthcare providers, especially when family and friends functioned as emotional buttresses that made information more easily absorbed. Despite some similarities in experiences among all participants, White women frequently reported receiving and providing support from other breast cancer survivors, while explicit references to this type of support were absent for the Black participants. Experiential support provision among study participants was noted in all FGs. However, Black women were more likely to provide informational support and White women more frequently provided emotional support to each other. In each group, participants developed camaraderie and sisterhood with each other. They provided informational support by asking questions about treatment and giving advice about symptom management and expectations. They provided emotional support by validating commonalities in symptom experiences and by extending gestures of affection and care to each other. 
Consistent with our findings, previous research on Black survivors found that they often utilized support from friends and family, and never referenced support from other survivors. These studies also noted that Black women are more likely to rely on God for support. 29,30 Still, it is possible that having a more limited support network drives Black women to rely on God. Another study among primarily White participants found that support from formal groups with other survivors and informal support from family and friends are essential to post-primary treatment well-being. 31 Our study expands upon the previous research by juxtaposing needs and illuminating differences in the manifestation of social support among both White and Black patients. The importance of experiential social support in the form of reassurance and validation from others with breast cancer was a central theme in other qualitative studies examining the lived experiences of breast cancer survivors. 31,32 Though all participants in our study acknowledged that they relied on a network of family, friends, and even relative strangers to meet their informational and emotional support needs, Black women did not bring up other survivors as part of the support they received. In several instances among White participants, family members and friends were also breast cancer survivors, and the support they provided was essential to FG participants during the challenges of cancer diagnosis and treatment. In FGs of Black women, participants readily exchanged experiential support with each other, but they did not explicitly mention other cancer survivors as being part of their existing networks. The seeking and provision of informational support by Black women is also consistent with past research that suggests that individuals from racial/ethnic minority groups are less likely than White patients to report having their informational needs met.
33, 34 This suggests that convening breast cancer support groups for Black women composed of other Black survivors could be particularly beneficial in meeting their social support needs. Perhaps connections with other survivors are not being accessed as easily by Black women as by their White peers because of sociocultural factors unexplored in the current study. Past research suggests that formal breast cancer support groups that include participants with a significant range of treatment phases and experiences may be less helpful in meeting patients' needs. 35 Our participants expressed similar sentiments, stating that meeting with women going through the same phase of treatment was more helpful than having discussions with women who had gone through it years ago. While there are some support groups that target specific race/ethnicities, 36 few target specific treatment phases. Given that social support is important for cancer outcomes and social networks and social support groups are underutilized, our findings suggest that providing smaller, race- and treatment phase-specific groups might be a more effective and impactful way of reducing deficits in support. By leveraging experiential support, prior literature suggests that adopting and encouraging peer mentorship programs leads to greater satisfaction and fulfillment of needs. 37 Thus, women might also benefit from one-on-one peer mentors 38 to fully capitalize on empowering experiential support. Women with limited social networks and fewer personal resources may especially benefit from experiential peer support. 39 Digitally connected technologies and online support groups 40 might be a novel way to connect patients in similar phases of treatment and life experiences who may not be able to connect locally.
--- Limitations and Strengths This paper is the first to qualitatively analyze the social support needs of women in the adjuvant phase of their breast cancer treatment, with a specific focus on race-based differences in experiential support. Moreover, this study incorporated the perspective of a group not usually well represented in research and employed race-stratified FGs, using race-concordant moderators, to facilitate and enrich discussions. Therefore, this study offers valuable insights into the shared and different needs that arise from diverse viewpoints among survivors in the adjuvant phase of treatment. Future research should approach this research question quantitatively and experimentally to assess the degree to which experiential support from women of similar backgrounds might be associated with improvements in outcomes. Still, this study also had some limitations. Despite moderators' efforts to mitigate this, some of the more assertive personalities in the group might have dominated the discussion and influenced the results and themes that emerged from conversations. Some women might have agreed with some of the discussion but might not have spontaneously shared the same perspective if the methodology were different (e.g., one-on-one interview). Finally, generalizability is limited due to the nature of qualitative research. --- Clinical Implications Our findings highlight the importance of assessing social support needs in the adjuvant phase and offering resources to meet deficiencies in support. Prioritizing ways to foster and encourage experiential support could be a way to fill the gap left by decreased healthcare encounters following primary treatment. Our findings suggest that support groups that are more homogeneous and targeted to specific treatment phases may be better suited to meet the varying needs of patients.
This can be accomplished through healthcare and community-based organizations and online communities creating formal and informal support groups or one-on-one peer groups targeted to treatment phase. Social support from friends and family as well as experiential support from other breast cancer survivors are needed to help women navigate their adjuvant care. Knowing and understanding the nuances of support needs are crucial first steps to developing novel interventions that capitalize on the saliency of experiential support to fill unmet needs for these populations. Ultimately, such interventions should address needs by facilitating connections among survivors, offering more avenues to receive support from the healthcare team, and encouraging women to utilize their existing networks by inviting family and friends to be active contributors in their care. --- Supplementary Material Refer to Web version on PubMed Central for supplementary material. --- Abstract Social support is a critical component of breast cancer care and is associated with clinical and quality of life outcomes. Significant health disparities exist between Black and White women with breast cancer. Our study used qualitative methods to explore the social support needs of Black and White women with hormone receptor-positive breast cancer on adjuvant endocrine therapy (AET). We conducted four focus group (FG) interviews (N=28), stratified by race (i.e., Black and White) and time on AET. FGs were audiotaped, transcribed, and analyzed according to conventions of thematic analysis. Participants noted the importance of having their informational and emotional social support needs met by friends and family members. White participants reported support provided by others with breast cancer was crucial; Black women did not discuss other survivors as part of their networks.
Notably, both White and Black participants used the FG environment to provide experiential social support to each other. White participants noted that having other breast cancer survivors in their support network was essential for meeting their social support needs. However, Black participants did not reference other breast cancer survivors as part of their networks. Cancer centers should consider reviewing patients' access to experiential support and facilitate opportunities to connect women in the adjuvant phase.
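The coding reliability reported in this study's Methods (89.6% percent agreement, Cohen's κ = 0.87) combines raw agreement with a correction for chance. A minimal sketch of the κ computation in plain Python; the two coders' label sequences below are hypothetical illustrations, not the study's actual codes.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently assign the same label.
    expected = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes from two coders over ten transcript excerpts
# ("info" = informational support, "emot" = emotional support).
coder_1 = ["info", "emot", "info", "info", "emot", "info", "emot", "emot", "info", "emot"]
coder_2 = ["info", "emot", "info", "emot", "emot", "info", "emot", "emot", "info", "emot"]
kappa = cohens_kappa(coder_1, coder_2)
```

For these labels, raw agreement is 0.9 while κ = 0.8, illustrating how κ discounts the agreement that two coders would reach by chance alone.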
Introduction Sustainable smart cities are complex living ecosystems that involve diverse stakeholder participation. Both public and private sectors have led urban development that mobilizes information and communication technologies, as such development requires substantial funds and infrastructure. The technologies are geared toward developing several smart city capabilities to satisfy users' demands, so it becomes essential to consider citizens' participation and to satisfy local needs in the context of socio-technical transitions [1]. These trends make cities powerful places to observe and pilot urban transformation [2][3][4]. The United Nations (2022) emphasizes the digital development agenda in cooperation with diverse stakeholder partnerships, particularly engaging citizens and private sectors in a whole-of-government and whole-of-society approach globally [4]. According to the UN-Habitat World Cities Report 2020, smart cities are rapidly deploying technology to address various challenges and to meet the digital development agenda [5] through collaboration across multiple fields, including urban planning, transport planning, administration, healthcare, economics, infrastructure, environment, weather, safety, security, public services, community engagement, and research and innovation [4]. Smart cities have evolved from technological platforms for managing urban resources to innovation generators with the participation of public, private, citizen and non-governmental sectors to satisfy local demands and deal with local challenges [6]. In other words, smart cities are not only innovative engines or --- Smart Cities as Complex Systems The concept of cities as complex systems has been developed by various fields, including social science, ecology, business management, and smart city studies, after it was introduced in the biological sciences in the early 20th century [12].
It was later adapted and expanded by Ludwig von Bertalanffy through his revised General Systems Theory [69]. When the concept initially appeared in contemporary science, it ignored the interactions of various fields and emphasized isomorphic laws that unify the individual sciences vertically into an organized wholeness. Bertalanffy argued that the existing theory ignored the local events and dynamic interactions manifest in mathematical approaches, and suggested that, in open-systems approaches, the entropy of the system settles, through dynamic interconnection, into a fixed arrangement in models of equifinality, feedback, and adaptive behavior. The modified general systems theory transformed the concept of cities into distinct collections of interacting entities in equilibrium, which first influenced planning and management processes through top-down approaches, as in ideal cities, for example, the Ville Radieuse, or those of other modern architects [70]. Michael Batty (2017) explained that the spatial structure is treated as being in equilibrium even though technologies and fashions have triggered many social, economic, and environmental changes, and that cities are in fact in states far from equilibrium, considering urban dynamics across historically evolved economic cycles that coincide with scientific and technological advances, cultural movements, and migrations of population in relation to climate change or physical conditions [71]. In addition, physical equilibrium is described as out of sync with disequilibrium events, even within a city's subsystems [72]. The perspectives of scale in the urban environment create diverse stances toward the phenomenon of city development. Cities have unique systems from micro perspectives, as Michael Batty mentioned, while from macro perspectives they have universal, spatiotemporal systems.
The urban dynamics model that Jay Forrester demonstrated dealt with the life cycle of city development based on three internal subsystems, namely industry, housing, and people, controlled by an external environment [73]. The model proposed that adjustment between the attraction of the internal systems and the total attractiveness of the city is needed to develop the accommodation capacity of the city [73]. Bettencourt and West (2010) found that the scale of cities is a significant determinant of the characteristics of cities, and that the development route follows city size, which positively correlates with crime rates, GDP, and income; this relationship is called a scaling law [74]. They also showed that urbanization makes cities greener, more efficient, more prosperous, and safer as the cities adapt [74]. On top of the scaling law, Bettencourt et al. (2007) discovered successive cycles of superlinear innovation, led initially by biological organization, by sociotechnical organization in the middle, and later by individual organization; these processes are tied up with the degree of urbanization, economic development, and knowledge creation [75]. In other words, the time scales of emerging innovation become shorter as the population increases and becomes more connected than before. Smart cities comprise multiple intelligently connected systems and integrate material construction, agencies, cultures, living creatures, and services in systems of subsystems: intricate systems of subsystems that integrate interactions between ICT-based urban services and a diverse range of stakeholders. Within the smart city service model, terms such as factors, services, domains, components, systems, layers, and sensors are often used to refer to various segments, while groups of segments are commonly referred to as architectures, models, systems, frameworks, and dimensions.
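The scaling law discussed above can be sketched numerically. In the literature, urban indicators follow a power law of population, Y = Y0 * N^beta, with beta > 1 for socioeconomic outputs and beta < 1 for infrastructure; the coefficients below are illustrative assumptions, not values from the cited studies.

```python
# Illustrative sketch of urban scaling, Y = Y0 * N**beta (assumed coefficients).
# beta > 1 (superlinear): socioeconomic outputs grow faster than population.
# beta < 1 (sublinear): infrastructure grows more slowly than population.

def scale(population: float, y0: float, beta: float) -> float:
    """Scaling relation Y = Y0 * N**beta."""
    return y0 * population ** beta

small, large = 100_000, 1_000_000  # a 10x difference in city population

# Superlinear output (e.g., GDP): a 10x larger city yields more than 10x output.
gdp_ratio = scale(large, 1.0, 1.15) / scale(small, 1.0, 1.15)

# Sublinear infrastructure (e.g., road length): less than 10x.
infra_ratio = scale(large, 1.0, 0.85) / scale(small, 1.0, 0.85)

print(round(gdp_ratio, 2))    # 10**1.15 ≈ 14.13
print(round(infra_ratio, 2))  # 10**0.85 ≈ 7.08
```

The asymmetry between the two exponents is what makes larger cities simultaneously more productive and more resource-efficient per capita, as the cited studies argue.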
To address interoperability issues between technologies and services, to facilitate knowledge expression methods, and to maximize synergies between ICT infrastructures, various telecommunications and electronics researchers have introduced service-oriented reference architectures from technological perspectives [63,[76][77][78]. Other perspectives prioritize multi-stakeholder partnerships and co-creation networks that exchange knowledge [63,[79][80][81][82][83]. Numerous researchers have also organized service models from complex sociological, environmental, and economic perspectives to establish indicators that analyze city data and rank the cities [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31]. Some studies have suggested service models to identify the evolutionary paths of service development in the history of smart cities and intelligent communities [15,65]. ICT-based services and multi-stakeholder partnerships facilitate the transformation of e-government into smart governance [84]. City governance, which the government develops, is an extended term with multi-faceted and multilevel systems of stakeholders, sectors, and agencies [33,84]. The concept of smart city governance has been extensively discussed in the context of socio-technical transitions. The theory from multilevel perspectives was first framed by Geels [58], who was influenced by Nelson and Winter's evolutionary economics [85], Dosi's technological trajectories [86], Malerba's sectoral systems [87], Carlsson's technological systems [88], and Bijker's technological systems in relation to sociology, institutions, and rules [89]. Smart city governance has developed from e-government and digital government.
Unlike government, which refers to a form of authority, governance refers to the processes, actions, traditions, and institutions by which collective decisions are made and implemented [84]. In the context of digital transformation, e-government focuses on implementing digitizing technologies to formulate data on top of existing analog government, while digital government emphasizes satisfying local needs and users' demands by re-engineering and re-designing services and processes [90]. Within these streams of transformation, smart cities are researched from sociotechnical perspectives to contribute to a theoretical smart city governance model. Mora et al. (2021) analyze the landscape of smart cities from the perspective of socio-technical dynamics: the city is a complex, evolutionary, adaptive system for urban innovation and urban sustainability, in cooperation mainly with the public and private sectors and, to a lesser extent, with civil society [48]. Kim and Yang (2023) analyze the empirical characteristics of the evolution of conceptually related smart city services from the perspective of multi-stakeholder socio-technical transitions, and found that services advance differently depending on the phase of partnership, alongside common characteristics of developing services regardless of stakeholder partnership [34]. Some researchers have applied these perspectives to governance models. Nam and Pardo (2011) connect urban governance to e-government and innovation to make cities smarter from a techno-political perspective [68]. Calzada (2017) researched the transition of four European smart cities regarding the techno-politics of data from the perspective of multi-level governance devolution schemes [59]. Waart et al. (2016) emphasize the networking of top-down and bottom-up elements in transitions of smart city dynamics [91].
The existing literature reveals that socio-technical transition research offers rich perspectives explaining smart cities' dynamics, based on technical and social analysis, that develop a theoretical understanding of techno-governance [8,92]. There is a dearth of appropriate models for governing smart cities, which can be attributed to several factors, including diverse visions, inconsistent implementation, and oversimplified technological solutions [12,33,93]. Current models tend to focus on the interplay between technologies, services, data, and buildings while neglecting the crucial connecting role of stakeholder partnerships and urban contexts. In an attempt to deal with this issue, Robert et al. (2018) proposed a conceptual model for smart city governance based on 13 indicators encompassing components such as services, technologies, stakeholders, legislation, and structures, as well as contextual factors and outcomes [33]. In this regard, the key players in driving sustainable innovation are connected and agglomerated communities, individuals, and organizations building on frontier technologies. Calzada (2017) demonstrates the importance of devolution in smart city development for increasing ownership of, and self-responsibility for, investment in infrastructure and data [59]. Additionally, ownership of data and cities has intensified the debate on multi-stakeholder participation, since citizens and all stakeholders can be seen as tiny chips, along with artificial intelligences, inside a giant system that collects and analyzes data, as Harari (2016) argued [59,94]. However, smart city implementations in real projects still suffer from fragmentation due to variations in definitions, as well as the lack of a model that reflects the multidimensional operational nature of cities and the importance of multi-stakeholder partnerships [10,11,93].
These challenges are aggravated by the lack of a model reflecting urbanization contexts and multi-stakeholder partnerships in view of the multidimensional operational nature of cities [12]. This study addresses the gaps in the existing literature by identifying the characteristics of stakeholder partnership systems and their relationship to the implementation of sustainable smart city services. By doing so, this study seeks to contribute to a more comprehensive smart city governance model that considers the role of multi-stakeholder partnerships in realizing sustainable urban development. --- Materials and Methods Social network analysis is primarily utilized to address the study aim, based on the data published in Kim and Yang's (2023) research [34]. As demonstrated in Figure 1, the study commences with a research question motivated by the smart city governance challenges explained in the preceding section. The primary research question is then broken down into three objective research questions directly corresponding to the study objectives. In essence, this research aims to identify the characteristics of service implementation depending on stakeholder partnerships, framed by the major research question: how are stakeholder partnership systems networked with the implementation of conceptually related smart city services from the perspectives of governance and sociotechnical systems? The aim is achieved through three research questions, each directly linked to one of three objectives. The first research question aims to clarify the characteristics of services in the evolution of smart cities, while the second focuses on demonstrating the different service phases developed depending on stakeholder partnerships.
The third research question identifies connected services and stakeholders, assuming that smart cities are connected both virtually and physically. The research question is closely linked with the study aim and the concept, which comprise measurable indicators that establish the study framework [34,95], as illustrated in Figure 1. The concept encompasses six aspects, namely social, technological, governmental, economic, environmental, and managerial factors, and a single keyword, namely the sustainability of urban services. These aspects and keywords are utilized to provide a background for analysis and to select target cities. European cities were selected as the target cities using cluster sampling from a population of 221 smart cities investigated by Smart City Tracker 1Q18 [96]. The study population was derived by combining three ranking lists of sustainable smart cities that represent the six aspects of the concept. These ranking lists are the United Nations-Habitat Global Urban Competitiveness Report (for the social, economic, environmental, and managerial aspects) [97], McKinsey & Company's Smart Cities: Digital Solutions for a More Livable Future (for the technological aspect) [98], and the United Nations E-Government Survey 2020 (for the governmental aspect) [84]. As shown in Table 1, the study's first selection resulted in 36 cities after removal of cities mentioned more than twice in the three ranking lists.
The top 20 smart cities from the ranking of smart city performance published by Juniper Research were integrated with the first screening results to select high-performing sustainable smart cities in the second sampling, which resulted in 12 cities. Finally, European cities were chosen because they have been leaders in e-government development within the United Nations E-Government Development Index (EGDI), an index of online services, telecommunications, and human capital, since 2010 [4,99]. As this study aims to identify characteristics of conceptually related smart city service implementations from the perspectives of governance and sociotechnical transitions, the selected cities embody the critical elements of the concept of sustainable smart cities and stand out as pilot cities for building smart city governance within the paradigm of sociotechnical transitions. The target cities for this study are Barcelona, London, and Berlin. On the basis of the research questions linked with the study aims, this research selects data regarding conceptually related smart cities for the three target cities. The data are from Kim and Yang's (2023) article [34], which coded and weighted datasets on the three cities' projects and plans from 1969 to 2021. In an analysis aligned with the study aim and the concept of sustainable smart cities, the researchers formulated a city-level dataset configured with categories including year, data sources, stakeholders, services, converted number of stakeholders, the sum of the converted number of stakeholders, and converted weights of services [34]. The data, categorized by events, services, year, and stakeholders, were weighted and coded using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses protocol.
A weight of "1" is assigned to each year equally; that weight is then divided by the number of participating stakeholders in that year, and the divided weights are distributed to each stakeholder [34]. This means that individual services are given different weights based on the number of implemented services and participating stakeholders in that year, thereby giving equal weight to services and stakeholders that are correspondingly aligned in the same year [34]. A social network analysis is utilized for this research. This is a specific application of graph theory, originating from Euler's mathematical investigations, that represents social actors as points and social relations as lines. The approach is rooted in German sociology, where Georg Simmel and others emphasized the formal properties of social relations [100]. Alfred Vierkandt and Leopold von Wiese adopted the terminology of nodes, edges, and connections that makes up social network analysis, while Moreno provided the idea of sociometry, a type of dataset depicted as a sociogram [101]. Social network analysis has been utilized to identify corporate power and interlocking directorships [102]. Subsequent research has explored the power and influence of banks [103]. It has also been utilized to analyze community structures within business networks [104], and to identify and map knowledge flows between organizations since it was introduced to the fields of regional and innovation economics [105]. Ma (2023) breaks the collaborative innovation network into spatial and topological networks using social network analysis [105]. Radulescu et al. (2020) deploy it to define critical competencies and human resources in the innovation network from the collaborative model of a smart city [106]. Mora et al.
(2019) utilize the analysis to show the two-dimensional network of actors collaborating to enable smart city development in New York City in a quadruple-helix collaborative model of stakeholder engagement [48]. Sconavacca et al. (2020) explore the active areas in the evolution of smart city research using the method [107]. Kim and Yi (2018) analyze the coherence between national and local smart city plans regarding keywords and service elements [108]. The existing literature on the method has developed to reveal the relations between social activities and organizations in the urban environment. At the same time, it highlights the interdisciplinary traits of smart cities research with regard to the diverse aggregation and disaggregation of services, technologies, infrastructures, and stakeholders. Social network analysis is utilized to confirm the data relations among systems of services, stakeholders, and cities with regard to the research aim. The study mainly utilizes Gephi 0.9.2, an open-source, interactive tool for the visualization and examination of simple and complex networks and dynamic and hierarchical graphs [109]. Gephi 0.9.2 needs two datasets for social network analysis: a node dataset comprising the network of actors and an edge dataset consisting of a list of relations between actors. The node list consists of three columns: 'id', 'label', and 'attribute'. The label column includes the names of all actors, including services and stakeholders. The 'id' is set up with constant numbers, which serve as links between the node dataset and the edge dataset. The edge list utilizes the stakeholders, services, and their weights. The dataset is transformed into an edge list containing four columns: source, target, type, and weight. The source and target columns, initially filled with words, must then be substituted with the corresponding 'id' referring to the identical 'label' in the node dataset.
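The construction of the two tables, together with the per-year weight distribution described earlier, can be sketched as follows. The records below are hypothetical stand-ins for the coded dataset of [34]; only the table structure (an 'id'/'label' node list and a source/target/type/weight edge list) follows the description above.

```python
# Hypothetical (year, stakeholder, service) records standing in for the coded
# dataset; the names are illustrative assumptions, not the real data.
records = [
    (2015, "public",  "transport"),
    (2015, "private", "transport"),
    (2016, "public",  "energy"),
]

# Group participations by year: each year carries a total weight of 1,
# split evenly across the stakeholder participations recorded in that year.
per_year = {}
for year, stakeholder, service in records:
    per_year.setdefault(year, []).append((stakeholder, service))

# Node table: one row per unique actor (stakeholder or service), with an
# 'id' that links the node and edge tables, as Gephi expects.
labels = sorted({s for _, s, _ in records} | {v for _, _, v in records})
node_id = {label: i for i, label in enumerate(labels)}
nodes = [(node_id[l], l) for l in labels]

# Edge table: source/target words replaced by their corresponding ids,
# with an 'Undirected' type and the distributed weight per participation.
edges = []
for year, pairs in per_year.items():
    w = 1.0 / len(pairs)
    for stakeholder, service in pairs:
        edges.append((node_id[stakeholder], node_id[service], "Undirected", w))
```

With these toy records, the two 2015 edges each receive weight 0.5 and the single 2016 edge receives weight 1.0, so every year contributes equally to the network regardless of how many stakeholders participated.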
After completing the dataset, Gephi 0.9.2 is used to analyze the relationship between services and stakeholders by importing the node table first and then appending the edge table to the previously opened node table. Among analysis functions such as average degree, average weighted degree, network diameter, graph density, HITS, modularity, PageRank, and connected components, the average weighted degree, betweenness centrality, and eigenvector centrality are utilized in this study. The average weighted degree reflects the cumulative weight of a node's connections to surrounding nodes, capturing node and connection frequency. The eigenvector centrality indicates neighbors' weighted centrality: it considers not only the number of connected nodes but also those nodes' own centrality, so an actor connected to neighbors with high connection centrality exerts greater influence on the network than one connected to neighbors without it. The betweenness centrality, introduced by Freeman, identifies critical actors in the network by enumerating shortest paths [110]. A node with higher betweenness centrality can channel otherwise blocked or siloed information from field to field, and so has the potential to rise in power [111,112]. The three network analyses are visualized in the preview tab, and the detailed data are identified in the data laboratory tab. Lastly, the interpretation process addresses the research questions concerning, first, the network between cities and services; second, the stakeholders and services; and last, the cities' organic network systems in consideration of the three factors.
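Two of the three measures can also be reproduced outside Gephi. The sketch below computes weighted degree directly and eigenvector centrality by power iteration on a toy stakeholder-service network; the node names and weights are illustrative assumptions, not the study's data, and betweenness centrality is omitted for brevity.

```python
# Toy undirected, weighted stakeholder-service network (illustrative names).
edges = [
    ("public",  "transport", 0.5),
    ("private", "transport", 0.5),
    ("public",  "energy",    1.0),
    ("people",  "energy",    0.5),
    ("people",  "health",    0.5),
]

nodes = sorted({u for u, _, _ in edges} | {v for _, v, _ in edges})
adj = {n: {} for n in nodes}
for u, v, w in edges:
    adj[u][v] = adj[v][u] = w

# Weighted degree: the sum of incident edge weights (Gephi's "weighted degree").
weighted_degree = {n: sum(adj[n].values()) for n in nodes}

# Eigenvector centrality by power iteration on A + I: a node scores highly
# when its neighbors score highly, the intuition described in the text.
# (The +x[n] shift guarantees convergence on this bipartite-like graph.)
x = {n: 1.0 for n in nodes}
for _ in range(200):
    x = {n: x[n] + sum(w * x[m] for m, w in adj[n].items()) for n in nodes}
    norm = sum(v * v for v in x.values()) ** 0.5
    x = {n: v / norm for n, v in x.items()}

print(weighted_degree["public"])  # 1.5
```

Here "public" has the highest weighted degree (0.5 + 1.0 = 1.5), while the eigenvector scores additionally reward nodes whose neighbors are themselves well connected, mirroring the distinction the study draws between frequently implemented services and well-mediated ones.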
--- Results --- Characteristics of the Conceptually Related Smart Cities Services The early smart city development features are reflected in the accumulated characteristics of conceptually related smart city services depending on stakeholder partnerships. These results support and expand the existing literature. This paper provides results for the identical phases of the three cities' partnerships, which were qualitatively measured in the existing literature, Kim and Yang's (2023) study [34]. As shown in Table 2, it is revealed that for the three cities, the public and private sectors are mostly the leading stakeholders. When one sector is selected as the leading stakeholder and the rest as the other stakeholders, the analysis reveals Barcelona in a public-people partnership, London in a public-academic-NGO partnership, and Berlin in a private-people partnership, as indicated by yellow shading in Table 1. Moreover, this study expands the existing studies by identifying the early development of smart city services within the current development of smart cities and the accumulated local contexts of developing smart city services. The services in the top 10 weighted degrees, highlighted in green in Table 1, are related to the early development of smart city services illustrated in Figure 2. For instance, Barcelona developed social, economic, architectural, governance, transportation, data, and infrastructure services at the beginning, during the launch of the first stage of the @BCN Plan. The plan aimed to regenerate the city based on citizens' creative ideas and infrastructure advancement, building a global smart city model through a holistic and comprehensive city renewal approach [56,113]. The services developed at the beginning in Barcelona, in the Knowledge-Based Urban Development Project, its initiating smart city project, have higher weights in the accumulated network results of the weighted degree.
The high-ranked London services were likewise mainly developed at the beginning of the evolution of smart cities, until the launch of the government plan Inclusion Through Innovation, except for some temporal services regarding health, history, and standardization. The high-ranked Berlin services in the weighted degree analysis are infrastructure, social, economy, data, governance, and transportation services developed from the beginning of smart cities, before the implementation of Silicon Allee. The results demonstrate that understanding the context of smart city development is crucial in developing smart cities. In this context, the early services carry significant weight, while high levels of network mediation by various stakeholders characterize the later services that evolve in sociotechnical transitions. Social network analysis provides quantitative information and correlations for understanding network variables [12]. According to Table 3, the services with higher degrees of centrality, including eigenvector centrality and betweenness centrality, receive improved ranks relative to their weighted degrees among the three cities. Barcelona and Berlin, whose partnerships commonly involve citizens, exhibit strikingly similar results when the two centralities are compared in each city independently. The citizen partnerships promote highly connected services that are sustained and strengthened, such as education, environment, and health for Barcelona, and safety services for Berlin. In other words, citizens become human resources or agencies connecting services through their data and active participation. Specifically, the architecture service in Barcelona, which has modest weight by itself, is linked with various services through the public, people, academia, and private sectors, thereby becoming prominent in the eigenvector centrality.
Conversely, the energy service in the same city, which has low weight and few links with stakeholders, was downgraded in the eigenvector centrality. Furthermore, data and knowledge play a role in transmitting information throughout sociotechnical transitions. The services' two centralities commonly receive higher weights than weighted degrees when analyzed as a whole, indicating that explicit and tacit knowledge is transferred from one generation to the next by being embedded in the development of infrastructure, social, economic, environmental, and other fundamental services, while basic services intrinsically develop data services. [Figure: weighted degree, eigenvector centrality, and betweenness centrality networks for Barcelona, London, and Berlin] The characteristics of conceptually related smart city services reflect the features of services evolution in socio-technical transitions. The phases of partnership in each city support the existing literature. The context in which a smart city was implemented influences its current development, in that the high-weighted services reflect early-developed services. The early-developed services have high weights, which reflects a high frequency of implementation, while later-developed services have high degrees of network mediation by stakeholders. Amid this evolution, some intermediating services have been highlighted recently. These results are identified through weighted degree, eigenvector centrality, and betweenness centrality in social network analysis. --- Services Developments Depending on Stakeholders' Partnerships The development of different smart city services depends on partnerships with various stakeholders. Social network analysis can provide information on decision framing and key actors, and it is a relatively quick and easy way to conduct research that encourages participation from diverse viewpoints and actors [12]. Sustainable smart cities have
--- Barcelona London Berlin Weighted Degree Eigenvector Centrality --- Betweenness Centrality The characteristics of conceptually related smart city services reflect the features of services evolution in socio-technical transitions. The phases of partnership in each city support the existing literature. The smart city implemented context influences current development in that the high-weighted services reflect early-developed services. The early developed services have high weighted services, which refers high frequency of implementation, and later developed services have high degrees of network mediated by stakeholders. Amid an evolution, some intermediated services are highlighted recently. This result is identified through weighted degree, eigenvector centrality, and betweenness centrality in social network analysis. --- Services Developments Depending on Stakeholders' Partnerships The development of different smart city services depends on partnerships with various stakeholders. Social network analysis can provide information on decision framing and key actors, and it is a relatively quick and easy way to conduct research that encourages participation from diverse viewpoints and actors [12]. Sustainable smart cities have The characteristics of conceptually related smart city services reflect the features of services evolution in socio-technical transitions. The phases of partnership in each city support the existing literature. The smart city implemented context influences current development in that the high-weighted services reflect early-developed services. The early developed services have high weighted services, which refers high frequency of implementation, and later developed services have high degrees of network mediated by stakeholders. Amid an evolution, some intermediated services are highlighted recently. This result is identified through weighted degree, eigenvector centrality, and betweenness centrality in social network analysis. 
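Two of the measures named above, weighted degree and eigenvector centrality, can be sketched in plain Python on a small hypothetical service network; the node names and weights below are illustrative, not the paper's dataset.

```python
# Toy sketch (hypothetical services and weights, not the paper's dataset):
# weighted degree and eigenvector centrality on a small service network.

# Undirected weighted edges: (service, service, implementation frequency).
edges = [
    ("infrastructure", "economic", 6),
    ("infrastructure", "data", 3),
    ("economic", "data", 2),
    ("data", "health", 2),
    ("data", "tourism", 1),
]

# Build a weighted adjacency map.
adj = {}
for u, v, w in edges:
    adj.setdefault(u, {})[v] = w
    adj.setdefault(v, {})[u] = w

# Weighted degree: sum of incident edge weights (implementation frequency).
weighted_degree = {n: sum(nbrs.values()) for n, nbrs in adj.items()}

# Eigenvector centrality by power iteration on the weighted adjacency:
# a service is central when the services it connects to are central.
nodes = sorted(adj)
x = {n: 1.0 for n in nodes}
for _ in range(200):
    nxt = {n: sum(w * x[m] for m, w in adj[n].items()) for n in nodes}
    norm = max(nxt.values())
    x = {n: v / norm for n, v in nxt.items()}

print(weighted_degree["infrastructure"])  # 9: an early, frequent service
print(max(x, key=x.get))                  # "infrastructure"
```

In this toy network the early, frequently implemented service ranks first on both measures, matching the pattern the paper describes for fundamental services.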
Sustainable smart cities have been planned and developed in cooperation with the public and private sectors, as shown in Table 2, while diverse services are connected and networked through multiple agents, as illustrated in Table 3. Although the weighted degree results of all partnerships primarily emphasize the implementation of infrastructure and economic services, this is not the only rationale for smart city development, which should not be dismissed as simply a project that develops the economy through massive investments in infrastructure and technology. The services connected across cities are diverse and depend on partnerships, as analyzed in Table 4. Public sectors sustain and enlarge service connections based on the fundamental services, while private sectors connect with emerging services that differ from those of the public sectors. The fundamental services are infrastructure, economic, social, and data services. The services led by the public and other stakeholders accumulate and attach to the service development phases of the less influential partnerships, such as public-people and public-academic-NGO partnerships. For instance, the standardization service in public-academic-NGO partnerships is added to the high-ranked public-people partnership services. Health, tourism, and media services in public-private partnerships are affixed to those in private-academic-NGO partnerships. In other words, the public-people partnership is becoming a means to develop smart cities from a humanistic perspective compared with the prevailing public-private partnerships.
Even though there are gaps between partnerships with public sectors, some services are mainly led by their partnerships, including governance, education, safety, environment, transportation, and architecture services. Meanwhile, partnerships with the private sector and other players reveal different service connections than the ones with the public sector, even though their implementation seems to occur in an ad hoc way. Representative instances include tourism services in private-people partnerships and health tourism services in private-academic-NGO partnerships compared with identical partnerships with public sectors. Indeed, multi-stakeholder partnerships are crucial in addressing complex issues such as energy and waste management in urban areas. The last column of Table 4 demonstrates that multi-stakeholder partnerships are one of the alternative solutions to deal with recently emerged urbanization challenges regarding energy and waste issues. These two issues are related to CO2 emissions, which are reduced by changed behaviors in cities resulting from compact or walkable cities, e-mobility in combination with low-emission energy sources, and enhanced carbon uptake and storage using nature [114]. Production of these two services is geographically mismatched with consumption: energy is produced outside cities and consumed inside them, while waste follows the reverse pattern [115]. Technological advancements such as smart meters and smart bins can assist in identifying the average demands for households and reducing waste. In addition, urban planning can play a crucial role in developing services close to demand centers, leading to simpler networks and lower costs. The integration of energy and waste services through district heating systems can also bring benefits in terms of energy savings. Barcelona's Solar Ordinance is an excellent example of a multi-stakeholder partnership promoting renewable energy use [115].
By encouraging the installation of thermal solar panels in buildings, the city has achieved significant energy savings. Such initiatives not only promote sustainability but also have positive economic impacts, including reduced energy costs and increased job opportunities in the renewable energy sector. Note: Abbreviations: WD = Weighted Degree, EC = Eigenvector Centrality, and BC = Betweenness Centrality. Green (weighted degree) and blue (eigenvector centrality) highlights refer to services within the top five rankings. --- Networks of Conceptually Related Smart Cities The networks of conceptually related smart cities provide ways to improve stakeholder partnerships and service implementation systems. Social network analysis can facilitate understanding of socio-institutional structures, actors, linkages, and approaches to enhance knowledge transfer, including tacit and explicit knowledge [12]. When the three cities are highly connected, services related to data, education, environment, health, media, and tourism receive higher emphasis in eigenvector centrality compared with weighted degree, as presented in Table 5. The shading in Table 5 illustrates the elements that rise in eigenvector centrality compared with weighted degree. These elements include the private, people, academic, and NGO sectors, and the services concerning data, education, environment, health, media and art, and tourism. The intermediated elements increase their connectivity and importance in connected sustainable smart cities, which emphasize the human resources or agencies that connect services through their data and active participation. Private sectors play a crucial role in attracting innovations by collaborating with other stakeholders to provide various services. They cooperate with academia to research new technologies and services based on data provided by citizens and other organizations [63].
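The ranking comparison described above, which flags elements whose eigenvector-centrality rank exceeds their weighted-degree rank, can be sketched as follows; all scores are hypothetical placeholders, not the values behind Table 5.

```python
# Sketch with hypothetical scores (not the paper's Table 5 values): flagging
# "rising elements", i.e. services whose eigenvector-centrality rank is
# better (numerically smaller) than their weighted-degree rank.
weighted_degree = {"infrastructure": 9, "economic": 8, "data": 8,
                   "health": 3, "tourism": 2}
eigenvector = {"infrastructure": 1.00, "economic": 0.65, "data": 0.70,
               "health": 0.60, "tourism": 0.50}

def ranks(scores):
    # Rank 1 = highest score; ties keep insertion order (stable sort).
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {n: i + 1 for i, n in enumerate(ordered)}

wd_rank, ec_rank = ranks(weighted_degree), ranks(eigenvector)
rising = [n for n in weighted_degree if ec_rank[n] < wd_rank[n]]
print(rising)  # ['data']
```

With these illustrative numbers, the "data" service is the rising element: middling by implementation frequency, but well connected to the most central services.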
Smart cities have been developed in cooperation with the public and private sectors. From the perspective of governance systems, local government partnerships and supportive national governments are necessary to build on creative ideas from private sectors and from people who have owned their localities from generation to generation. Local government subsidies contribute to the advancement of local communities and private sectors. Devolution strengthens and amplifies the networks among infrastructure, data, social, governance, education, environment, and health services, as this research shows. Social services interconnect private sectors with other sectors. The other four services, including data, government, environment, and health services, provide fundamental linkages between the public and private sectors. In this sense, city networks, which are built up on a foundation of developing local entities, transform the service ecosystems to bring out intermediated services, even though the cities are not geographically adjacent. --- Discussion The first result reflects the urban geographic economy and demonstrates the distinct features of conceptually related smart cities as they undergo socio-technical transitions, through measures such as weighted degree, eigenvector centrality, and betweenness centrality. The context of smart city implementation plays a significant role in current developments, with high-weighted services reflecting the earliest implemented services, while the later services are essential for connecting with existing or emerging services mediated by stakeholders. Data and knowledge are among the intermediated services that have recently gained importance amid socio-technical transitions. The concepts of eigenvector centrality, betweenness centrality, and weighted degree have counterparts in urban geography.
Eigenvector centrality is correlated with self-reinforcement in the urbanization economy, while betweenness centrality corresponds to intermediate elements. As the urbanization economy shares some intermediate elements, such as business services, transportation services, public infrastructure, and labor pooling, organizations that require face-to-face contact, including corporate headquarters or knowledge-based businesses, tend to cluster as self-reinforcing factors [116]. This means that the intermediate elements are crucial to the modern economy [116]. In this paper, the services with high weighted degrees include fundamental services such as infrastructure, economic, social, data, and government services, while the service with high betweenness centrality is data, which connects the high-weighted services and therefore also has high eigenvector centrality. In this light, services with high betweenness centrality have the potential to drive emerging industries or services, similar to what occurs in the urban geographic economy. The use of connected intelligent data, including artificial intelligence, can serve as a unifying force to link urban services, living organisms, organizations, and environments within a governance model in order to create a safer and more prosperous world in densely populated and centralized areas [59,117]. Accordingly, the ecosystems of evolving service systems are identified through the geographic economy. Extending this to the relationship among geography, organization, and specific fields, Ma (2023) clarifies multi-proximity factors driving dynamics, including geographical proximity, research-contextualized cognitive proximity, and organizational proximity [105]. The first result extends the existing literature by clarifying how the initially implemented services frame and influence the later network of services.
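The bridging role attributed to high-betweenness services can be illustrated on a toy unweighted graph, where a hypothetical "data" node is the only link between two service clusters; the graph and node names are illustrative only.

```python
from itertools import combinations

# Toy unweighted graph (illustrative): "data" bridges two service clusters.
adj = {
    "infrastructure": {"economic", "data"},
    "economic": {"infrastructure", "data"},
    "data": {"infrastructure", "economic", "health", "tourism"},
    "health": {"data", "tourism"},
    "tourism": {"data", "health"},
}

def shortest_paths(s, t):
    # Level-by-level BFS that keeps every shortest path from s to t.
    paths, frontier = [], [[s]]
    while frontier and not paths:
        nxt = []
        for p in frontier:
            for m in adj[p[-1]]:
                if m == t:
                    paths.append(p + [m])
                elif m not in p:
                    nxt.append(p + [m])
        frontier = nxt
    return paths

# Unnormalized betweenness: for each node, the fraction of shortest paths
# (over all node pairs) on which it appears as an intermediate node.
betweenness = {n: 0.0 for n in adj}
for s, t in combinations(adj, 2):
    paths = shortest_paths(s, t)
    for p in paths:
        for mid in p[1:-1]:
            betweenness[mid] += 1 / len(paths)

print(max(betweenness, key=betweenness.get))  # "data"
```

Every shortest path between the two clusters passes through the bridge node, so "data" collects all the betweenness while the other services collect none, mirroring the paper's claim that data mediates the high-weighted services.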
It highlights the necessity of understanding the context of smart city services evolution within socio-technical transitions for making sustainable smart cities. Moreover, the study indicates that the smart cities' service development ecosystems are analogous to the urban geographic economy regarding the relationships among stakeholders or organizations, intermediated services, and self-reinforced services in the urbanization economy. The second finding underscores the importance of multi-stakeholder partnerships from the perspective of service development. It presents various stages of developing conceptually related smart city services, depending on stakeholder partnerships. Public-private partnerships have been increasingly utilized in the implementation of smart city services and urban planning in recent years. The concept of public-private partnerships in urban governance, which is influenced by neo-liberalization, intends to achieve a common goal, often in the form of infrastructure development or service provision. While public-private partnerships have the potential to bring innovation and efficiency to urban planning, there are also concerns that they may prioritize profit over public interest and may not adequately address issues of social equity and environmental sustainability [118]. In this context, the concept of communicative planning has emerged as an alternative approach that seeks to involve diverse stakeholders in decision-making processes. This approach prioritizes inclusivity and seeks to ensure that all voices are heard and taken into consideration in the planning process. By doing so, communicative planning can help to avoid the isolation of vulnerable ecosystems and species and promote more sustainable and equitable outcomes. Moreover, multi-stakeholder partnerships have the potential to offer an alternative solution to emerging challenges of urbanization, particularly regarding the issues of energy and waste.
The telecommunications field endeavors to establish dual top-down and bottom-up smart city service systems that are ontology-based and mediated by data to provide improved services to all stakeholders with limited resources [63]. Urban planning balanced between top-down and bottom-up approaches has the potential to provide solutions to future smart city challenges by encouraging citizens to interconnect with urban systems and organizations, mobilizing sustainable smart cities based on vision and on measurable and controllable elements in master plans [119]. The diverse stakeholder partnerships include cooperation among the public, private, academic, and NGO sectors and embrace the concept of devolution or decentralization. Devolution, referring to a city or region's definite ownership of, and self-responsibility for, investment, contributes to increasing data awareness among stakeholders [59]. The connectivity and growing power of regions or cities lead to devolution, as occurred in the 20th century in response to colonization [120]. The sustainable development goals indicate two innovation approaches which are gaining prominence: the vital role of local leaders in driving global change through the transformative power of urbanization (equity innovation with multi-stakeholder input), and in leaving no one behind (inclusive innovation) [4]. Interactions with local government foster connectivity with more local players, leading to prompt and suitable actions toward local challenges with voluntary participation from diverse stakeholders, which results in improvements in services and connectivity with decision-making processes [121]. The New Urban Agenda calls for appropriately balanced governance systems among the national government, subnational and local governments, and relevant stakeholders to revitalize, strengthen, and create partnerships [122].
In this sense, this study provides a managerial contribution regarding what types of partnerships are appropriate for European sustainable smart cities to promote specific services. Furthermore, it empirically demonstrates the necessity of multi-stakeholder partnerships for making sustainable smart cities. The last finding concerns the city network. The concept of a city network has been discussed in line with glocalization, a concept that appeared in the Harvard Business Review [123], conurbation as mentioned by Patrick Geddes [124], and city knowledge exchanges in city expositions or exhibitions. City networks have traditionally been considered among geographically adjacent cities, as in the concepts of conurbation and decentralization underlying city-regional development. However, with emerging smart city development, the direction has changed into networking for improving governance systems and technologies, taking into account existing assets, budgets, challenges, and the background of socio-technical urban transitions. A typical instance is a city memorandum of understanding between public organizations or between public and private organizations. Calzada (2017) argues that cities have the power to compete or cooperate as investment destinations, so that the national government does not necessarily distribute subsidies to them [59]. Notably, the private sector has been at the forefront of digital transformation, especially during the COVID-19 era [4]. This is based on supportive local government investment and legislation. The United Nations highlights that the next generation of digitalization requires an ecosystem-centric approach in which the public sector plays an entrepreneurial role in spurring innovation with private sectors, based on fruitful research in high-growth and high-risk areas, and brings together diverse stakeholders for long-run growth strategies [4]. Serrano et al.
(2020) raise the issue that smart city networks can become a regional gateway to expand the business of multinational firms rather than empowering medium-sized cities or small national firms. In this regard, the local government should take an entrepreneurial role to empower local communities and corporations to germinate local innovation and expand their influence globally with other smart cities as a form of multinational organization. This paper empirically identifies the emerging stances of private sectors under the assumption of connected cities, services, and stakeholders as a whole in sociotechnical transitions. --- Conclusions Sustainable smart cities are multi-faceted living ecosystems developed through diverse stakeholder participation in sociotechnical transitions. However, the lack of a suitable governance model that incorporates the various components of these systems, such as connected technologies, data, services, stakeholders, organizations, and legislation, has led to indiscriminate development under a variety of names and to oversimplification of technologies without consideration of the local context and traits of smart urbanization in sociotechnical transitions. This study aims, from the governance and sociotechnical systems perspectives, to identify the characteristics of conceptually related smart city service implementations depending on stakeholder partnerships. To achieve this goal, this study pursues three objectives: (1) to clarify the characteristics of services in the evolution of conceptually related smart cities by expanding on the existing literature, (2) to demonstrate the different phases of developing conceptually related smart city services depending on different stakeholder partnerships, and (3) to identify connected services and stakeholders, assuming that conceptually related smart cities are connected virtually and physically.
The application of social network analysis illustrates the relationships among stakeholders, services, and cities in establishing a smart governance model. The data for the method are based on the findings of Kim and Yang's (2023) study [34], as their objectives and ideas pertaining to sustainable smart cities align with this study. The target cities selected for analysis are European sustainable smart cities, given Europe's continued leadership in e-government development, as evidenced by its consistent top ranking in the United Nations e-government development index (EGDI) since 2010. Specifically, Barcelona, Berlin, and London were chosen as they exemplify the European cases for the operational definition of sustainable smart cities used in this study. The dataset on stakeholders, services, and cities reveals several key findings. Firstly, the initial services associated with the conceptually related smart cities are reflected in the accumulated and current characteristics of the smart city services, depending on stakeholder partnerships. However, the network features differ between the initial and later services. Secondly, the development of different services depends on stakeholder partnerships, indicating that multiple stakeholders, including local entities, must establish partnerships to tackle the current challenges of massive urbanization. Finally, the analysis highlights the growing presence of private sectors and intermediate services in the global network of cities. This study is subject to certain limitations despite the sophisticated structures utilized to demonstrate the empirical CRSC service evolutionary characteristics. Firstly, there is little explanation provided for how services are adapted to different geographical contexts based on stakeholder partnerships by maximizing societal benefits without incurring negative externalities. This could be addressed by consulting the existing literature, particularly Kim and Yang's (2023) study [34].
Secondly, the study lacks a temporal or geographical dimension, which could be remedied by researching geographically networked services among organizations with yearly or monthly data utilizing geographic information systems. Thirdly, the results may not be readily generalizable, as they are based on data from only three cities. To address this issue, it is recommended to analyze an identical number of sample cities on each continent, using identical methodologies for data collection, sorting, coding, classifying, and analysis, with periodic matrix taxonomy and social network analysis. Lastly, this study does not meet the urgent requirement for implementing the governance model or providing pragmatic specifications for the most effective ICT investments to be made. Nor does it contribute to the decision-making process addressing the grand challenges for European and global cities in meeting political commitments regarding climate change mitigation and adaptation. Further studies need to address these issues by conducting qualitative in-depth research to investigate the significant challenges confronting European and global megacities. Nonetheless, the results of this study have significant managerial implications, as they enable the identification of elements with high eigenvector centrality, intermediate elements, and highly implemented fundamental services, depending on different stakeholder partnerships. These findings can inform decision-making regarding services development and contribute to the development of new smart cities by creating a smart city governance model with multi-faceted, multidisciplinary, and multilevel systems of stakeholder sectors and services, all connected by multiple partnerships. Additionally, this study has theoretical implications, as it empirically demonstrates the necessity of multi-stakeholder partnerships and devolution to build sustainable smart cities.
Overall, this research can help to advance the understanding of smart city development, contributing to the practical and theoretical discourse on the subject. --- Data Availability Statement: The data presented in this study are available in [dataset] Kim, N., & Yang, S. (2023). Sociotechnical Characteristics of Conceptually Related Smart Cities' Services from an International Perspective. Smart Cities, 6(1), 196-242; https://doi.org/10.3390/smartcities6010011. Acknowledgments: Not applicable. --- Funding: This research received no external funding. --- Conflicts of Interest: The authors declare no conflict of interest.
Introduction This research maps the evolution of two local food systems over time in order to understand broader trends in the evolution of local food marketing. The trajectory and pace of change within local food networks offers clues about how rapidly their constituent parts evolve. Local food systems are thought to strengthen social ties between growers and eaters (Hinrichs 2000), giving a sense of community and shared social values that translate into shared political agendas (Obach and Tobin 2014). The resulting "alternative food network" (AFN) connects and mobilizes people toward "civic agriculture" (Lyson and Guptill 2004; Lyson 2012), forming what some scholars consider to be a social movement (Huey 2005; Starr 2010; Levkoe 2014) that, at times and in certain communities, advocates for farmland preservation (Brinkley 2018) and/or food justice (Allen 2008, 2010; Alkon and Norgaard 2009; Alkon and Agyeman 2011; Sbicca 2012). Local food activists tout broad promises of transformation, from improving diets that promote individual health (McNamee 2007; Waters 2011; Slocum 2011; Prosperi et al. 2019) to landscape-level changes (Vaarst et al. 2018) that reduce urban sprawl (Lima et al. 2000; Wekerle and Jackson 2005), boost local economies (Brown and Miller 2008; Winfree and Watson 2017; O'Hara and Shideler 2018), and enhance ecological sustainability (DeLind 2002; Horrigan et al. 2002; Altieri 2018). Investment in these promises occurs through purchasing food labeled as "local" and supporting markets that carry and advertise such food (Howard and Allen 2006, 2010; Eden 2011). In sum, local food systems engage people in more than just social connectedness-they also prompt collective action against the status quo by reorienting markets (McAdam 2003). The notion of "local food" is not a monolith, nor is there a neat dichotomy between "global" and "local" (Hinrichs 2003).
The boundaries of what constitutes "local" are blurred; the benefits of local food networks vary by community; and priorities and allegiances shift over time. In interviewing Community Supported Agriculture subscribers, Schnell (2013) finds that the notion of "local" is not an objective spatial denotation, but a social contract between food producers and consumers who share similar values. Local food may be considered food grown and consumed within 100 miles (Smith and MacKinnon 2007) or 100 yards (Schnell 2013). Food that is advertised as "local" is not always produced with the same values. While some farming operations may emphasize fair labor, not all do (Born and Purcell 2006). Further, many farmers change their positions over time on a variety of issues, from organic agriculture to animal welfare certifications. As such, this research explores the heterogeneity and changes in social ties across a variety of local food distribution practices without imposing limitations on distance. --- Analytical framework: understanding network architecture Social Network Analysis (SNA) can help food scholars understand the future trajectory of local food systems, and can help reveal locations where marketing networks are realigning with concurrent social movements. SNA is used to examine ties/relationships between network actors, such as individuals or, in our research, individual markets and farms. SNA statistics help elucidate which actors are central, and presumably more influential, to a network, playing a coordinating or broker role in transmitting knowledge, values, and political agendas. In addition, SNA can quantify the architecture of groups within a network and highlight where there are rifts or mutually reinforcing relationships. 
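In its simplest form, the centrality statistics that SNA uses to flag hubs and brokers can be computed from raw tie counts. The sketch below (all actor names are hypothetical, not from the study data) finds the candidate hub of a small farm-to-outlet network by in-degree:

```python
from collections import defaultdict

# Hypothetical directed ties: farm -> first point of sale.
edges = [
    ("Farm A", "Farmers Market 1"),
    ("Farm B", "Farmers Market 1"),
    ("Farm C", "Farmers Market 1"),
    ("Farm A", "Restaurant 1"),
    ("Farm B", "Grocer 1"),
    ("Farm C", "Restaurant 1"),
]

in_degree = defaultdict(int)   # incoming ties: how many farms supply an outlet
out_degree = defaultdict(int)  # outgoing ties: how many outlets a farm supplies
for src, dst in edges:
    out_degree[src] += 1
    in_degree[dst] += 1

# The outlet receiving the most ties is a candidate hub/broker in the network.
hub = max(in_degree, key=in_degree.get)
```

Tools such as Gephi compute these (and richer measures like betweenness) natively; the point here is only that a high-degree node is the starting hypothesis for a coordinating or broker role.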
SNA has been used to understand social movements where the constellation of actors and organizations involved influences the outcomes (Andrews 2001; Andrews and Edwards 2004), changing how rapidly a movement can build alliances (Knoke 1990), share ideas and practices (Gerlach 1971), coordinate activities (Staggenborg 1998), legitimize political organization (Hadenius 2001), and prompt change (Andrews 2001; Andrews and Gaby 2015; Biggs and Andrews 2015). SNA can help scholars predict if local food systems are stable, growing, or shrinking. There is a common narrative among scholars and policy-makers that local food systems have been steadily growing (Low et al. 2015; Martinez et al. 2010). Acknowledging the rise of local food systems, the United States Department of Agriculture (USDA) began collecting direct marketing data for the agricultural census in 2002, finding a 32% increase in the percentage of direct-market sales from 2002 to 2007, and a 5.5% increase in the number of farms with DTC sales between 2007 and 2012 (Low et al. 2015). In 2012 nearly 8% of farms in the United States marketed foods locally, which the USDA defines as either direct-to-consumer (DTC) sales, such as farm stands, You-Pick operations, farmers' markets, or Community Supported Agriculture (CSA), or sales through intermediaries such as restaurants, grocery stores, schools, hospitals, or other businesses (Low et al. 2015; Martinez et al. 2010). Intermediated markets account for two-thirds of local food sales (USDA NASS 2017) and are slowly gaining more research attention (Dimitri et al. 2019). In addition, short supply chains can connect farmers to consumers through food donations or urban gardening, where food is shared but not sold (Vitiello et al. 2015). These relationships are not tracked by the agricultural census, but may be just as important to civic agriculture (Lyson 2012). On the other hand, some argue that local food networks are transient.
Small scale farms make up the majority of those participating in local food systems (Kirschenmann et al. 2008), with 85% of farms that sell in local markets earning less than $75,000 in gross cash income in 2012 (Low et al. 2015). These smaller-scale operations spend considerable time and effort in marketing, while also being under constant threat as they compete for marketing contracts against larger growers. Additionally, some researchers have emphasized the perils of farming on the edge of urban development (Hart 1990; Kirschenmann et al. 2008). Landowners located on the periphery of growing urban areas are often tempted to sell farmland for more lucrative housing development (Kirschenmann et al. 2008). As urban areas grow outward, land values rise, creating a peri-metropolitan "bow wave" of higher prices that also increases the cost of doing business by raising land values and taxes for farmers (Hart 1991; Martellozzo et al. 2015). Indeed, increased suburbanization has resulted in loss of prime agricultural land (Seto and Ramankutty 2016). For this reason, local food proponents often tie local food systems to attempts to rescue farmland from the avalanche of urban development. For example, non-profit farmland preservation groups spend up to $124,000 per acre to buy development rights and preserve land in agriculture (Brinkley 2012). Although many customers are willing to pay nearly double the price for locally-grown food products (Brown and Miller 2008; Darby et al. 2008; Feldmann and Hamm 2015), these trends do not necessarily translate into stable local food networks. As shown by an autopsy on 32 farmers' market closures in Oregon, even as new local food outlets arise, many fail within a few years of opening, in part due to "individualized, complex issues that are internal and/or external to the market" (Stephenson et al. 2008).
Although the agricultural census measures the total number of participating farms and the composition of marketing methods, little is known about how individual farms and markets connect to one another, and how those marketing connections change over time. Some scholars posit that the increased trust and personal relationships characteristic of local food systems creates enduring social ties (Starr et al. 2003; Chesbrough et al. 2014) based on "bonding" social capital (Putnam 2000) that would lead to long-term relationships and stable growth. In support, relationships that form through supply chain networks of local food systems exhibit transparency, a hallmark of trust (Hinrichs 2000, 2003). For instance, restaurants often promote their local suppliers as part of their routine advertising efforts, and diners build loyalty with the farms that grew the products they consume (Starr et al. 2003; Chesbrough et al. 2014; Brinkley 2017, 2018). This interpretation of local food systems would lead researchers to assume that local food system growth reported in the agricultural census is a result of the addition of new members to a stable and growing cohort. On the other hand, cumulative pressures on local food systems would indicate that while there may be overall local food system growth, actors and market channels may shift or die off at high rates, particularly at the urban edge. In such cases, the local food system would be made up of what Granovetter refers to as "weak ties" (Granovetter 1977, 1983), defined as loose affiliations that can nimbly innovate. Arguably, communities with "bridging" social capital (weak ties across groups) as well as "bonding" social capital ("strong ties" within groups) may be the most effective in organizing for collective action (Granovetter 1973; Putnam 2000).
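Once actors are assigned to groups, the bonding/bridging distinction above can be operationalized by tallying ties within versus across group boundaries. A toy sketch, with entirely hypothetical actors and group labels:

```python
# Hypothetical group membership for each actor in a two-community network.
group = {
    "Farm A": "rural", "Farm B": "rural",
    "Grocer 1": "urban", "Restaurant 1": "urban",
}

# Hypothetical undirected ties between actors.
edges = [
    ("Farm A", "Farm B"),        # within-group: bonding
    ("Farm A", "Grocer 1"),      # across groups: bridging
    ("Grocer 1", "Restaurant 1"),# within-group: bonding
]

bonding = sum(1 for u, v in edges if group[u] == group[v])
bridging = sum(1 for u, v in edges if group[u] != group[v])
```

The ratio of bridging to bonding ties gives a rough, first-pass indicator of whether a community's network leans toward loose cross-group affiliation or tight within-group cohesion.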
SNA can be used to visualize and quantify the spatiality and social clustering of relationships in the local food system as it changes over time, helping to make sense of underlying drivers and limits to local food system change and its affiliated social impacts. Broadly speaking, alternative food movements have been shifting priorities and increasingly incorporating concerns for food justice (Pothukuchi and Kaufman 1999;Hammer 2004;Wekerle 2004;Horst et al. 2017), but little is known about how these shifts prompt changes in the architecture of their constituent market networks. As activists conceptualize scaling up the political ambitions of alternative food movements (Blay-Palmer et al. 2016), SNA of network architecture and change over time can illustrate how to move toward a globally interlinked network of local food systems. Such changes may be complex, as social values differ across marketing pathways and from community-to-community, and they also shift over time. The longitudinal, comparative research that we present here offers a starting point for understanding where a network of local food systems builds into larger scale social movements. For example, Hinrichs (2000) theorized that CSA members have more rural-focused values (e.g., concerns for soil health and ecological sustainability) than consumers who shop at urban farmers' markets, thus shaping the social relationships formed within these market pathways. One might expect communities with more prominent CSA presences to have a greater focus on farmland protection and growing practices. In addition, local food systems have internal feedback loops; for example, O'Hara and Shideler (2018) found that increasing DTC food sales prompted increased sales at restaurants in metropolitan counties. Thus, a better understanding of the heterogeneity in market channels offers insights into which locally-oriented markets may grow in the future and how their growth may shift their political attention. 
To build toward the above, this research uses SNA to understand how local food system networks evolve. Scholars have only recently started to apply SNA to the study of food systems. Lucy Jarosz (2000) called for the combined use of network theory and supply chain analysis for regional food systems. Two decades later, Trivette (2019) utilized SNA on 687 farms and 702 retailers across a three-state region in New England to reveal the central role of grocery stores and restaurants in local food systems. In addition, Brinkley (2017, 2018) applied geo-social network analysis to understand the extent to which local food systems are socially and geographically embedded in the two study counties used in this research, finding evidence of the local food system's impact on land-use policies. Our research contributes to these pioneering methodological efforts and is the largest SNA of local food systems in scale, and the first to utilize longitudinal data to examine change over time. --- Methods --- Case selection This study focuses on the local food systems of Chester County, Pennsylvania and Baltimore County, Maryland, both of which are located in peri-urban areas of the northeastern United States, in close proximity to the large urban markets of Philadelphia, Baltimore, New York City, and Washington D.C. These counties have a long history of direct marketing and local food distribution channels (Brinkley 2017). The 2012 food network data was previously collected in both counties (Brinkley 2017, 2018), thus allowing for a novel, longitudinal approach to food systems network analysis. This research compares data collected in 2012, and again in 2018. Both counties show flux within their agricultural sectors, which makes them interesting cases for comparison.
Baltimore County saw an 8% increase in acreage of farmland within the county from 2012 to 2017. --- Data collection Social networks are composed of "nodes," which are the actors or members of the network, and "edges," which are the ties or relations linking the nodes. Data collection was limited to raw agricultural products, rather than processed food or inedible value-added products (Table 1). Nodes include the farm, as well as the location of its first point of sale or donations (Table 2). The basis of ties (edges) between actors is the distribution of food, both via sales and donations. Based on the USDA definition of local food, sales could be made directly to consumers via CSAs, farmers' markets, and you-pick operations, or to intermediaries, such as restaurants, distributors, grocery stores, food banks and institutions (Table 3). We focused on nodes and edges that are transparent, meaning that connections are publicly documented. Data were collected through the review of publicly available online information, including LocalHarvest.com, county documents, and the official websites and social media pages (including Facebook and Instagram) of farms, restaurants, farmers' markets, food banks, food pantries, and schools. Snowball sampling was then used to identify other actors and their relationships in the network. For example, the first node added to the 2018 Baltimore County dataset was a farmers' market. The farmers' market website listed all the vendors that sell at the market, thus enabling us to capture the second node in the dataset: a farm also located within the county. From node two's website, we were able to capture their extensive list of direct sales relationships, which included actors both inside and outside of the county.
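This snowball procedure amounts to a breadth-first expansion over each actor's publicly listed partners. A minimal sketch, with entirely hypothetical "scraped" listings standing in for the websites we reviewed:

```python
from collections import deque

# Hypothetical scraped listings: actor -> partners published on its website.
listings = {
    "Farmers Market 1": ["Farm A", "Farm B"],
    "Farm A": ["Restaurant 1", "Grocer 1"],
    "Farm B": ["Farmers Market 1"],
}

def snowball(seed):
    """Follow published ties outward from a seed actor, collecting nodes and edges."""
    seen, queue, edges = {seed}, deque([seed]), []
    while queue:
        node = queue.popleft()
        for partner in listings.get(node, []):
            edges.append((node, partner))   # record the tie
            if partner not in seen:         # enqueue newly discovered actors
                seen.add(partner)
                queue.append(partner)
    return seen, edges

nodes, ties = snowball("Farmers Market 1")
```

In practice each "lookup" was a manual web review rather than a dictionary access, but the expansion logic — follow every published tie, add any new actor to the frontier — is the same.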
We also logged attribute information for each node, including the name of the business, business address (recorded as latitude and longitude), identification number, an agricultural production typology code (Tables 4, 5), website address, contact information, and notes on how the node was found. Edges were coded based on the types of relationship they represented (e.g. wholesale, CSA, farm stand, donations). For instance, a relationship between a farm and a farmers' market was coded as "farmers' market" in the edge table. Table 6 in the Appendix shows the coding guide and relationship typologies captured. The boundary that we set for this study was spatially defined by the political delimitation of each county (Chester County, Pennsylvania and Baltimore County, Maryland). We only captured relationships that involved at least one actor located within the county. As a result, we also included farms outside of Chester and Baltimore Counties that distribute their product into the county (for instance, if a farm from another county sells raw products at a farmers' market within the county). Similarly, we also captured relationships between farms located within one of the study counties, and sales outlets located outside of their respective county. However, we only captured instances in which the products would be distributed via ground transportation. --- Data preparation For both counties, the 2012 and 2018 data were merged into a single dataset using an R script. Edges and nodes were then individually coded based on whether they were unique to the 2012 dataset, unique to the 2018 dataset, or present in both datasets. In 2018, we cross-checked the nodes in each dataset to find establishments that appeared to have closed since 2012. Closures were denoted in our datasets. --- Social network analysis and visualization The SNA software package Gephi was used to visualize the network graph and run descriptive statistics on the network data.
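The edge-typing and wave-coding steps described under Data preparation (which we implemented in an R script) can be sketched as set operations over typed ties. The names and ties below are illustrative only:

```python
# Each tie is (source, destination, relationship type); all names hypothetical.
edges_2012 = {
    ("Farm A", "Farmers Market 1", "farmers_market"),
    ("Farm A", "Farmers Market 1", "csa"),       # parallel tie, different type
    ("Farm B", "Restaurant 1", "wholesale"),
}
edges_2018 = {
    ("Farm A", "Farmers Market 1", "farmers_market"),
    ("Farm C", "Grocer 1", "wholesale"),
}

# Code each tie by whether it is unique to 2012, unique to 2018, or in both waves.
coded = {
    "2012_only": edges_2012 - edges_2018,
    "2018_only": edges_2018 - edges_2012,
    "both": edges_2012 & edges_2018,
}
```

Including the relationship type in the tuple preserves parallel ties of different kinds between the same pair of actors, which matters later when distinguishing, for example, farmers' market sales from CSA pickups at the same location.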
The network was visualized using the force-directed Fruchterman Reingold projection, which places nodes connected by an edge in relatively close proximity with one another (Fruchterman and Reingold 1991). The force-directed, multilevel Yifan Hu projection was also used. This projection uses coarsening and clustering to simplify the output graph (Hu 2005). Finally, we also used Gephi's Geo-Layout plugin, which allows for the integration of geospatial analytics, in order to visualize the spatiality of the network. Visualization in the exploratory stage of the analysis allowed us to identify apparent hubs in the network, which are nodes that have high in-degree (incoming) or out-degree (outgoing) connections across the network. We identified intermediaries and hubs by running statistics on degrees of centrality and clustering coefficients. We performed descriptive statistics for changes in the numbers of nodes and edges between 2012 and 2018, as well as changes in the distribution of types of sales outlets. Using online business profiles on Yelp.com, Google, business websites, and social media, we manually calculated the percentage of establishments that appear to have closed since 2012, and we used Gephi to calculate the proportion of connections that have been lost due to these business closures. --- Limitations Because data was manually scraped from the web, the network data is limited by how up-to-date and extensive the various actors' publicly available information is. This is also a challenge faced by previous studies that have applied SNA to local food systems (Trivette 2019; Brinkley 2017, 2018). Although there is an economic incentive to keep distribution channels up-to-date for all of the actors involved, we know that not all of this data is an accurate reflection of the network. For example, many producers still listed restaurants that had recently closed on their list of distribution partners.
Second, data on closures in the network are likely incomplete. Business profiles on Yelp.com and Google report which restaurants and grocery stores have closed, likely because those types of locations are often visited by the general public. However, because not all farms, farmers' markets, and small vendors maintain a robust public-facing web presence, it is often difficult to tell if they are still in operation. Third, in addition to utilizing manual web scraping, the 2012 datasets were supplemented with online surveys (Brinkley 2017, 2018), which accounted for 195 nodes and 210 edges, with 90% of these in the "Other" category for node type (Table 2). Surveys were not used to augment the 2018 data set. Arguably, therefore, the 2012 dataset includes more comprehensive information on the local food network. As a result, comparisons of the 2012 and 2018 datasets become less accurate, particularly in terms of magnitude. At the same time, however, smartphone ownership has skyrocketed from 35% in 2011 to 81% in 2019 (Pew 2019), and the prevalence of online marketing has likely increased in tandem, thus arguably making online marketing a more robust data source in 2018 when compared to 2012. Last, the data provided in this research omits numerous actors in the local food system, most notably consumers. Consumers play a large role in driving and (re)orienting the food system, local and otherwise. --- Results SNA is a powerful tool in quantitative analysis. Social networks are composed of nodes-which are the actors, or members, of the network-and edges-which are the ties, or relations, linking the nodes in the network. Nodes may have one or more relation, and types of relations, with each other (Marin and Wellman 2011). For example, a farm might sell produce to consumers at a farmers' market. However, the same farm might also utilize their booth at the same farmers' market as a CSA pickup site.
As such, there would be two edge connections between the farmers' market and the farm: one denoting DTC sales via farmers' market sales unrelated to the CSA, and another denoting DTC sales through a CSA-based relationship. This distinction is important because, as Hinrichs (2000) notes, CSAs and farmers' markets offer differently embedded social relationships. Although farmers' markets enable face-to-face interactions between farmers and consumers, they are not necessarily developing longer-term continuous relationships (Hinrichs 2000). On the other hand, the CSA model can foster greater trust and value-driven relationships between customers, who buy shares for the growing season, and CSA farmers, who are commonly motivated by non-economic factors and set share prices that are not exclusively profit-driven (Galt 2013). Such relationships may have different staying power over time, or allow for different evolutions across the network as farms transition from one form of marketing to another. We are able to explore both relationships over time using SNA. --- Growth and death To start, we provide a descriptive comparison of both counties and the proportion of network actors and ties, then we explore change over time and network architecture. Although Chester County has a larger local food system network, both in terms of nodes and edges, the overall local food network of Chester County is shrinking, while the local food network of Baltimore County is growing (Table 1). During the 6-year study period, Baltimore County saw the addition of 284 new nodes and 495 new edges in the network. During the same time period, Chester County saw the addition of 360 new nodes, and 684 new edges, but lost 393 nodes and 738 edges (Table 1). One possible explanation is that local food systems may reach a point beyond which added growth is very difficult, due to plateauing consumer interest (Low et al. 2015) or market saturation. 
However, when delineated by category (Table 2), all sectors within the Chester County local food system are growing. The one exception is the "Other" category which is primarily comprised of sales and donations to institutions and civic organizations. This category relied more heavily on 2012 survey data to uncover the many farm-to-food bank donations across Chester County. Such donations are not as readily advertised on farm websites and may therefore lead to under-counting in the 2018 dataset. This finding points to nuances in how local food system growth is tabulated both in research, such as this, and by the agricultural census, where categories are broad and may overlook central connections like that of the Chester County Food Bank. Both networks show substantial change from 2012 to 2018, with a relatively high rate of turnover of actors within the network (Table 2). When examined by node or edge category, both counties show nearly equal rates of growth and death in network actors (nodes) and their marketing relationships (edges). Despite growth in many categories, more than half of the participants in the local food system changed over the 6-year period, with only 40% of Baltimore County's 2012 nodes found in the 2018 data, and only 35% of Chester County's 2012 nodes found in the 2018 data. More telling, the connections across the network changed even more than the actors themselves, with only 18% of edges staying the same across both 2012 and 2018 in both counties. The fluctuation in edges indicates that, while actors may be stable, their relationships with one another evolve. The rates of endurance by category varied. In the Chester County dataset, the following nodes endured: 91 farms, 23 schools involved in farm-to-school and food bank connections, 18 farmers' markets, 18 grocery stores, 15 restaurants, 11 churches involved in food bank gardening and distribution, and 3 food banks. 
These locations accounted for 85% of the actors that endured from 2012 to 2018. The rest of the actors were CSA drop-off locations, community gardens, and food hubs. By comparison, the Baltimore County dataset showed 37 farms, 30 restaurants, 20 grocery stores, and 14 farmers' markets active in the network in both 2012 and 2018. These actors made up 87% of the actors that endured within the dataset. The remaining enduring actors include CSA drop-off locations, two schools, two catering companies, and two churches. Generalizations across categories are shown in Table 2. In 2012, the Chester County "Other" node category included 80 civic organizations (e.g., schools, churches, and retirement communities), many with gardens that donated food to other civic organizations. These gardens largely catered to schools or the Chester County Food Bank. The Chester 2012 data in the "Other" category also included 88 CSA drop-off locations. While the number of restaurants, farmers' markets, farms, and grocers increased over the 6-year period, the miscellaneous category decreased, with a decrease in both civic organizations and CSA drop-off locations (Tables 1 and 2). This change is likely because the number of gardens associated with the food bank and other civic organizations were not as readily found online in 2018. Similarly, the 2018 Baltimore County "Other" category included 15 churches and 3 food banks. Importantly, the "Other" category is larger than any other category across both counties. This indicates the variety of actors beyond farms, farmers' markets, restaurants and grocers, which are currently the main focus of much of local food systems research. The "Other" category also captures new marketing typologies that may tap into other socio-political movements. For example, the 2018 Baltimore County dataset included a recently legalized cannabis shop, which purchases infused honey from a local beekeeper.
Although the cannabis shop typology was collapsed into the "Other" category for our analysis, this represents a new aspect to local food systems that warrants further investigation, particularly as hemp-derivatives become more common in other local food spaces, such as farmers' markets, and as local food systems spread into new spaces with their own divergent or intersectional political objectives. Separating network actors into categories allows us to explore further properties of local food system stability. For example, farmers' markets were the most stable nodes within the network across both counties. This may be because farmers' markets generally have an explicit goal of providing business opportunities for local food producers, thus making them a relatively stable outlet for local food system sales. More than half (55%) of the farmers' markets stayed open in Chester County through the 6-year study period, and nearly half of them (47%) stayed open in Baltimore County. This finding supports USDA agricultural census information, noting that over seven years (2009-2016), the number of farmers' markets increased by 270% (3 to 11) in Chester County and by 40% (12 to 17) in Baltimore County (USDA Food Environment Atlas). However, our data also show high rates of turnover, with over 40% of the 2012 farmers' markets no longer in operation by 2018. This flux over the course of a 6-year period indicates a certain degree of market instability, as well as rapid evolution in how consumers interact within an ever-changing local food system. Across both counties, grocers also appeared to be relatively stable actors in the local food system, with a little less than half (45% and 47% in each county) of the 2012 grocers remaining in the 2018 local food network (Table 2).
Because grocers are important intermediaries that are often central to local food networks (Trivette 2019; Brinkley 2017, 2018), their relative stability in the network offers promise for long-term stability and growth in local food systems. The two counties in this study differ in terms of the growth of this food system actor, with grocers making up the largest growth (53%) in the actor category for Chester County, but not Baltimore County (12%) (Table 2). Baltimore County's local food system is comparatively more reliant on restaurants. This might explain the greater growth in the restaurant category, with the addition of 84 new restaurants between 2012 and 2018. Although 30 restaurants remained in the Baltimore County local food network throughout the course of the study, a nearly equal number of restaurants (35) also dropped out of the network between 2012 and 2018 (Table 2). The restaurant category had higher turnover in both counties when compared to grocers. Our data indicate that, unlike restaurants, farms have greater staying power. They are also increasingly joining the local food system in both study counties. Although the USDA agricultural census noted a 30% decrease in the number of farms (128 to 91 farms) that sell through direct-market channels from 2007 to 2012 in Baltimore County (USDA, Food Environment Atlas nd), our data shows an 11% increase in the number of farms in the local food system (Table 2). Similarly, the USDA agricultural census notes a modest 4% increase in farms that sell through direct-market channels in Chester County (from 735 to 782) throughout 2007-2012; our research indicates that this county saw a 25% increase in the number of farms involved in the local food system (Table 2). The differences in figures could be because our data also capture farms that sell through intermediate markets. Intermediate markets account for two-thirds of local sales (USDA NASS 2017).
Further, the offset in years between the USDA agricultural census data collection and this study may also explain the difference in figures. Also of note, the Baltimore dataset appears to capture a more representative sample of direct-market farms compared to the census, while the Chester County dataset captures about 30% of direct-market farms compared to the USDA agricultural census. This may partly be because Chester County has a large portion of Amish farms that may take part in the agricultural census, but may not have an online presence as a result of religious restrictions on technology use. Due to the nature of online data collection methodology employed in this study, we were not able to verify these Amish farms and, as a result, we could not access their marketing connections. Confirmed business closures between 2012 and 2018 provide supporting evidence for the broad categorical trends above. Importantly, closure is distinct from actors simply dropping out of the network, as closure implies a complete and indefinite severing of network ties. Uniquely, SNA allows us to assess the disproportionate impact that the loss of specific actors can have on a network. Restaurants made up 60% of the 36 confirmed closures in Baltimore County. The second highest category of closures were farms, which represented an additional 16% of total closures. Similarly, half of the Chester network's nineteen confirmed closures were restaurants (Table 1). Additionally, four grocery stores, three farms, two farmers' markets, and one CSA distribution location closed, thus removing them from the 2018 network. If a local food system is more dependent on restaurants, the flux within the network could be greater, as is the case in Baltimore County. The Baltimore County dataset shows a greater loss of nodes in terms of confirmed closures, with 12% of the nodes from 2012 having closed by 2018. 
This resulted in a 20% loss of edge connections, as compared to a 3% loss rate for nodes and edge connections in the Chester County dataset. Restaurants have a median lifespan of 4.5 years (Luo and Stark 2014), and other network actors may have a longer business lifespan, thus translating to increased stability within the network. Many restaurants that close see the owners or chefs establish new eateries shortly thereafter. Future research could track such transitions to see if relationships are re-established with the same farmers and distributors as new spaces open up, or if restaurants that source locally have different survival rates than their non-locally sourcing counterparts. Another possible explanation is that local food systems may need to achieve critical mass in order to compete with larger-scale food supply chains. It is possible that Chester County's large local food system has less flux compared to the still-growing local food system of Baltimore County. Another way to view the confirmed closures is that each actor is a unique contributor to the local food system. The confirmed closure of 36 actors in the Baltimore County network had a disproportionate impact on edge connections, resulting in 125 lost relationships. Conversely, while Chester County also saw the closure of a few actors (19), those closures only resulted in the loss of 30 edge connections. In Baltimore County, the closure of five actors in particular resulted in a substantial loss of edges. These actors included the following restaurants and farms: Simmer Rock Farm, Atwater's Ploughboy Kitchen, Big City Farm, Woodhall Wine Cellars, and Clementine Restaurant. Simmer Rock Farm opened in 2010 and closed by 2013, resulting in the loss of 25 connections, including three farmers' market sales locations, 15 restaurants that carried their food, one grocery store, and a CSA. The restaurant Atwater's Ploughboy Kitchen also closed, resulting in the loss of 37 connections.
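Closures like these erase many ties at once: in an undirected market network, removing a node costs exactly its degree in edges. A minimal sketch, with an invented hub standing in for a well-connected actor (the `close_actor` helper and all names are hypothetical):

```python
# Sketch: closing one actor removes deg(actor) ties in an undirected network.
# The hub and its partners are invented, not actors from the study.
def close_actor(adj, node):
    """Delete `node` from an adjacency-set graph; return the number of edges lost."""
    lost = len(adj[node])
    for neighbor in adj.pop(node):
        adj[neighbor].discard(node)
    return lost

adj = {
    "Hub Farm": {"Mkt 1", "Mkt 2", "Cafe 1", "Cafe 2", "CSA"},
    "Mkt 1": {"Hub Farm"}, "Mkt 2": {"Hub Farm"},
    "Cafe 1": {"Hub Farm"}, "Cafe 2": {"Hub Farm"}, "CSA": {"Hub Farm"},
}
lost = close_actor(adj, "Hub Farm")
print(lost)  # 5 relationships disappear with one closure
```

This is why a handful of closures can account for most of a network's lost edges: edge loss scales with the degree of the closing actor, not with the count of closures.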
Big City Farm was a collection of urban farmers; its closure resulted in the loss of 14 connections, and the closures of Woodhall Wine Cellars and Clementine Restaurant each resulted in the loss of seven connections. Collectively, these account for 72% of the connections lost to closures within the network, pointing to the significant impact that a few actors can have on local food system dynamics. --- Visualization of network architecture To understand whether markets are growing outward socially or whether new members are incorporated at the heart of the network, we use SNA visualization to show how the web of market ties has changed over time. When visualized socially, with the most connected actors at the center of the network, Chester County's local food system shows growth and decay concentrated along the network's outer margins, though growth and death within the network are widespread (Figs. 1 and 2). In contrast, Baltimore County shows significant network decay amongst actors that were central to the network in 2012, with growth occurring on the network's periphery (Figs. 3 and 4). Broadly, such patterns may be the hallmarks of a larger, more established local food system in Chester County evolving at the margins, with stable central network actors maintaining the core relationships and network architecture. Conversely, Baltimore County appears to be reinventing itself, with high turnover in actors that were once central to the network. Basic network statistics help reinforce the findings from the visualizations, while telling a more nuanced story about the evolution of the local food systems in both counties (Table 3). To quantify how connected the local food system is, we use the average degree statistic, which indicates the average number of actors to which each node is tied.
Chester County had a stable average degree measure between 2012 and 2018, while the average degree of Baltimore County declined substantially from 2.023 to 1.37, meaning that actors within the local food system had fewer connections on average in 2018 than they did in 2012. The clustering coefficient indicates the degree to which the neighbors of a node are connected. A coefficient of 1 would indicate that all neighbors are connected to each other, while a coefficient of 0 would indicate that none of a node's connections have mutual ties. While the average clustering coefficient for Chester County remained stable at 0.0023 between 2012 and 2018, the clustering coefficient for Baltimore County dropped from 0.032 to 0.023. In sum, Baltimore's network became sparser and more porous due to the many confirmed closures, mentioned above, that were central to the network architecture (Figs. 3 and 4). As central actors dropped out of Baltimore County's local food system (Figs. 3 and 4), newer actors grew at the network's fringe. However, this growth was not fast enough to reestablish the same level of connectivity across the network. To understand how information might travel across the network, we use network diameter, which indicates the maximum distance between any two nodes within the network. The network diameter shrank for both networks, indicating that the overall local food system became more close-knit (Table 3), potentially enabling information to travel across market ties more quickly. Similarly, the average path length for both networks also declined. The average path length indicates the average number of steps needed to get from one actor in the network to another and is often used to gauge how quickly information can travel across a network. Declines in network diameter and average path length indicate the development of a more tightly integrated and consolidated local food system. Had the network split, the path across would have become disconnected or very long.
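The two connectivity statistics used above, average degree and the clustering coefficient, can be computed directly from an adjacency-set representation. A toy sketch on invented data, not the study's networks:

```python
# Toy computation of average degree (2E/N) and the average clustering
# coefficient on an adjacency-set graph; the four-node graph is illustrative.
from itertools import combinations

adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}

def average_degree(adj):
    # Each undirected edge contributes to two adjacency sets, so this is 2E/N.
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def clustering(adj, v):
    """Fraction of v's neighbor pairs that are themselves connected."""
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    linked = sum(1 for x, y in combinations(nbrs, 2) if y in adj[x])
    return linked / (len(nbrs) * (len(nbrs) - 1) / 2)

avg_degree = average_degree(adj)
avg_clustering = sum(clustering(adj, v) for v in adj) / len(adj)
print(avg_degree, avg_clustering)
```

On the toy graph, the triangle a-b-c gives those three nodes nonzero clustering, while the pendant node d contributes zero; a declining average, as in Baltimore County, means fewer such closed triangles survive.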
Such splits can occur when social or market networks fracture, but this was not the case in either county.
Finally, graph density shows the total number of edges within the network relative to the possible number of edges within a network. In other words, if every node within a network were connected to every other node in the network, the density value would be 1, while if no nodes were connected to each other the density value would be 0. Both networks saw graph density decline between 2012 and 2018. As both local food systems are maturing, they are consolidating and reducing the redundancy in connections. --- Centrality of actors The perseverance of actors and ties across both years could be interpreted as strong ties among actors, while new connections and nodes may represent innovation and "weak ties."
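The distance-based statistics discussed above (network diameter, average path length) and graph density can all be derived from breadth-first search over an adjacency structure. A sketch on an illustrative path graph, not the study's data:

```python
# Toy computation of network diameter, average path length, and graph
# density via breadth-first search; the four-node path graph is illustrative.
from collections import deque

adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}

def bfs_dist(adj, s):
    """Shortest-path distances from s to every reachable node."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

pair_dists = [d for s in adj for v, d in bfs_dist(adj, s).items() if v != s]
n = len(adj)
n_edges = sum(len(nbrs) for nbrs in adj.values()) // 2

diameter = max(pair_dists)                     # longest shortest path
avg_path = sum(pair_dists) / len(pair_dists)   # mean steps between actors
density = 2 * n_edges / (n * (n - 1))          # realized share of possible ties
print(diameter, avg_path, density)
```

A shrinking diameter and path length with falling density, the pattern reported in Table 3, means the surviving ties route information more directly even as the network sheds redundant connections.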
Between 2012 and 2018 the actors most central to both networks cultivated new sales and market channel relationships, both with actors that were new to the network and with enduring actors to whom they were not previously connected. This finding indicates innovation among both enduring and new network actors. Collectively, the above statistics demonstrate that the total makeup of the network is in considerable flux. Additionally, the data indicate that the centrality of actors is changing. Betweenness centrality indicates the extent to which a node acts as a bridge between two other nodes. As such, high betweenness centrality can suggest a node's substantial power within a network, as it may serve as a broker between other actors. In Baltimore County, only one node (Springfield Farm) ranked in the top ten for betweenness centrality in both 2012 and 2018. Similarly, within the Chester County dataset, only one node (the Chester County Food Bank) ranked in the top ten for betweenness centrality across both years. Previous research has demonstrated the role that these specific actors have played in brokering new partnerships across the food system and influencing land-use policy (Brinkley 2017, 2018). The turnover of other actors central to the network was an unexpected finding, showing deep changes within the local food system as the constellation of people and organizations changed. These changes likely translate to shifts in the sphere of influence of these actors as well. Scholarly literature has portrayed growing local food systems as creating enduring, embedded ties while also having high turnover. While these claims appear paradoxical, this research helps show why such assertions may be simultaneously true.
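Betweenness centrality, used above to flag brokers such as the food bank, counts the fraction of shortest paths that pass through a node. A minimal sketch (unnormalized and counted over ordered source-target pairs; the hub-and-spokes data are invented to mimic a broker):

```python
# Minimal betweenness centrality from BFS shortest-path counts.
# Unnormalized, over ordered (s, t) pairs; data are illustrative.
from collections import deque, defaultdict

def sp_counts(adj, s):
    """BFS from s: shortest-path distances and numbers of shortest paths."""
    dist, sigma = {s: 0}, defaultdict(int)
    sigma[s] = 1
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(adj):
    info = {s: sp_counts(adj, s) for s in adj}
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        ds, sigma_s = info[s]
        for t in adj:
            if t == s or t not in ds:
                continue
            dt, sigma_t = info[t]
            for v in adj:
                # v lies on a shortest s-t path iff the distances add up exactly
                if v not in (s, t) and v in ds and v in dt \
                        and ds[v] + dt[v] == ds[t]:
                    bc[v] += sigma_s[v] * sigma_t[v] / sigma_s[t]
    return bc

adj = {"food bank": {"farm", "market", "school"},
       "farm": {"food bank"}, "market": {"food bank"}, "school": {"food bank"}}
bc = betweenness(adj)
print(max(bc, key=bc.get))  # the broker tops the ranking
```

Every path between the spokes runs through the hub, so the hub collects all of the betweenness; in the study's networks, the same logic is what singles out Springfield Farm and the Chester County Food Bank.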
The persistence of high-centrality nodes, like the Chester County Food Bank and Springfield Farm, and the strength of their ties across the local food system may be especially important in an ever-changing network that is dominated by weak ties. Such weak ties foster innovation (Granovetter 1977, 1983) as new forms of market channels and associated socio-political alliances are formed across the local food system. --- Network spatiality Last, spatial trends related to network change over time help build on earlier research that considers the growth of local food systems as a response to the bow wave of urban development (Hart 1990; Zasada 2011; Brinkley 2012). The Chester County dataset shows growth of the local food network in the northern parts of the county (Fig. 5), and a simultaneous loss of food system actors in the southern portions of the county. Actor loss was clustered close to the City of Philadelphia. In Baltimore County (Fig. 6), network actors that were present across both years of the dataset were engaged in forming new edges and maintaining old connections. Similar to Chester County, actor loss was clustered in the southern portion of Baltimore County, which is closest to the City of Baltimore. Growth within the network was clustered to the north, which corresponds with Baltimore County's more rural areas. In both counties, the local food system experienced actor loss closer to urban areas, and new growth further from cities in peri-urban and rural areas. It is important to note that actors are not only farms, but also other nodes, such as farmers' markets. This finding suggests that there may be spatial boundaries to the ideological objectives of the local food movement. As farms are pushed further away from urban areas, the distances to urban markets may become too far to traverse.
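One simple way to quantify the spatial shift described above is a network "center of gravity": the mean position of all actors, optionally weighted by their number of ties, compared across years. The coordinates below are invented lat/lon pairs with the urban core to the south; `center_of_gravity` is a hypothetical helper, not the study's method:

```python
# Sketch of a network center of gravity: the (optionally weighted) mean
# position of actors. Coordinates and names are illustrative only.
def center_of_gravity(coords, weights=None):
    w = weights or {n: 1 for n in coords}
    total = sum(w.values())
    lat = sum(w[n] * coords[n][0] for n in coords) / total
    lon = sum(w[n] * coords[n][1] for n in coords) / total
    return lat, lon

coords_2012 = {"Farm A": (39.5, -76.6), "Cafe B": (39.3, -76.6)}
coords_2018 = {"Farm A": (39.5, -76.6), "Farm C": (39.7, -76.7)}

cog_2012 = center_of_gravity(coords_2012)
cog_2018 = center_of_gravity(coords_2018)
print(cog_2012, cog_2018)  # latitude rises as rural northern actors join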
At the same time, suburban growth may also stretch the social distance between urbanites and rural dwellers, placing the many shared objectives of the local food movement further from people's reach, both physically and mentally. While the counties have many differences, the similarities across both datasets may point to larger regional or national trends in local food marketing. We show that farms are joining the local food movement. This change is not captured in the USDA agricultural census for either county, though it is noted nationally. The number of farms with direct-to-consumer (DTC) sales increased by 5.5% from 2007 to 2012, but with no increase in DTC sales (Low et al. 2015), and the number of farms with DTC sales then declined in 2017 (O'Hara and Benson 2019). Like the USDA agricultural census, we found that the most common way of selling local food was through intermediate markets, and that online marketing appeared to be on the rise. Marketing pathways are rapidly changing. In addition, both networks are consolidating and becoming more tight-knit. Such change would indicate that these local food systems are made up of weak ties, enabling rapid innovation, with ever-decreasing distances from one side of the network to the other. As a result, news travels faster. The network architecture of these two cases reveals that, despite these weak ties, both counties have a stable central actor that maintains the core identity of the county through political engagement with land-use policy and planning. These network findings help make sense of seemingly conflicting accounts: that local food systems struggle and are growing; that they innovate and are historic (Pretty 1990; Vitiello and Brinkley 2014); and, last, that they are dominated in numbers by weak ties yet anchored by central actors with strong bonds. --- Discussion and conclusion This research challenges common narratives about local food systems.
The substantial flux captured across both food systems has not been anticipated in past literature, which often frames local food systems in terms of stable growth but overlooks their simultaneous decay. We found that the local food systems in both northeastern counties reinvented themselves by half and rewired nearly 80% of their connections within six years (Table 1). Identifying drivers of growth, stability, and decay is important for generalizing findings further. While past literature acknowledged that local food systems are multifaceted (Born and Purcell 2006), complex, and adaptive (Nelson and Stroink 2014; Blay-Palmer et al. 2016), the extent and timescale of their evolution generate new questions about how rapidly the social movements they represent shift socio-political focus, and their constituents along with them. There is evidence of these shifts at the national scale. For example, the rise of food justice movements highlights the lack of access to land ownership and markets for farmers of color. As these movements continue to gain momentum, task forces made up of growers and market managers of color are producing policy platforms. Soul Fire Farm in New York and the Northeast Farmers of Color Alliance put forth a 'Food Sovereignty Proposal' (Soul Fire Farm and Northeast Farmers of Color Alliance 2018), which was acknowledged in Elizabeth Warren's 2020 national presidential campaign. SNA, in combination with qualitative research, could highlight where and how "Buy Black" campaigns (Hinrichs and Allen 2008) or boycotts of certain stores change marketing networks and their embedded power structures. Similarly, SNA in combination with spatial regression analysis can trace whether local food is increasingly moving to whiter, more affluent block groups and where it interfaces with lower-income communities and majority-minority block groups. Our research suggests that forming a "network of networks" (Levkoe 2014; Blay-Palmer et al.
2016) to scale up the political ambitions of broader food movements may prove especially challenging given the high flux and heterogeneity at the local level, but such an effort could happen rapidly given how local food networks are already reorganizing. To this end, social movement scholars note that the impact of a social movement on political change is understudied (Burstein et al. 1995) and that outcomes over time must be measured against shifts in network composition, political focus, and tactics (Andrews 2001). As this research reveals, the very social architecture of local food systems is shifting. One would expect the political objectives to change as well. The decay of the network, particularly at the heart of the local food system in Baltimore County, prompts further considerations. How much can a "social network" change and still endure? The answer depends partly on how rapidly the network replenishes its ties and actors, and how adept it is at recruiting. Our research suggests that a complete disruption in recruitment into the local food system could see the food system itself cease to exist within a 12-year time frame if it followed a linear pattern. There may be cascading events in which closures create ripple effects and network disruption occurs more quickly than expected. Based on the architecture, we suspect a long-tailed distribution of network ties, which would indicate that growth and death are exponential, not linear. Such considerations are important to understanding how local laws restrict the ability of new local food systems to grow, endure, and thrive. For example, cities limit permits for new farmers' markets (Brinkley 2017), and nations direct agricultural subsidies in a manner often counter to local food systems (Randall 2002; Marsden and Sonnino 2008). Framed another way, with more supportive policies, our research gives clues to how quickly a local food system might blossom.
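The 12-year back-of-the-envelope projection above contrasts linear with exponential decay. A sketch, assuming the observed pattern of roughly half the actors turning over every six years and an invented starting size:

```python
# Back-of-the-envelope projection: losing half of the ORIGINAL actors every
# six years (linear) zeroes out in 12 years; halving WHAT REMAINS each
# period (exponential) leaves a long tail. Starting size is illustrative.
def project(n0, years, step=6, mode="linear"):
    n = n0
    for _ in range(0, years, step):
        n = n - n0 / 2 if mode == "linear" else n / 2
    return max(n, 0)

print(project(100, 12, mode="linear"))       # the system disappears
print(project(100, 12, mode="exponential"))  # a long tail remains
```

The long-tailed degree distribution suspected above would favor the exponential picture, with the caveat from the text that cascading closures could accelerate decay beyond either curve.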
There are ample examples from the organizational literature of how agricultural policies create new marketing networks, allowing, for example, the rapid agricultural transformation in Cuba (Messina 1999). If network growth builds outward socially from a stable core, as it has in Chester County, non-linear, exponential growth can be expected. Shifts in network alliances are of particular concern in understanding how communities regulate land use. Spatial findings help reinforce research that considers the rise of local food as a response to a wave of urbanization (Brinkley 2012). Further, the "eat local" political focus of local food systems, particularly around county-level land-use policies (Brinkley 2018), suggests that as the system rewires, it may reactively form new alliances in anticipation of major planning efforts. Both Chester and Baltimore Counties showed network growth in more rural areas and network decay closer to the urban centers. These findings lend support to John Hart's concept of a perimetropolitan bow wave, in which metropolitan areas steadily encroach upon, and eventually engulf, adjacent peri-urban farmland (1991). Even prior to engulfment, encroachment has implications for farming operations: as the bow wave approaches and land values rise, farmers often shift their production and market channels (Zasada 2011). Our findings demonstrate where constituents are turning to local food systems as an antidote. During this study period, the housing market was steadily recovering from the 2008 recession. The shift of local food systems further from urban areas may differ under different housing markets or economic recessions, a topic for future research on just how reactive or protective the local food movement may be in slowing suburbanization.
The spatial aspects of network decay also indicate that land-use patterns that keep rural and urban land uses in close proximity may help foster greater network ties and stability across the network. In turn, such market connections should reinforce rural-urban social relationships that produce mutual understanding and a shared political agenda. The use of SNA uniquely highlights the disproportionate impact that a few organizations or individuals can exert on total network stability. The Chester County Food Bank's role in promoting new farms and markets while connecting them to civic society (Brinkley 2017) undoubtedly contributes to its own stability and centrality in the network, but also to the broader objectives of the local food movement in Chester County to preserve farmland and provide food security. This study was conducted during a period of relatively low unemployment, but economic recession will add pressure on food banks to mobilize food and volunteers and to serve more people. Chester County's food bank is well positioned (centrally, even) to mobilize the local food system for such a daunting task. Other food banks nationally are also interfacing with local food movements (Vitiello et al. 2015). Such findings highlight the ties between local food and food security, and open new avenues of research into how food banks both sustain the local food movement's transactional markets and interface with its political objectives. Broader trends within marketing categories offer further timely generalizations for how to sustain local food systems during times of crisis. Many states have banned restaurant dining during the COVID-19 pandemic, and quarantine protocols have placed considerable economic pressure on small businesses. Half of small businesses have enough cash to survive for 27 days without new revenue; restaurants have 16 buffer days on average (Farrell and Wheat 2016).
Local food systems with larger percentages of restaurants and greater dependence on restaurants for network growth, like that of Baltimore County (Table 1), will likely be dealt larger blows than counties that are less reliant on restaurants. Widespread restaurant closures may have ripple effects across the local food movement, impacting collective action and mobility on a variety of topics ranging from food justice policies to land-use planning. While turnover in the restaurant business is well documented, with a median restaurant lifespan of 4.5 years (Luo and Stark 2014), this research raises questions about the median lifespan of other businesses, such as CSA farms and farmers' markets, and the impact of market outlet closure on small-scale farms. SNA also demonstrates that the closure of just a few nodes can substantially alter network connectivity, be those restaurants or other node typologies. Such findings help reinforce the notion that collective action in the food movement is dependent on many forms of food sales and donations. Using a longitudinal SNA approach to compare the evolution of two local food systems opens the door to a number of future studies. These data raise questions about which methods of direct marketing are most vulnerable to disappearance and change. Chester County, Pennsylvania saw a significant reduction in the number of CSA connections in the network. Are CSAs used as stepping stones toward other forms of direct and indirect sales relationships? Online and platform-based marketing introduce new questions about embeddedness as the local food system moves from face-to-face interaction to a virtual "know your farmer" experience. Will these new forms of embeddedness affect the endurance or loyalty of network actors, and differently influence civic engagement? The collection of qualitative data through interviews and surveys could add additional detail to these findings.
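The restaurant-dependence scenario discussed above can be sketched as a categorical shock: drop every restaurant node and measure the share of market ties lost. All types, names, and ties below are illustrative, not the study's data:

```python
# Sketch of a categorical shock: remove every node of one type and count
# the share of ties that disappear. Data are invented for illustration.
def edge_count(adj):
    return sum(len(nbrs) for nbrs in adj.values()) // 2

def remove_category(adj, types, category):
    """Drop all nodes of `category`, plus every tie touching them."""
    return {n: {w for w in nbrs if types[w] != category}
            for n, nbrs in adj.items() if types[n] != category}

types = {"Farm 1": "farm", "Farm 2": "farm", "Cafe 1": "restaurant",
         "Cafe 2": "restaurant", "Grocer": "grocer"}
adj = {"Farm 1": {"Cafe 1", "Cafe 2", "Grocer"}, "Farm 2": {"Cafe 1"},
       "Cafe 1": {"Farm 1", "Farm 2"}, "Cafe 2": {"Farm 1"},
       "Grocer": {"Farm 1"}}

before = edge_count(adj)
after = edge_count(remove_category(adj, types, "restaurant"))
print((before - after) / before)  # share of ties lost to the shock
```

In this toy network, the shock also leaves Farm 2 isolated, illustrating how restaurant closures can strand the small farms that sold to them.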
Indeed, this research does not cover changes in consumer ties to markets, which would presumably influence staying power. Consumer ties likely have important impacts on overall network architecture, as well as associated local policy objectives and outcomes. Future studies may replicate findings and move the literature toward a typology of local food systems. Some, like Chester County, may be relatively stable, with the addition of new network members and connections on the periphery of the network (Figs. 1 and 2). Others, like Baltimore, could be reinventing themselves at their very core (Figs. 3 and 4). Understanding how such changes in network architecture broadly correlate to shifts in policy objectives will yield new insights into how a network of local food networks could be scaled up globally, currently a theoretical concept for broad social change (Blay-Palmer et al. 2016).
--- Appendix
See Tables 4, 5 and 6. Actor and market channel categories:
You-pick: Farms with a "you pick" option where customers can come directly to the farm to pick their own produce
Mobile farmers' market: Farmers' market "truck" that brings fresh produce to communities
Cidery: Self-identified cidery
Hobby gardener: Someone who grows in their backyard but sells some products
Community garden: In Chester, many of the community gardens donate to the food bank
Butcher: Butcher shop
Company that offers on- or off-site catering (may also be associated with a restaurant)
Farm to farm: One farm or producer selling directly to another farm or producer
School: PK-12 or college/university; also includes dining service operators that work within schools
Hospital: Medical hospitals
CSA pickup (customer to farm): Customer goes to the farm to pick up their CSA share
CSA pickup (farm to location): A location other than the farm at which a customer can pick up their CSA share
Farm visits: Farm offers farm visits for schools/education
Farm stand: Farm sells their products at an on-site farm stand or store
Value added producer: A company that buys produce/product directly from the farm to produce a value added product (such as a hot sauce or jam company that does not operate a restaurant)
Restaurant: Includes brick and mortar locations, coffee shops, bakeries, food trucks, and farmers' market vendors who turn farm goods into value added products that they sell at a farmers' market
Donation: Donation sites, such as food banks or churches
Box scheme: Produce delivery services
Online sales: Online sales direct from the farm
Butcher to farm: A butcher shop that sells their meat at farm stands
Donation (raised beds): Donations from raised bed gardens to the Chester County Food Bank (particular to Chester)
Fresh2you: A specific program of the Chester County Food Bank for distributing fresh food in the county
Farm to winery: Farms that sell fresh product to wineries (for the restaurant of the winery)
Introduction Promoting sustainability while ensuring the world is safe and secure for its people and other species is an urgent concern for decision-makers, governments, consumer industries, and ordinary citizens worldwide (Strengers and Maller 2014; Sze et al. 2018). This objective has been a central aspiration in the United Nations conceptualization of sustainable development since the early formulations in the Brundtland Report from 1987, which states: "Certain aspects of peace and security bear directly upon the concept of sustainable development. Indeed, they are central to it" (UN 1987, p. 131). These aspects are later specified as poverty, inequality, and uneven distribution of resources, thus appealing to a holistic and convoluted approach to sustainability efforts that associates social sustainability with security (Malmio and Liwång 2023). The conceptualization of security as intertwined with social values has remained a fundamental pillar in United Nations (UN) peace and development resolutions, communicated as an aim to build a world "free from fear and free from want" (UNDP 1994, p. 24) and to "foster peaceful, just and inclusive societies, which are free from fear and violence" (UN 2015, p. 2). These ideas have also impacted how the defense sector in a Western liberal context has justified its monopoly on violence (Zehfuss 2018), where maintaining democratic values and strengthening society's overall ability to deal with stress have remained pivotal factors for building a secure society (Bourbeau 2015; Grove 2017). Handled by Julia Maria Wittmayer, DRIFT Erasmus University, Netherlands. While the association between security and sustainability has been an ongoing discussion in the UN over the last 40 years, it has recently gained new momentum, exemplified by the 2022 special report released by the United Nations Development Programme (UNDP), which addresses new threats to human security in the era of the Anthropocene (UNDP 2022).
Particular areas of concern include the increasingly visible effects of climate change and its existential consequences (Sahu 2017), augmented by the uneven distribution of resources and global inequality, which has intensified insecurity worldwide (UNDP 2022). Furthermore, the COVID-19 pandemic brought a heightened awareness of how structural inequalities and vulnerabilities shape and aggravate security issues (Newman 2022) and thus made the interconnections of security and social values more apparent. In addition, the rapid development of artificial intelligence has actualized various social problems that profoundly impact security, such as societal polarization (WFE 2023), violent extremism (Burton 2023), and social bias against vulnerable populations in society (Benjamin 2020). These developments have reignited the relevance of acknowledging the intertwined character of sustainability and security as essential factors for development and world peace. However, the association of security and sustainability has proven itself a source of theoretical inconsistencies, especially when considering the destructive nature of military conflict, which presents deeply rooted assumptions of "security" that contradict the three principles of sustainability: environmental integrity, social equity, and economic prosperity (Elkington 2008; Purvis et al. 2019). Attempts have been made to define and accommodate this conceptual relation, academically and in extensive policy work in the UN, but several theoretical problems remain. One persistent issue is the normative valence associated with the concepts, which invokes disharmony when they are combined. The normative understanding of social sustainability encompasses a plurality of social values (Raymond et al. 2019), involves multiple stakeholders with conflicting goals (Leal Filho et al. 2022), is context-dependent (Sze et al. 2018), and accentuates a "mess of diversity" (Kenter et al. 2019).
Security, in contrast, is heavily influenced by ideals associated with "national security," which is mainly focused on external threats and territorial security (Luttwak 2001;Newman 2022). This view contrasts sharply with the holistic and humanitarian approach of the sustainability agenda, which aims to "promote peaceful and inclusive societies for sustainable development, provide access to justice for all, and build effective, accountable, and inclusive institutions at all levels" (UN 2015, p. 3). Another theoretical issue is the underlying notion of security as a "hegemonic normative commitment" (Walker 2016, p. 89), meaning that items associated with this concept often take precedence over other issues, with the implication that a wide range of societal issues can be reformulated to legitimize a political state of exception (Oels 2012;Sahu 2017;Waever 1993). Accordingly, the security-sustainability conceptualizations seem to harbor an inherent predisposition that favors a narrow perception of security, indicating a trade-off arrangement of security and sustainability efforts. In effect, the state's interests remain at the center of security and development aspirations, contributing to an outlook of security in opposition to sustainability resolutions. One initial conclusion is, therefore, that there are precarious elements in this connection that work unfavorably for any reformulation, which makes one wonder: can security be sustainable? In response to this theoretical incongruence, a growing field of research has raised critical questions about the destructive effects of security on sustainability measures by highlighting the ecological, social, and economic imprints caused by military operations on local communities (Bildirici 2018;Jorgenson and Clark 2016;Smaliukiene 2018).
Several studies have identified the connection between climate change and security, where the environment is an arena of amplified conflict and a policy area for increased securitization (Barnett and Adger 2007;Busby 2021;Oels 2012;Sahu 2017). The issue of normative imprecision within the concepts themselves has also been addressed from multiple angles, including the inside-outside relationship between national and social security (Neocleous 2006;Walker 2016), the positive and negative value of security (Hoogensen Gjorv 2012;Kivimaa et al. 2022;Nyman 2016), and conflicting values in the sustainability conceptualizations (Arias-Arévalo et al. 2017;Kenter et al. 2019;Redclift 2005;Stålhammar and Thorén 2019). However, a general trend in this research field is a significant compartmentalization of security and sustainability, while substantial focus has been placed on ecological and economic aspects. Hence, there is a need to address the linkage between social sustainability and security and the "conceptual messiness" (Durose et al. 2022) that emerges when laboring on a theoretical understanding of this relationship. By this positioning, the central contribution of this article is to illustrate, with the assistance of three contrasting perspectives (paradox, co-production, and deconstruction), how values and ideological aspects can influence contemporary world politics and affect the conceptualizations of security and social sustainability. Therefore, with the three perspectives as a starting point, I want to unpack and explore what possibilities these perspectives suggest for the conceptual manifestation of social sustainability and security while addressing the boundaries and openings they present. Specifically, how is the interlinkage of security and social sustainability affected when the three distinct perspectives are applied, and by doing that, can we gain a deeper understanding of how conflicting values operate in world politics?
The article is structured as follows. The first section clarifies the methodological approach and how the theoretical perspectives of paradox, co-production, and deconstruction have been used as illustrative tools to study the relational dynamic of social sustainability and security. After that, I will continue with the three perspectives and describe their effects on the conceptual pair. The first part addresses the paradox perspective, which stresses an essentialist view of values that convolute a reconciliation of social sustainability and security. The second part focuses on the relationship between security and social sustainability from a constructivist proposition of co-production, which pronounces reciprocity and co-creation. After that, the conceptual association is approached from a poststructuralist perspective of deconstruction, focusing on the underlying processes that produce meaning while paying attention to the hierarchical positioning of values. Lastly, I will discuss what can be discerned from studying the conceptualization of security and social sustainability using the three perspectives. --- Methodological approach This article has proceeded as a conceptual analysis to investigate what boundaries and openings three distinct perspectives of the connection between social sustainability and security might produce. The prime focus is, therefore, not so much on explaining exactly how the conceptual pair of security and sustainability has been discursively discussed in the UN, but rather, in a bricolage-inspired process focused on bringing together concepts, questions, and controversies, identifying how the meaning and the effects of this conceptual pair are altered depending on which perspective is applied (Aradau et al. 2014). 
In this setting, the three perspectives function as illustrative tools to understand the performative character of concepts in their contextualized materializations rather than analyzing their textual definitions per se (Guzzini 2013). Comparing concepts with various value-based compositions can bring vital information on how their dynamic unfolds in different theoretical frameworks (Garnett 2014) while providing an integrative tool for further theory development (Jaakkola 2020). Furthermore, a conceptual analysis also helps to highlight the circular connection between values and knowledge-making in their influence on governance and security measures (Jasanoff 2004). The analyzed material consists of five UN policy documents listed in full in Appendix 1. While the selected documents address the connection between sustainability and security using slightly different approaches reflecting the specific context in which they were created, they provide a generic account of how the conceptual pair has been discussed in the UN and hold a central position in the evolution of Sustainable Development and its strong association with security. Two of them, "Our Common Future," also known as "The Brundtland Report," released in 1987 (UN 1987), and "Transforming our World: the 2030 Agenda for Sustainable Development," released by the UN General Assembly in September 2015 (UN 2015), are considered canonical documents in the UN work on Sustainable Development (Mensah 2019) while presenting valuable insights on how security has been approached from a sustainability perspective. Three reports were included from the UNDP: "The Human Development Report," released in 1994 (UNDP 1994), the first report in which "human security" appears, a concept further expanded in "Human Security Now," also called the Ogata-Sen report, from 2003 (CHS 2003). These two reports are vital documents in the UN formulation of Human Security and have been discussed frequently in academic literature (Wibben 2011).
A more recent publication, "2022 Special Report on New Threats to Human Security in the Anthropocene: Demanding Greater Solidarity," released in 2022 (UNDP 2022), brings an updated account of how security and its linkages to social sustainability are conceptualized today. In addition, relevant academic contributions and grey literature within security studies, sustainability, and human security have been added to exemplify the divergent standpoints produced by the theoretical perspectives of paradox theory, co-production, and deconstruction. The literature discussed has been applied to illustrate how the relationship between security and social sustainability is altered depending on which perspective is applied. A limitation is therefore the restricted textual body on which the study bases its conclusions. However, the focus has been on analyzing the contrasting outcomes produced by distinct ideological vantage points rather than providing an exhaustive literature review. --- Analytical framework Previous research describing the relationship between security and sustainability has often relied on "human security," expanding on notions of negative and positive security as formulated in the traditionalist/widening-deepening debate (Hoogensen Gjorv 2012; Kivimaa et al. 2022;Nyman 2016). While this application is suitable for describing how the UN, for the most part, has addressed security, it follows a dichotomous reasoning that fails to encompass the full complexity of security when aligned with social sustainability, which includes a wide range of societal aspects. To fully comprehend this dynamic, this article has applied three perspectives: paradox, co-production, and deconstruction. Different values and epistemological orientations underpin these perspectives and represent distinct standpoints of what constitutes "true" security and sustainability.
They also allow a more holistic and flexible analysis to conceptualize security and sustainability as a relational process that materializes differently depending on the perspective involved. How the perspectives have been analyzed is listed in Table 1. The paradox perspective highlights an essentialist understanding of values as absolute qualities pronouncing differences and clear-cut categories. Essentialist thinking often leads to dualistic categorization, separating distinct elements with well-defined boundaries (Jackson 1999). This epistemological baseline is influential in political realism and proceeds as a commonsensical approach to how security generally operates in world politics, emphasizing explicit categories of enemies and allies with the accumulation of power as a primal concern (Morgenthau and Thompson 1993). Paradoxes have been approached in previous research from many angles. To describe the theoretical framework of paradox theory, this article draws on literature from organizational studies (Hahn et al. 2018;Lewis 2000;Schad et al. 2016). The treatment of paradoxes in military philosophy relies on literature from war studies (Luttwak 2001;Morgenthau and Thompson 1993;Rothschild 1995) and critical security studies (Walker 2016;Wibben 2011). In contrast, the co-production perspective describes a constructivist view of security and sustainability as "two sides of the same coin." The co-production view is exemplified by the view of development and security described in "Human Security," relating to the UN's conceptualization of security. This approach proceeds from constructivist ideas of values as variables that depend on historical, cultural, political, and social contexts (Hopf 1998), emphasizing the interaction between science, values, and policy (Mach et al. 2020).
Co-production as a theoretical framework has been widely applied in various disciplines, including studies on global sustainability (Miller and Wyborn 2020), future studies (Durose et al. 2022), and policy research (Wyborn et al. 2019). Co-production is used in this article to illustrate the widening debate in security studies (Hoogensen Gjorv 2012; Kivimaa et al. 2022) relating to the composition of sustainability and security in the form of human security (Alkire 2003;Hanlon and Christie 2016;Sen 2004), development (Duffield 2007;Nussbaum 2007), and emancipation (Booth 1991). The third perspective, deconstruction, offers a poststructuralist lens on the relationship between security and sustainability, highlighting the processes that infuse concepts with meaning and valence. Although initially associated with the philosopher Jacques Derrida in his work on critical literary analysis, this approach has been widely used as an analytical tool in critical research to highlight the processes through which meaning is constructed, contingent, and, therefore, changeable (Shepherd 2021). Accordingly, deconstruction provides an approach to the scientific critique of taken-for-granted assumptions on the constitution of the world order (Neocleous 2006; Zehfuss 2018) and how they materialize in policy (Avelino and Grin 2017;Telleria 2021) and highlights questions of power and hegemony (Burke 2002;Walker 2016). "Three perspectives of security and social sustainability" will continue with a more in-depth analysis of the three perspectives separately. --- Three perspectives of security and social sustainability --- Paradox A paradox can be described as a phenomenon that consists of embedded contradictions between various aspects, which seem logical when studied in isolation but absurd and irrational when appearing simultaneously (Lewis 2000).
Paradoxes tend to accentuate tensions between competing yet interrelated objectives emanating from contrasting logics that operate at different levels in various time frames (Hahn et al. 2018). These tensions often originate in an essentialist understanding of values as fused with inherent qualities (Stålhammar and Thorén 2019), therefore generating polarized either/or distinctions that appear paradoxical when contrasted with other values. Accordingly, a paradox perspective on the relationship between security and social sustainability emphasizes differences and frictions by associating the concepts with absolute values, such as destruction-development and power-inclusion, that remain relatively fixed, therefore appearing to obstruct a reconciliation. However, because "human security and state security are mutually reinforcing and dependent on each other" (CHS 2003, p. 6), the connection between the two approaches to security produces a paradox. Besides providing a distinct comparison tool, paradoxes can occur in times of uncertainty and ambiguity, where simplified descriptions of a complex phenomenon are applied to overcome cognitive disharmony (Ford and Backoff 1988). Paradoxical reasoning, therefore, typically emerges in contexts characterized by a paradigmatic change where challenging old ideas invokes dissonance and perplexity (Kuhn 2012). As a policy intention and research agenda, sustainable development exemplifies a transformative motion in "setting out a supremely ambitious and transformational vision" (UN 2015, p. 3) for radical change. However, the obsolete core competencies that hinder true transformation appear resistant to alteration, creating a paradox of development and continuity. This is acknowledged in the 2022 Special Report on Human Security as a development paradox: "Even though people are on average living longer, healthier, and wealthier lives, these advances have not succeeded in increasing people's sense of security." (UNDP 2022, p.
iii) The UN conceptualizations of security thus occasionally appear contradictory and ambiguous. While one explanation emphasizes a functional rationale, where paradoxical descriptions effectively accentuate differences, another conclusion is that the paradoxical interpretation of security emanates from "the contested concept of security" itself (Smith 2005). According to Merriam-Webster, the dictionary definition of security is "the state of being free from danger or threat," which appears unproblematically straightforward. However, when probed more deeply, the concept emerges as vague and highly normative. It thus opens up a wide range of politically motivated and occasionally conflicting views of what security, in practice, means (Booth 1991;Nyman 2016;Walker 2016). The most prevailing account of security has been recognized as "national security," which historically has focused on threats and locating danger, referents to be secured, agents that provide security, and means to contain danger (Wibben 2011). From this perspective, security is understood in deterministic terms as the pluralistic objectives of individuals and states to protect and prevent future attacks from antagonistic threats (Morgenthau and Thompson 1993). As such, it displays a stark association with a concentrated effort to enforce foreign and defense policy mechanisms to avoid, prevent, and win interstate military disputes (King and Murray 2001). In Rob Walker's words, this understanding of security operates as a hegemonic logic, which has "invoked realities and necessities that everyone is supposed to acknowledge, but also vague generalities about everything and nothing" (Walker 2016, p. 84). The perception of military reasoning as situated in a realist ontology that is ubiquitous and implicit, yet disorderly and imprecise, supports the paradoxical interpretation of security. 
This sentiment is illustrated by the (in)famous maxim attributed to Publius Flavius Vegetius in the fourth or fifth century A.D.: "Si vis pacem, para bellum," "If you want peace, prepare for war" (Vegetius 1475). The proverb has supported the ambiguous dogma of what the military mission, in essence, encompasses and implies circular reasoning where war is perceived as a prerequisite for peace (and vice versa). When constructed in this way, the paradox is not conferred from a platform of opposition but instead appears as a nucleus. As such, the paradox is conceptualized not as cast by either-or thinking but instead forms an integral part of the military organization's core identity (Luttwak 2001). This type of paradoxical logic accentuates realist ideas of military actions as structural necessities where states are predestined to act in specific ways. Although it might appear unproblematic when viewed from a military context, the security paradox becomes an obstacle when combined with the core value in the social dimension of sustainability, which "values life for itself" (UNDP 1994, p. 13). This is partly explained by the ambiguity of the negative connotation of security as connected with destruction while simultaneously being concerned with peace maintenance, thus conveying a positive value. The ambiguous quality is further reinforced through covert ideas of an embedded power asymmetry operating at the center of sustainability efforts. This paradoxical construction appears in quotes supporting ideas of military power as something that can neutralize a potential threat and, therefore, "protect the people" (CHS 2003). As a result, this viewpoint displays an image of people needing protection yet simultaneously capacitated for autonomy, emancipation, and self-government (UN 2015). In essence, the historical and political fabrication of security as a "national interest" appears paradoxical when juxtaposed with the values of social sustainability.
This draws a sharp boundary around how the concepts can be merged. A paradox perspective thus leads to the conclusion that a foundational aspect of how security operates is oppositional to ideals relating to the social dimension of sustainability. To summarize:
• A paradox perspective proceeds from an essentialist understanding of values that fortifies binary evaluation structures.
• Security is theorized in ambiguous terms: it carries negative values associated with destruction, yet is also associated with peace maintenance, indicating a positive value.
• Boundaries to conceptualizations of security and social values are fortified by a predisposition that views the concepts as trade-offs, with the overall understanding that national security must be prioritized.
• However, one potential opening for this conceptualization is that although pluralistic value systems often appear paradoxical, this does not mean they cannot co-exist.
--- Co-production
In contrast to the sharp boundaries presented by the paradox perspective, the co-production view offers a broad theoretical spectrum where the interdependency of knowledge, culture, and power is at the center of inquiry (Durose et al. 2022;Miller and Wyborn 2020). Co-production provides a constructivist framework to expand notions of science (Wyborn et al. 2019), emphasizing reciprocity and exchanges between various stakeholders (Durose et al. 2022), and is a valuable tool for improving critical analysis and addressing normative research (Jasanoff 2004;Miller and Wyborn 2020). Analyzing security and social sustainability from a co-production perspective thus means a co-constitutive approach to producing and organizing knowledge and governance rather than treating them as separate domains (Mach et al. 2020;Turnhout et al. 2020). Accordingly, co-production contrasts with the realist ideology that seeks to disconnect elements of nature, facts, objectivity, and policy from those of culture, value, subjectivity, emotion, and politics (Jasanoff 2004).
When viewing the relationship between social sustainability and security from a co-production perspective, the outcome is that although social sustainability and security can be seen as derived from very different core values, there is a deep connection between them. Not only are they connected, but they are also co-constitutive since a basic level of security is required to realize a sustainable future. Conversely, legitimate security can only be achieved through sustainable development. This sentiment permeates the UN 2030 Agenda and is illustrated by Sustainable Development Goal 16 as an overarching objective to: "Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels." (UN 2015, p. 14) From a co-production perspective, security proceeds from a normative frame in which it is viewed as a positive value associated with a solid emancipatory agenda (Nyman 2016). The positive view of security is a prominent cornerstone in "human security," originally intended to extend the narrow understanding of "national security" to endorse a value-based framework focused on conflict resolution and peacebuilding (Hanlon and Christie 2016;Hoogensen Gjorv 2012). A vital part of this extension includes a humanitarian approach centered on the people's well-being, highlighting the fulfillment of basic personal needs such as being fed, fully clothed, and safe from harm (CHS 2003;Sen 2004). This concept thus invites an analytical level of security focused on how people and communities can manage their needs, rights, and values concerning international security (Alkire 2003). However, this softer approach to security is not entirely separated from national security since good governance is recognized as an imperative factor for making people feel safe.
This sentiment is especially prominent in the Human Security Now Report, where it is stated in several places that human security complements "state security" (CHS 2003). Therefore, the force of violence can be deployed by states that react to threats from extra-state actors to assure people that their human rights are protected and secure (Hanlon and Christie 2016). Another critical point is that even though the level of analysis is focused on the individual, the determinants of human security are shaped by past processes, such as colonization and war, while ongoing developments, like climate change and trade liberalization, generate precariousness that can accentuate future vulnerabilities (Barnett and Adger 2007;UNDP 2022). One essential component in the co-production perspective of security and sustainability is emancipation. According to this view, security and emancipation are two sides of the same coin. Security equals the absence of threats, thus freeing people (as individuals and groups) from physical and human constraints, making them more emancipated (Booth 1991). This way, emancipation is the key to achieving "true" security. This idea is also notable in "The Capability Approach," developed by Amartya Sen and Martha Nussbaum, who link the social dimension of sustainability with a broad meaning of security through the concept of capabilities, referring to aspects of basic human needs (CHS 2003;Nussbaum 2007;Sen 2004). The capability approach is closely related to human rights and guided by the principles of social justice and emancipation. Emancipation is further associated with development ideas, which connect insecurity and conflict with underdevelopment, since "sustained, inclusive and sustainable economic growth is essential for prosperity" (UN 2015, p. 8), while "greater freedom enhances the ability of people to help themselves and influence the world, which is vital for development" (UNDP 2022, p. 27).
Accordingly, it is crucial to include social elements in marginalized communities as critical focus areas to build sustainable peace (UNDP 1994). Therefore, effective states should protect and improve people's lives in ineffective ones, since providing this help will enhance security everywhere (Duffield 2007). The idea in the co-production perspective is that security and sustainability are both interrelated and co-contingent and that: "Sustainable development cannot be realized without peace and security, and peace and security will be at risk without sustainable development." (UN 2015, p. 9) A co-production perspective thus undoubtedly opens up a broad application of how the concepts can be merged. However, there is also a possibility that the broad implication of a co-production view can add to conceptual confusion, imprecision, and vagueness while implying that a diverse range of human activities can be turned into security issues, which can justify undemocratic measures. Furthermore, another issue is the view of development and security as co-contingent. The overriding logic in this assumption is that development reduces poverty and diminishes the risk of future instability, thus contributing to improved global security. However, development as a necessary precondition for security can lead to intensified climate change, with negative consequences for both social sustainability and security. Furthermore, the development concept relates to a particular rhetoric, which serves to justify and disguise the prevailing patterns of global hegemony (Walker 1981). In effect, the assumption that the Western world is the most developed and, accordingly, both responsible and entitled to "saving" the rest of the world is reinforced. Consequently, viewing security and social sustainability as co-produced does not resolve the issue of power asymmetry but instead supports it in a reconciliatory vocabulary.
To summarize:
• The co-production view understands security and sustainability as co-productive, interdependent, and mutually necessary. The values of sustainability and security are given equal importance.
--- Deconstruction
As discussed in "Paradox", the paradox perspective implies a narrow conceptual boundary due to the fixed core values associated with security. At the same time, the co-production perspective described in "Co-production" suggests a conceptual openness that is too broad. Still, the power dimension is something that neither of the views adequately addresses. Thus, instead of identifying the variables that allow or inhibit a conceptual configuration of security and sustainability, another potentially more fruitful question is to ask what security "does" and how its performance affects the values of the concept to which it is attached. This undertaking invites a deconstructive approach that moves beyond scrutinizing specific components of security and social sustainability to focus on the underlying logic that infuses these concepts with valence and meanings. Deconstruction is a mode of philosophical and literary analysis, associated with the French philosopher Jacques Derrida, that examines how language produces meaning and what consequences particular readings produce. It is not a method, a philosophy, or a practice, but something that happens when the arguments of a text undercut the presuppositions on which it relies, and the deconstruction takes on a life of its own (Culler 2008). Accordingly, this perspective involves a shift from exploring the meanings of the concepts to questioning "what remains to be thought, with what cannot be thought within the present" (Royle 2000, p. 7). To understand where a deconstructive approach might fill the gaps, let us briefly return to the paradox perspective and its pronounced distinctions of opposing categories.
One explanation for paradoxical thinking can be attributed to the Western idealization of logocentrism, which values presence, the factual and real, as the highest goal in knowledge production (Culler 2008). However, this ideal is upheld by the constitution of its presumed opposite, accentuating binary relations that typically imbue a hierarchical valuation process (Zehfuss 2002). Accordingly, how we understand the world and its textual descriptions proceeds from differentiation, where "every concept is inscribed in a chain or a system within which it refers to the other, to other concepts, by means of the systematic play of differences" (Derrida 1982, p. 11). These binary constructions, however, are neither stable nor a reflection of reality per se, and as soon as they are uttered, they fall apart (Edkins 2013). Ambiguities and paradoxical constructions can, therefore, accentuate undecidable elements in conceptualizations, which open possibilities "to transform concepts, to displace them, to turn them against presuppositions" and, in that process, produce new configurations (Derrida 1987, p. 22). In other words, undecidable elements are neither one thing nor the other, and at the same time, they are simultaneously both. They can, therefore, illustrate how the arrangements for a particular phenomenon's possibility can simultaneously be the conditions for its impossibility, thus opening "the experience of the impossible" (Derrida 2007). Thus, although the connection between sustainable development and security is, for the most part, strikingly straightforward, it also contains undecided elements that open a deconstructive movement. This is exemplified by the following quote from the Brundtland Report: "The absence of war is not peace, nor does it necessarily provide the conditions for sustainable development." (UN 1987, p.
3:24) The quote points to an undecided and uncertain space between war and peace, suggesting that sustainable development is not necessarily achieved through a state of peace. This statement, therefore, implies that the categories of war and peace could be something else, thus revealing an undecided element in their conceptual constitution. Another example is found in a quote from the Human Development Report: "Human security is more easily identified through its absence than its presence. And most people instinctively understand what security means." (UNDP 1994, p. 23) This quote identifies security as an absence and something most people instinctively understand. What is implied by absence is the opposite of security, which is insecurity. The implication is, therefore, that security as a dominant category can only materialize through the continuous fabrication of its presumed opposite, insecurity. Not only is the universalist claim of security as an ultimate and overriding human value reinforced, but its association with military force is implicit, exemplified by an excerpt from the same report below: "The battle of peace has to be fought on two fronts. The first is the security front where victory spells freedom from fear. The second is the economic and social front where victory means freedom from want. Only victory on both fronts can assure the world of an enduring peace..." (UNDP 1994, p. 24) Accordingly, the undecided structure of "security" is what produces and maintains "presence" and creates a more stable construction for the undecided element of "the absence of war." This reading, therefore, fortifies a commonsensical notion of security as a human necessity and justifies a "security first" perspective forged around "its claim to embody truth and fix the contours of the real" (Burke 2002, p. 5). The undecided element of security appears to attach itself to a deterministic idea where the quest for more security is a chronic condition.
Social sustainability, too, carries undecided elements in its conceptualization. One example is the relationship between the emancipation of individuals and the universality of the common good. The liberty and emancipation of the individual are potent ideas in linking social sustainability with security. Yet humanity is often approached as a "single and universal identity," described as the "people." This universal identity is extended in the Special Report from 2022 to encompass the whole planet: "The world is not only interconnected but also characterized by deep interdependencies across people as well as between people and the planet." (UNDP 2022, p. 27) The tension between the individual and the universal extends to the dimension of time, where the current generation and the next are approached as a unity with similar needs and demands. This contradiction can lead to 'dark' and 'unintended' effects of social change, intensifying power struggles and adding inequalities (Avelino 2021). The universalist claim epitomizes questions of power further, appearing in conceptual configurations through philanthropic expressions of protection: "Human security is deliberately protective. It recognizes that people and communities are deeply threatened by events largely beyond their control." (CHS 2003, p. 11). This type of sentiment reveals an undecided element in the vocabulary of sustainable development that seeks to empower people yet describes them as lacking agency and needing protection. A deconstructive reading of the conceptual relationship between security and social sustainability implies that, on the one hand, these concepts are volatile and open to various interpretations while, on the other, exhibiting opposing core values that appear impossible to merge.
However, this conclusion simultaneously involves a possibility since society needs security to fulfill the essential components of social sustainability, such as governing institutional justice, spreading resources more fairly, and protecting democratic functions (UN 2015). In this way, linking security to social values acknowledges how socially constructed identities and ideologies (re)create structural (un)certainties that underpin violent conflicts and consider these questions necessary items on a security agenda. In contrast to the co-production perspective, where social sustainability and security are seen as two sides of the same coin, a deconstructive approach acknowledges that merging security with social sustainability is "possible only on the condition of being impossible" (Derrida 2007, p. 451).

--- Abstract: Security and social sustainability

Security and sustainability are prioritized goals in the "Western liberal" world. Maintaining democratic resources while simultaneously strengthening society's ability to deal with security issues firmly resonates with ideals associated with social sustainability. However, merging normative theories like security and social sustainability produces conceptual difficulties that are hard to resolve. Based on key literature in this field and policy documents from the UN, this article uses conceptual analysis to investigate what boundaries and openings three distinct perspectives of the connection between social sustainability and security might produce. The perspectives chosen as illustrative tools are paradox, co-production, and deconstruction. The paradox perspective pronounces inherently divergent qualities of sustainability and security, which implies a trade-off situation. In contrast, the co-production perspective views social sustainability as a critical component in security issues, while security, in turn, is a prerequisite for sustainability.
A third perspective, deconstruction, highlights underlying processes that produce and prioritize specific meanings. The perspectives of paradox, co-production, and deconstruction identify how competing values operate in conceptual configurations, highlighting the limitations and possibilities of security measures to accommodate values of social sustainability. Applying distinct approaches as illustrations for disparate ideological standpoints can deepen the knowledge of how multiple and occasionally competing outcomes are formed while considering the normative foundations enfolding inquiries of security responses to societal challenges. |
Security and social sustainability are thus in the process of ongoing co-creation, producing a "state of dynamic equilibrium" (Ben-Eli 2018, p. 1339) in which they hold each other in check while continuously conditioning the existence of the other. A deconstructive perspective can thus open a more flexible conceptualization of security and social sustainability in presenting a link between opposing categories. As such, it can create a framework that gives meaning to contradictions, showing how they are perspectival and fluctuating.
It further highlights how power operates, often conceptually construed in benevolent cloaking as "development and protection" while reproducing hidden assumptions and problem formulations that legitimize unsustainable practices (Avelino and Grin 2017). However, this perspective also allows for relativistic conceptualizations, where the normative valence of these concepts risks diluting the conceptual meaning (Collier et al. 2006). To summarize:
• The meaning of a concept is not a decided quality. Therefore, a deconstructive approach focuses on the processes that produce meanings.
• Concepts have an undecided disposition, which embodies impossible and possible manifestations of values and removes their hierarchical positioning.
• When boundaries are not fixed, new approaches to conceptualizations are opened while hidden assumptions, such as power, are acknowledged.
• Because the meaning, context, and realization are not fixed, this can lead to relativistic interpretations and unforeseen deconstructions.

--- Discussion: openings and boundaries

The perspectives discussed in this article have been used as illustrations to expose different manifestations of the conceptual connection between social sustainability and security while addressing the boundaries and openings they present. As described above, the paradox perspective fortifies a dualistic categorization with clearly defined boundaries, whereas a co-production perspective approaches security and sustainability through a pluralistic lens with interdependent elements. The third perspective, deconstruction, suggests an approach to sustainability and security that moves beyond the dichotomous structure of constant tensions while highlighting how power operates through hidden assumptions. So, what can the illustration of perspectives tell us about the relational dynamics between social sustainability and security? In addition, is it possible to reconcile these concepts?
In the following, I will consider these questions along the dimensions of values, the opposition between fixed and unstable components, and the production of power and normative approaches to security and social sustainability.

--- Dimension of values

A critical parameter in analyzing the three perspectives of security and social sustainability is the dimension of values embedded in the concepts and whether values should be treated as an inherent autonomous domain or as an external, context-dependent factor. The dimension of values does not have to be an either-or position, nor is it a static condition. However, depending on how the dimension of values is construed, reconciling values with disparate value-based origins will be either easier or more challenging. If we view values as having an intrinsic quality with distinct conditions, normative sources, and standards (Erman and Möller 2015), then a conceptual merging will be more complex, especially when the values are highly normative and ambiguous. Values in this perspective become more fixed and inflexible, illustrated by the paradox perspective, which pronounces differences and binary oppositions. However, even though values with an absolute and fixed position may cause tensions and paradoxical arrangements, they can also teach us something by pointing out potential scopes of friction. The other perspective on values is that they are neither inherent nor absolute but have an external and, hence, variable quality, which means they depend on contextual influences and, therefore, have a more interchangeable character. In this setting, the values depend on other factors that change dynamically, exemplified by the co-production perspective. This means a shift in focus from defining the qualities of a particular concept to studying actual situated practices in context, which can help draw conclusions about security and social sustainability for that specific case.
A deconstructive approach shares this propensity. However, this perspective focuses more on studying the process where concepts become intricately infused with values while highlighting the hierarchical ordering principle that follows from this structuring. This leaves an open dimension where the values of security and sustainability are concurrently decided and undecided. However, while this undertaking is an essential aspect of any critical interrogation, it might lead to "so what" conclusions that do not help to bring about conceptual clarity.

--- The opposition between fixed, interchangeable, and fluctuating components

As discussed in "Three perspectives of security and social sustainability", one problem with the conceptualization of sustainability and security is the inherent value attached to each concept, which appears fixed and resistant to alteration yet, as illustrated by the deconstruction perspective, carries an element that remains in constant motion. This constitution invites alternative normative positions, which causes conceptual imprecision, nurtures ambiguity, and invites relativistic interpretations. However, due to the fixed element, potential openings are impeded. This opposition is characterized by different arrangements of fixed and interchangeable components of security and social sustainability. As argued in this article, security encompasses a fixed hegemonic logic that obstructs any reformulation to include social values. Thus, a conceptual understanding of security is inextricably grounded in a paradoxical structure emphasizing both negative (destruction) and positive (protection) aspects. It has clearly defined boundaries, accentuating differences and forming a normative baseline that appears rigid and inflexible. In this view, security supports an unappealable claim of military violence as "the ultimate solution," meaning that security takes precedence over other values.
In contrast, social sustainability is a concept consisting of highly interchangeable elements that are not decided, displaying a plurality of values (Kenter et al. 2019) and reinforcing a high degree of uncertainty regarding how it should and could be defined (Leal Filho et al. 2022). In practice, this means that the fixed component of security remains relatively unaltered, even though it is filtered through the generous lens of the co-production perspective, which ultimately reproduces the dichotomous understanding of security and sustainability it initially set out to challenge. In this regard, a deconstructive approach might present a solution by offering a view of values as a process in "the making" and, hence, not a fixed thing since "context is never absolutely determinable" (Derrida 1988, p. 370). Because the structure of concepts is ambiguous, everything depends upon "how one sets it to work" (Derrida 1987, p. 22), which implies that security, too, can be "overturned" and situated differently. For this to work, however, it is vital to acknowledge the "messiness" exhibited by a mosaic reality composed of intricate clusters of competing values originating in different disciplines, contexts, and political orientations. In this perspective, the conceptualization of security and social sustainability proceeds from a processual perspective, which endorses a pluralistic value system composed of infinite possibilities. --- Power and normativity In addition, a dimension of power in these concepts arises from the intersection of instrumental objectives in the sustainability agenda and the normative approaches utilized to address these objectives. Instrumental objectives focus on task completion and strategic problem-solving while neglecting the normative complexities brought to attention through the undecided elements. 
Focusing on problem-solving is, as suggested by Vince and Broussine (1996), a strategy to control uncertainty, which is a fundamental part of the normative application of sustainability. However, a problem-solving approach includes an implicit element of power that carries a compelling influence in policymaking. It is, therefore, essential to deepen the understanding of ideas that motivate different standpoints and the theoretical tools that ground the choice of selecting and implementing policy (Bicchieri and Mercier 2014). The paradox perspective acknowledges the accumulation of power as the essential goal for stakeholders in world politics, a goal that can never be fully reached. The co-production perspective approaches power from a softer proposition of liberal rationality, which strongly favors the protection and betterment of the essential processes of life associated with the population, economy, and society (Duffield 2007). However, as Turnhout et al. (2020) argue, a co-production perspective also allows elite actors to shape processes that serve their interests by pronouncing a view of power that leans on idealistic and humanitarian ideals. This view proceeds from a positive view of security, tightly connected with ideas of development and emancipation as a prerequisite for joining security with social sustainability. However, there are problems with this broadening, as it tends to reproduce the hidden assumptions about security, power, and development it initially set out to challenge. These assumptions proceed from the idea of the protector and the protected and cement a power hierarchy which, arguably, does not sit well with the ambition in the UNDP Report that "people should be able to take care of themselves" (UNDP 1994, p. 24). In both cases, the comprehension of power strengthens Western hegemony and fortifies ideas of development as a linear progression.
The perspective of deconstruction might offer a solution to the power dilemma by leaning on an understanding of power as something that is never settled but in continuous motion. Understanding power in this setting allows for conceptualizing security and sustainability as a deconstructive movement where the logic of a value-based position contradicts the position being affirmed. In this way, the different perspectives define the boundaries of the other, and as such, they also present openings.

--- Conclusion

The three perspectives described in this article reflect on the underlying tensions formed by disparate ideological foundations, which condense into questions of what is to be sustained and what or who is to be secured. These are critical questions to address, especially considering the complex issues the world is currently facing, which require a constant renegotiation of what values society wants to promote. To seriously address these questions requires a high degree of conceptual flexibility in responding to the intricate mixture of political motives and ethical challenges that arise when probed more deeply. The answers to these questions also set boundaries and openings for how these concepts can be merged. The three perspectives of paradox, co-production, and deconstruction show that the conceptualization of security and social sustainability motivates different agendas that can inform how future policy is constructed and can be a productive way to sharpen the analysis of how this conceptual relationship might be approached. Recognizing the dimension of values that underpins these conceptualizations, especially by paying attention to fixed and interchangeable components and to how normativity and power operate, could ease the way for integrating a conceptualization of security that accommodates the values of social sustainability.
Yet, as argued in this article, the security-sustainability conceptualizations harbor an inherent predisposition that reproduces a hegemonic perception of security, leading to a continuous trade-off arrangement between security and sustainability efforts. Another point of departure is understanding the perspectives of paradox, co-production, and deconstruction as a dynamic interrelation where various aspects can be highlighted in multiple settings. This also applies to the wide range of actors approaching the conceptual pair in policymaking, who must deal with this complexity when defining the boundaries and openings for conceptualizing security and sustainability. Applying distinct perspectives as illustrations for disparate ideological standpoints can deepen the knowledge of how multiple and occasionally competing outcomes are formed beyond dominant categories. Experimentally bringing together concepts, questions, and controversies can lead the way for opening up discussions of what is taken for granted in a world of ever-increasing complexities (Aradau et al. 2014) while inviting us to reconsider the normative foundations on which any inquiry into security responses to societal challenges is based. This article has contributed an analytical tool of illustrative perspectives on how the conceptual relation of security and social sustainability can be approached. However, to gain a deepened understanding of how this plays out in the real world, the perspectives should be studied empirically in actual situations by analyzing how different actors engage in discursive arguments and how this is reflected in world politics.

--- Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s11625-023-01450-w.

Funding Open access funding provided by Lund University. The Swedish Defence University financed this study.

--- Declarations

Conflict of interest There are no other conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Introduction Considering Western culture and its orientation toward appearance, young girls and women are susceptible to the desire to be thin so that they can achieve an ideal body shape [1,2]. According to the Tripartite Influence Model [3], women internalize idealized thin body shapes from the media, which includes traditional mass media and the internet, among them health-oriented websites. Exposure to thin-ideal content can have a negative impact on women because it is associated with their drive for thinness and eating disturbances [4,5]. In this study, we focus on the drive for thinness, which is the motivation for a thin or thinner body and the desire to lose weight [1,6]. It is considered a risk factor for well-being because it is associated with decreased psychological health and the later development of anorexia and bulimia nervosa [7,8]. Because of its potential harm, it is crucial to understand the factors that are associated with the drive for thinness. Although previous studies investigated the role of the media in relation to the drive for thinness [1,5,9], there is a lack of evidence for health-oriented websites and the role they play in promoting weight loss. We intend to contribute to this area by focusing on these types of websites within the theoretical framework of the Tripartite Influence Model [3]. Moreover, our aim is to enrich this model, which posits socio-cultural influences on eating disturbances, by including the role of individual factors associated with the drive for thinness. Specifically, we examine the roles of perceived online social support, neuroticism, and internalization in the context of these websites, and their direct and indirect effects on the drive for thinness. As a result, our aim is to extend the knowledge about the role of health-related websites in the development of eating disorders by showing how and for whom these online spaces pose a risk.
Based on our conclusions, we propose recommendations for prevention and intervention efforts.

--- Drive for Thinness and Health-Oriented Websites

The drive for thinness is a motivational orientation toward having a thin or thinner body and a desire to lose weight [1,6]. It emerges as a motivated behavior to reduce body-related discontent [10], which is manifested by eating restraint and a preoccupation with body shape and weight [11]. It is considered a risk factor for women's health because it is associated with decreased psychological well-being, such as body dissatisfaction [10], body-related anxiety [12], lower self-esteem [8], or perceived stress [13]. Moreover, the drive for thinness is one of the diagnostic criteria for anorexia and bulimia nervosa, and it is associated with the later development of both [7,8,11]. The ideal of thinness [1], the drive for thinness, and related eating disorders are more prevalent in women than in men [14]. Therefore, we focused this study on women. Considering the potential detrimental effects, it is important to understand the factors which exacerbate the drive for thinness. According to the Tripartite Influence Model [3], there are three main influences on disordered eating: parents, peers, and the media. The role of the media has been highly debated in relation to disordered eating. In the past two decades, substantial attention has been given to the role of new technologies, such as social networking sites, eating- and exercise-related websites, personal blogs with pro-eating disorder content, and various health-related discussion forums [15]. We focus on websites related to weight loss, nutrition, and exercise. These websites act as important sources of general online information related to nutrition, fitness, weight loss, and a healthy lifestyle.
There are plenty of websites that address these topics, including personal blogs, informational websites for particular health-related themes, discussion forums, and social-networking groups [16,17]. Websites can be focused on weight loss, body shaping, healthy lifestyle, eating, dieting, nutrition plans for specific illnesses, recipes, and exercising [15,18,19]. Visitors may browse content, read articles, post and read comments, and obtain advice and inspiration. Moreover, websites can serve as a social environment where people interact with messages, comments, and evaluations, and they are places where people can receive support from other visitors [20][21][22]. However, these websites can have a negative impact on women because they display content that is associated with the drive for thinness, body dissatisfaction, and eating disturbances [4,5,23,24]. Specifically, some of these websites display pro-ED (pro-eating disorder) content that suggests that maintaining an eating disorder is a positive lifestyle choice [25]. They also contain positive comments about being thin, guilt-inducing messages related to food, stigmatization about weight, and expressions of negativity about being fat or overweight. They include content related to dieting and eating restraint, and the promotion of a thin-ideal appearance [18,26]. This appearance-oriented content can have a negative effect on women through the maintenance of weight- and appearance-related concerns [27]. The current study focuses on young female visitors to health-oriented websites in the Czech Republic. According to data from Eurostat [28], 54% of Czech women aged 16 to 29 searched for online health-related information in 2016, which is the year when the data for our study were collected. The European average during that time was 60% of young women.
Concerning general internet usage, 95% of Czech women aged 16 to 29 stated in 2016 that they had used the internet in the preceding three months, whereas the European average was 96% [29]. This means that internet usage and online health-seeking behavior among Czech women are similar to those in other European countries.

--- Internalization

The negative effect of exposure to appearance-related online content can be explained with the Tripartite Influence Model, which suggests that the link between exposure to media ideals and eating disorders is not direct. It proposes that internalization of appearance ideals serves as a mediating factor in the association between media exposure and disordered eating [30]. Media impact on disordered eating via internalization, as proposed by the Tripartite Influence Model, was examined and supported by previous studies [31][32][33][34][35]. In the context of developing and maintaining eating disturbances, internalization is the process of adopting socially and culturally defined norms about body shape, which are commonly maintained as body ideals in everyday social interactions and in the media. By internalizing these ideals, one's conception of self can be affected because the ideals can come to represent personal standards against which one appraises self and others [34]. Since the idealized appearance depicted by the media does not always correspond with one's real body shape, inconsistencies can emerge between the internalized norm and the actual body. Internalized ideals and perceived discrepancies can lead to consideration of how to obtain this ideal body [1]. This in turn results in disordered eating. Several studies specifically investigated the drive for thinness and how it is related to internalized appearance ideals in adolescent girls and young adult women. Internalization is a significant factor associated with the drive for thinness in both groups [1,8,15,35,36].
Moreover, the mediational role of internalization in the association between media exposure and the drive for thinness has been supported [15,35]. However, less attention has been given to the individual factors which may be salient in this process and help explain who is susceptible to internalizing media content. Therefore, in this study, we focus on two factors: online social support and neuroticism.

--- Online Social Support

Research has shown that seeking support from others is a frequent motivation for using health-oriented websites and participating in health-related online groups [37][38][39]. The online space offers various ways to get in touch with others, so there are also diverse ways to seek help and receive support. Social support, which in this context is mostly provided as emotional support, is expressed through emotions and empathy, and as informational support, such as sharing knowledge regarding eating or fitness activities [21,40]. Online social support has been investigated as an important factor among people who struggle with eating disorders. For instance, women who engaged in an internet weight loss community mentioned encouragement, motivation, information, and shared experiences as significant resources. They appreciated the accessibility, the anonymity, and the non-judgmental interactions as unique characteristics of internet-mediated support [21]. Moreover, examinations of ED discussion forums and ED-oriented support groups have revealed that these online sites provide relevant information, emotional support, personal disclosure, help, friendship, peer support, and a safe space to ventilate feelings [20,22,39,41]. Though receiving social support is, in many cases, a very beneficial process, we also examine its potential for reinforcing the drive for thinness via increased internalization.
This process can be described with two theories: Social Identity Theory, which refers to an individual's knowledge of belonging and the perceived emotional and value significance of group membership [42]; and Self-Categorization Theory [43,44], which depicts how membership in social groups affects an individual's behavior. Social identity can act as the basis for both giving and receiving social support. Perceived social support can additionally promote the sense of shared identity and the subjective importance of one's group membership [19,42,45,46]. Subsequently, social identity and group membership are associated with the internalization of group norms. The norms and attitudes shared within the group are internalized as personal standards, and the individuals act accordingly [47]. On websites related to weight loss, nutrition, and exercise, users share body-appearance standards, which are demonstrated by the website content, and have discussions about ideal appearance and figure [18]. With these shared interests, the goals, the mutual interaction, and the social support that are exchanged among visitors, the websites have a social character. Thus, consistent with the Social Identity Theory approach, perceived social support from health-oriented websites can promote a sense of shared social identity and the perceived salience of website group membership. Consequently, norms and standards regarding body appearance can be internalized even more.

--- Neuroticism

Neuroticism is defined in terms of the inclination to emotional reactivity, instability, perceived anxiety, and high vulnerability when coping with stress [33,48,49]. Individuals who are high in neuroticism are excitable, easily upset, and prone to unpleasant experiences [50].
They are also more sensitive to criticism, experience higher levels of rejection, and have lower self-esteem [51]. In prior research, neuroticism has been connected to an increased drive for thinness in women [52,53], to heightened food and body preoccupation [54], to body dissatisfaction [55], to the self-regulation of eating attitudes (e.g., food temptation) [56], and even to eating disorder diagnosis [48,57] and binge eating [58,59]. According to Fischer, Schreyer, Coughlin, Redgrave, and Guarda [52], the facets of neuroticism, including irritability and difficulty with emotional regulation, are risk factors for developing an ED. Moreover, disordered eating is associated with neuroticism because it can serve as a coping mechanism with which neurotic individuals deal with negative feelings [58,60]. In this study, we examine neuroticism as a risk factor for increased internalization, which can lead to a stronger drive for thinness. This link was proposed by Scoffier-Mériaux et al. [56], who hypothesized internalization as a mediator between neuroticism and unhealthy dieting behavior. The model was subsequently tested by Martin and Racine [49], who examined the mediating roles of thin-ideal and athletic-ideal internalization in the associations among neuroticism, body dissatisfaction, and compulsive exercise. Using a sample of 531 college students (58% women) aged 18-44, they found that thin-ideal internalization mediated the link between neuroticism and body dissatisfaction, and the internalization of athletic ideals mediated the effect of neuroticism on compulsive exercise. Moreover, several prior studies have found that neuroticism is associated with higher internalization [49,50,56,61]. To explain this link, Roberts and Good [50] suggest that women with increased neuroticism compare themselves to attractive people, and this comparison is more likely to result in negativity due to their emotional lability.
This negative affect, which arises from the incongruity between the internalized body ideal and the actual body shape, can result in an increased drive for thinness, as has been proposed by previous studies [52,53]. Therefore, we hypothesize that internalization may be a mechanism through which neuroticism is positively linked to the drive for thinness in women. --- Research Goals This study focuses on the drive for thinness, which is considered a risk for women's well-being. It aims to enhance our understanding of the risk factors that contribute to its development, specifically with regard to the influence of media and the role of individual factors in young women. Previous studies have shown that the media can have a negative effect on women because exposure to its content is associated with the desire to have a thin body shape [1,5,9]. However, these studies mainly investigated traditional media (i.e., TV, magazines) and pro-eating-disorder websites. There is a lack of research on health-oriented websites, which are currently popular. These websites display content that is associated with the drive for thinness, body dissatisfaction, and eating disturbances [4,5,23,24]. Therefore, our aim is to fill this gap and bring more insight into the association between visiting health-related websites and the drive for thinness among women. Furthermore, our study aims to enrich the Tripartite Influence Model [3], the theoretical framework that explains eating disturbances through socio-cultural factors, by incorporating neuroticism and perceived social support as individual factors. Specifically, we test whether web content internalization mediates the effects of these factors. We propose that increased neuroticism and perceived online social support positively affect web content internalization, which in turn affects the drive for thinness.
Considering that disordered eating can be related to age and Body Mass Index (BMI) [62][63][64], we also control for both of these factors. --- Materials and Methods --- Study Sample This study uses data from a project which focused on visitors of websites oriented toward nutrition, weight loss, and exercise. The data were collected through an online survey between May and October 2016. Participants were recruited with an invitation on 65 Czech websites, web magazines, social networking sites, blogs, and discussion forums that focused on weight loss, diet, eating habits, and exercise. The original sample comprised 1002 respondents (81.6% women, aged 13 to 62, M = 24.8, SD = 6.9). The project was approved by the Research Ethics Committee of the University. The current study focuses on a subsample of 445 young adult women, aged 18 to 29 (M = 23.5, SD = 3.1). We focused on women because the ideal of thinness is aimed mainly at women [1] and the drive for thinness and eating disorders are more prevalent in women [14]. Moreover, young adult women made up the major part of the health-oriented website visitors in the project, and we did not have sufficient data from participants of other ages and genders. The original sample of women in the age range from 18 to 29 comprised 632 participants. We excluded respondents based on their motivation for visiting health-oriented websites and because of missing data. We excluded women who reported that they visited the websites mainly because of the health of another person (as indicated by the question Do you visit the sites about nutrition or sports not for yourself, but mainly because you want to help with the nutrition or sport of another person (partner, child, parent, etc.)?) (N = 37).
In addition, participants with a substantial number of missing values for the key variables (N = 150) were excluded; there were no significant age differences between our sample and the excluded respondents (t = 0.37, p = 0.71). --- Measures --- Perceived Online Social Support Perceived online social support was assessed using three items adapted from Graham, Papandonatos, Kang, Moreno, and Abrams [65]: I get advice and support here that I would not get elsewhere; It is encouraging to know that there are other people making similar efforts (with regard to nutrition or sport); and I feel that other visitors (or authors) of sites are giving me support, with answers that ranged from 1 = Definitely does not apply to 4 = Definitely applies. A higher score indicated higher perceived support. The internal consistency was acceptable (α = 0.72, M = 2.8, SD = 0.7). --- Neuroticism We measured neuroticism with three items from the short 15-item Big Five Inventory [66]. The items were I worry a lot; I get nervous easily; and I remain calm in tense situations (reverse scored). Participants answered on a six-point scale that ranged from 1 = Does not apply to 6 = Definitely applies. A higher score indicated higher neuroticism. The internal consistency was acceptable (α = 0.67, M = 3.7, SD = 1.1). --- Web Content Internalization Internalization was measured using the question "To what extent do the following statements apply to you in regards to these sites?" with three items that were adapted from Cusumano and Thompson [67]: I compare my appearance with people on these sites; I try to look like the people on these sites; and The content on these sites inspires me in how to look attractive. Participants answered on a six-point scale that ranged from 1 = Does not apply to 6 = Definitely applies. A higher score indicated higher web content internalization. The internal consistency was satisfactory (α = 0.81, M = 2.4, SD = 0.8).
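Internal-consistency coefficients like those reported above (Cronbach's α) can be computed directly from an item-response matrix. A minimal sketch in Python follows; the formula is the standard one, but the response matrix is illustrative toy data, not the study's:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy 4-point responses to three support-like items (illustrative only).
scores = np.array([
    [3, 4, 3],
    [2, 2, 1],
    [4, 4, 4],
    [1, 2, 2],
    [3, 3, 4],
])
alpha = cronbach_alpha(scores)  # ≈ 0.917 for this toy matrix
```

For perfectly correlated items the coefficient reaches 1; values around 0.7, as for the support and neuroticism scales above, are conventionally read as acceptable.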
--- Drive for Thinness The Drive for Thinness subscale from the Eating Disorder Inventory-3 [68] was used. The scale consisted of seven items (e.g., I feel extremely guilty after overeating; I am preoccupied with the desire to be thinner). Participants responded on a six-point scale that ranged from 1 = Never to 6 = Always. A higher score indicated a higher drive for thinness. The internal consistency was satisfactory (α = 0.86, M = 3.4, SD = 1.2). The latent variable was constructed with the parceling approach; specifically, we made three parcels, combining low-loading and high-loading items [69]. Parcels were computed as the mean of their items. --- BMI Participants provided information about their current weight (in kilograms) and height (in centimeters). BMI was computed as weight (kg) / height (m)². --- Results We examined the correlations among the variables (Table 1): perceived online social support, neuroticism, web content internalization, and the drive for thinness. The results were as expected: the drive for thinness was positively correlated with online social support (r = 0.11, p = 0.03), web content internalization (r = 0.51, p < 0.001), and neuroticism (r = 0.23, p < 0.001). Web content internalization was positively associated with online social support (r = 0.24, p < 0.001) and neuroticism (r = 0.16, p < 0.001). Additionally, the drive for thinness was positively associated with BMI (r = 0.20, p < 0.001), but not with age (r = 0.02, p = 0.67). To test our presumptions, Structural Equation Modeling (SEM) was used with a Robust Maximum Likelihood (MLR) estimator. We used R software with the lavaan, semTools, and semPlot packages. We tested a model with indirect effects, predicting the drive for thinness. We included neuroticism and online social support as predictors, web content internalization as a mediator of the effects of neuroticism and social support, and age and BMI as controls.
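Two of the derived measures above, BMI and the item parcels, are simple transformations of the raw data. A brief sketch (the function names and the particular parcel grouping are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def bmi(weight_kg, height_cm):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

def make_parcels(items, groups):
    """Average item columns into parcels; each entry of `groups` lists the
    column indices pooled into one parcel (e.g. pairing low- and
    high-loading items, as in the study's parceling approach)."""
    items = np.asarray(items, dtype=float)
    return np.column_stack([items[:, g].mean(axis=1) for g in groups])

value = bmi(60.0, 165.0)  # 60 / 1.65**2 ≈ 22.0

# Seven drive-for-thinness items reduced to three parcels
# (this particular grouping is made up for illustration).
items = np.random.default_rng(0).integers(1, 7, size=(10, 7))
parcels = make_parcels(items, [[0, 6], [1, 5], [2, 3, 4]])  # shape (10, 3)
```

Parceling keeps each parcel on the original 1-6 response metric while reducing the number of indicators per latent variable.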
The model had an acceptable fit, CFI = 0.98, TLI = 0.97, RMSEA = 0.04. Results are displayed in Figure 1 and Table 2. Perceived online social support from health-oriented websites predicted web content internalization (β = 0.28, p < 0.001). Perceived online social support did not have a strong direct effect on the drive for thinness; the effect was weak and marginally significant (β = -0.11, p = 0.06; CI = -0.61; 0.01). Moreover, we found a significant indirect effect of online social support on the drive for thinness via web content internalization (β = 0.16, p = 0.001). Neuroticism predicted web content internalization (β = 0.24, p < 0.001), and it had a direct effect on the drive for thinness (β = 0.14, p = 0.01). Moreover, we found a significant indirect effect of neuroticism on the drive for thinness through web content internalization (β = 0.14, p < 0.001). Therefore, the link between neuroticism and the drive for thinness was partially mediated by web content internalization. Regarding controls, BMI positively predicted the drive for thinness (β = 0.17, p = 0.001), but there was no association between age and the drive for thinness (β = 0.02, p = 0.60). --- Discussion In our study, we examined the factors associated with the drive for thinness in young adult women who visited websites oriented toward weight loss, nutrition, and exercise. Specifically, we investigated perceived online social support from other website visitors, neuroticism, and the web content internalization of body appearance standards, and their direct and indirect effects on the drive for thinness. Our objective was to investigate whether web content internalization mediates the links among perceived online social support, neuroticism, and the drive for thinness. We found support for our presumption: both online support and neuroticism were positively linked with the tendency for internalization, which, in turn, increased the drive for thinness. In our data, we found a substantial connection between internalization and the drive for thinness.
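The indirect effects reported here come from a latent-variable SEM estimated in R with lavaan. As a rough observed-variable analogue, an indirect effect can be estimated as the product of two regression slopes, with a percentile bootstrap for its confidence interval. The sketch below uses simulated data with made-up path values, not the study's model or data:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b: a is the slope of the mediator
    on the predictor; b is the slope of the outcome on the mediator,
    controlling for the predictor (ordinary least squares)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(42)
n = 445                                      # matches the study's sample size
x = rng.normal(size=n)                       # predictor, e.g. neuroticism
m = 0.25 * x + rng.normal(size=n)            # mediator: internalization
y = 0.5 * m + 0.15 * x + rng.normal(size=n)  # outcome: drive for thinness

point = indirect_effect(x, m, y)             # close to 0.25 * 0.5 = 0.125

# Percentile bootstrap for a 95% CI on the indirect effect.
boots = [indirect_effect(x[i], m[i], y[i])
         for i in (rng.integers(0, n, n) for _ in range(1000))]
lo, hi = np.percentile(boots, [2.5, 97.5])
```

This observed-variable version ignores measurement error, which is why the study's latent-variable SEM with parcels is the more defensible estimator; the sketch only illustrates the logic of a mediated path.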
Our findings are in line with the Tripartite Influence Model [3,31-33], which suggests that body image concerns and eating disorders are affected by socio-cultural factors (e.g., media pressure, parental criticism, peer criticism), both directly and indirectly through the internalization of media-portrayed body ideals. Moreover, we enriched the propositions of the Tripartite Influence Model [3] by including individual factors. This line of research was recently developed in studies that focused on perfectionism, self-esteem, depression, and anxiety [30-34,70]. This focus helps to better understand the risk factors that strengthen the tendency for internalization. Specifically, we found that perceived support increased the drive for thinness via its reinforcement of internalization. Our findings correspond to knowledge regarding ED online groups, in which perceived support was connected to a higher sense of belonging and the acceptance of thin-ideal norms [20,22,39,71]. ED online groups and communities act as an important source of support that can be difficult to obtain elsewhere for individuals who struggle with EDs and body image concerns [39,41]. However, support received from these online groups can be detrimental to women's health because it endorses negative attitudes toward their bodies and promotes extremely thin body shapes as attainable standards. Haas et al. [72] examined the social support on pro-anorexia websites and discovered that visitors received support for eating restraint and reinforcement for their negative views of themselves and their bodies. Sowles et al. [73] pointed out that members of the pro-ED online community disseminate images that depict thin body shapes and promote the thin ideal by labeling them as their desired goals.
Similar findings emerged from a study by Marcus [40], who found that members of a pro-anorexic community shared photos of extremely thin bodies to motivate users to maintain their diets and to outline the beauty standards of the group. In this manner, women are encouraged to adopt body appearance standards that lead to a desire for a thin body. The findings of our study suggest that these processes apply not only to ED online groups, but to health-related websites as well. Health-oriented websites, with their opportunities for social interaction (e.g., discussions with other users about specific health-related topics, personal messages, inspiration, sharing experiences, memories, and feelings), enable visitors to receive social support. The perceived social support is associated with the acceptance of group norms due to the higher subjective salience of the social group to which the individuals belong [40,44,47]. In line with Social Identity Theory [44], stronger identification with a group results in the acceptance of group norms; in the case of websites focused on nutrition and fitness, these norms probably support thin and fitness-oriented images of the ideal body. Thus, though perceived support is often seen as a positive aspect of online interaction, in these instances it may result in negative outcomes. However, when interpreting these results, the limitations of this study should be taken into consideration. Due to the correlational nature of the data used, it was not possible to draw causal conclusions. Thus, the association between online social support and the drive for thinness may work in the opposite direction, meaning that women with a stronger drive for thinness may more often seek social support for their goals and efforts in the online space and, specifically, via health-oriented websites. Moreover, this finding should also be compared to the results for the direct effect of support on the drive for thinness.
This effect was rather weak and only marginally significant; however, it may indicate that the role of support is diverse. If we disentangle the indirect effect, which positively affects the drive for thinness, from the direct effect, we find that support negatively affected the drive for thinness. To interpret this finding, we should acknowledge that perceived support helps to increase overall well-being [74][75][76], which decreases the tendency for unhealthy and disordered eating habits [77,78]. Thus, perceived online social support can actually function as both a risk and a protective factor. On one hand, it may contribute to the development of the drive for thinness via increased internalization. On the other hand, it may also serve as a buffer against this negative effect, probably via an increase in overall well-being, which was not included in this study. This presumption could be pursued in future examinations. Thus, we still need to consider other factors which underlie the internalization of the web content. Our study focused on neuroticism, which was shown to be positively linked to the drive for thinness and also had an indirect effect via internalization. Therefore, the effect of neuroticism on women's drive for thinness was partially mediated by the internalization of the body appearance standards displayed on health-oriented websites. In line with prior studies [49,50,52,53,56,61], our findings showed that people with heightened neuroticism are more prone to accepting these norms and, probably because of an increased tendency for social comparison, are more likely to strive to be thin. However, besides the mediated effect, we also found a direct positive link to the drive for thinness. This suggests that increased internalization is not the only mechanism through which people with neurotic traits can be more at risk.
However, considering that we found support for the tendency for heightened internalization from the websites, and based on the propositions of the Tripartite Influence Model [3], we could expect a similar mechanism in relation to parental and peer norms, which were not measured in this study. This poses one of the limitations of our study. Concerning other limitations, it should be stressed that we used cross-sectional correlational data based on a sample that was self-selected through health-oriented websites. Thus, though we examined the proposed model for the mechanisms that increase the drive for thinness, the research design complicates drawing causal conclusions. Future research should implement a longitudinal design to make more reliable causal conclusions and to capture potential reciprocal associations. Moreover, we were not able to control for the effects of additional variables on the drive for thinness. These are factors (e.g., body dissatisfaction) [79] that are related to the drive for thinness and disordered eating, and it would be appropriate to control for their effects to obtain more accurate results. Furthermore, we do not have information about the specific content that respondents encountered. It would be useful to incorporate objective measures and directly observe the effects of participants' exposure to online content. Finally, although the thin ideal displayed in the media and the related drive for thinness are more prominent in women [1], future research could focus on men, their internalization of body appearance norms, and their motivations for body change. In the current study, our aim was to propose a model that comprises the individual factors that affect women's drive for thinness. Based on our findings, we can formulate several implications.
According to the theory and the available data, we propose the following processes: online social support from the visitors of health-oriented websites and neuroticism affect the drive for thinness, and these links are mediated by the internalization of body appearance standards. Thus, in line with previous research in this area [1,8,15,35,36], our study supported the predictive role of internalization in the drive for thinness among women. Specifically, our study provided insight into the internalization of the content of health-oriented websites, which had not been sufficiently investigated and had not been taken into account in relation to women's drive for thinness. Our results imply that it is crucial to acknowledge health-oriented websites and their potential impact on women, especially in the context of the internalization of body appearance norms. Health-oriented websites, which are not generally acknowledged as harmful to women's body image, can be a significant source of body appearance norms and subsequent body image concerns [18]. As the current study found, women internalize body ideals from health-oriented websites, and this, in turn, increases their drive for thinness. This connection should be actively acknowledged by health-care professionals. It is important for professionals to ask their clients who have ED-related problems about their technology usage and to provide them with space to talk about it [80]. Thus, in the context of the current study, health-care professionals should discuss with clients who are struggling with EDs their usage of health-oriented websites, specifically with a focus on their exposure to thin- or fitness-ideal content. We also showed that both online support and neuroticism present risk factors because they can increase the tendency for internalization and, in turn, increase the drive for thinness.
Therefore, it is important to be aware of the possible negative effect that online social support may have on women and to address it when preventing or reducing the drive for thinness. However, the findings of this study showed that online social support can function both as a risk and a protective factor. Thus, when discussing the use of health-oriented websites with ED clients, it is important to disentangle the different forms of social support that women receive from the visitors of these platforms. In addition, neurotic individuals experience higher levels of negative emotions and stress, which makes them more susceptible to these risks [52,53]. Based on our results, we suggest that preventive health programs, interventions, individual psychotherapy, counseling, and other health policies could focus on reducing negative emotions and stress in women. This could also help reduce the internalization of the body appearance standards promoted on health-oriented websites. --- Conclusions This study focused on the factors associated with the drive for thinness in young adult women who visited weight loss, nutrition, and exercise websites. These platforms are currently widely used, yet they have not been sufficiently studied in relation to eating disturbances. We examined the direct and indirect effects of perceived online social support from fellow website visitors, neuroticism, and the web content internalization of body appearance standards on the drive for thinness. Our findings supported the predictive role of web content internalization in the drive for thinness in women. Moreover, we showed that perceived online support from health-oriented websites and neuroticism can pose risk factors because they are associated with a higher tendency for internalization and, in turn, with a stronger drive for thinness.
Our results indicate that it is crucial to acknowledge health-oriented websites and their potential impact on women and their drive for thinness, especially in the context of the internalization of body appearance standards. We also discuss the dual role of social support as both a risk and a protective factor. Our findings can be used to establish prevention and intervention efforts to help individuals who struggle with body image and eating disturbances. --- Author Contributions: Writing-original draft preparation, N.K.; writing-review and editing, N.K., H.M. and D.S.; supervision, H.M. and D.S.; project administration, H.M. and D.S.; formal analysis, N.K. All authors have read and agreed to the published version of the manuscript. --- Conflicts of Interest: The authors declare no conflict of interest. --- Abstract One of the debates about media usage concerns the potential harmful effect that it has on body image and related eating disturbances because of its representations of the "ideal body". This study focuses on the drive for thinness among the visitors of various health-oriented websites and online platforms, because neither has yet been sufficiently studied in this context. Specifically, this study aims to bring more insight into the risk factors which can increase the drive for thinness in the users of these websites. We tested the presumption that web content internalization is a key factor in this process, and we considered the effects of selected individual factors, specifically perceived online social support and neuroticism. We utilized survey data from 445 Czech women (aged 18-29, M = 23.5, SD = 3.1) who visited nutrition, weight loss, and exercise websites. The results showed a positive indirect link from both perceived online social support and neuroticism to the drive for thinness via web content internalization. The results are discussed with regard to the dual role of online support as both a risk and a protective factor.
Moreover, we consider the practical implications for eating behavior and weight-related problems with regard to prevention and intervention.
INTRODUCTION Today, tourism has been named the fastest growing economic sector and the largest income generator, relied upon as the spearhead of the economies of various countries in the world (Azizi & Shekari, 2018; Nagarjuna, 2015; Ma'ruf, Handayani, & Ummudiyah, 2013; Phanumat, et al. 2015; Pongponrat & Chantradian, 2012). Tourism is able to be a driving generator for the growth of other industrial sectors such as hospitality, communication and transportation, trade, souvenirs, and culinary. Tourism also acts as a reactor for development in various regions through the provision of jobs, income from foreign exchange, strategic markets for potential local products, support for equitable distribution of infrastructure development, and improvement of quality of life in various regions (Guo, et al. 2018; Ma'ruf, Handayani, & Ummudiyah, 2013; Moscardo, et al. 2017; Pramusita & Sarinastiti, 2018; Thetsane, 2019). --- UNNES JOURNALS An ideal tourism is one that is able to synergize the three core stakeholders of tourism to move together: society, the government, and the private sector. Among the three stakeholders, the community has enormous urgency and is expected to contribute to the development of tourism. The community is the party that owns tourism resources in the form of Attraction, which includes the aspects of something to see, something to do, and something to buy, Amenities, and Accessibility (3A) (Aref, Gill, & Aref, 2010). The community, with all its socio-cultural aspects, is also a tourist attraction and has a major contribution in realizing Sapta Pesona Wisata (Giampiccoli & Saayman, 2018). Reflecting on the conditions above, the community should not only be placed as a tourist attraction, but also be empowered as a tourism subject through involvement in all stages and dimensions of tourism development (Aref, 2011; Azizi & Shekari, 2018; Birendra, et al. 2018).
The community, as the owner of various tourism resources, should not be colonized in their own country because the developed tourism is still controlled by exploitative and capitalist capital owners (Sidiq & Resnawaty, 2019). The community is also a party that directly or indirectly feels the positive and negative impacts of tourism, so community participation is crucial in order to ensure the sustainability of tourism and economic resources (Adikampana, Sunarta, & Pujani, 2019; Salleh, et al. 2016). To realize tourism that is driven from the community, by the community, and for the community, strong social capital is very much needed, as it can increase community cohesiveness. Social capital is a collection of actual and potential resources related to ownership of a long-lasting network of mutually beneficial interaction relationships that are institutionalized and are formed from norms and beliefs (Dickinson et al. 2017; Guo et al. 2018; Zhao, Ritchie, & Echtner, 2011; Macbeth, Carson, and Northcote, 2004). Social capital is also defined as a set of informal values and norms that are shared between community members and that support cooperation between them. Social capital is a factor that connects community members and can promote efficient coordination and cohesiveness between communities in tourism development (Azizi & Shekari, 2018). Social capital relates to values and norms, goodwill, trust, networks (fellowship), cooperation, social relations, and empathy among individuals who are able to form and drive a social unit (McGehee et al. 2010; Moscardo, et al. 2017). High social capital in a tourist destination community goes hand in hand with high community welfare, so improving the quality and strength of social capital is the main key to strengthening community-based tourism (Pramanik, Ingkadijaya, & Achmadi, 2019). Therefore, a comprehensive study of the strength of social capital in tourism destination communities has enormous urgency.
This is undeniable because knowledge of the strength of social capital is a crucial reference for the community and stakeholders in planning and evaluating tourism development in order to achieve public welfare. Several studies have proven the strength of social capital as the main mechanism that encourages and attracts people to participate and move together in reviving tourism in their area. These include research by Pongponrat and Chantradian (2012), Borlido and Coromina (2018), Kusuma, Satria, and Manzilati (2017), Puspitaningrum and Lubis (2018), Moscardo, et al. (2013), Birendra, et al. (2018), Kencana and Mertha (2014), Baksh, et al. (2013), and Musavengane and Mutikiti (2015). Social capital also contributes to realizing sustainable tourism, as described in the studies of Liu et al. (2014), Ma'ruf, Handayani, and Ummudiyah (2013), Sunkar, Meilani, and Muntasib (2018), and Oktadiyani, Muntasib, and Sunkar (2013). Based on these studies, social capital has a crucial role in the success of tourism development in various regions. One tourism area with great urgency for strengthening social capital is the Karimunjawa area. This condition is motivated by the status of Karimunjawa as a leading tourism destination with abundant natural potential that is excellent for marine tourism lovers (Laksono & Mussadun, 2014; Qodriyatun, 2018) and also as a National Park area that must be preserved according to the Decree of the Minister of Forestry and Plantation No. 78/Kpts-II/1999. Therefore, tourism development in Karimunjawa should be directed towards sustainable community-based tourism development (Thelisa, Budiarsa, and Widiastuti, 2018), so strengthening social capital is very necessary.
This is also motivated by the demographic and socio-cultural conditions of the Karimunjawa community, a multicultural area consisting of Javanese, Madurese, Bugis, Bajau, and other ethnic groups (Central Statistics Bureau of Jepara Regency, 2018). This study intended to analyze the integration of social capital in the development of sustainable marine tourism to improve the economic strength of the Karimunjawa community.
--- METHOD
This research was conducted in the Karimunjawa National Park Area using a qualitative approach. The qualitative approach was chosen because the purpose of the research is to understand in depth the phenomenon of social interaction, beyond formal institutions, that occurs in Karimunjawa society. Qualitative research is not only able to describe the surface of a large sample in a population, but is also able to explore a deep understanding of organizations or special events, so that it can capture the meaning of each actor's perceptions, attitudes, and actions in the field (Denzin & Lincoln, 1994). This research uses primary and secondary data sources. Primary data were obtained through field studies, while secondary data were obtained through a review of written literature, including scientific journal articles, books, archival documents, statistical data, and data from the Karimunjawa National Park Office. The research subjects consisted of the general public and tourism practitioners in Karimunjawa, the management of the Karimunjawa National Park Office, tourists, the government, and NGO tourism stakeholders in Karimunjawa. The research sample was determined by the snowball sampling method. Data were collected through in-depth interviews, participatory observation, and documentation. The data analysis was carried out using the interactive method of Miles and Huberman (1992: 16-19).
This analysis method consists of three main stages, namely data reduction, data presentation, and conclusion drawing. Interpretation is then carried out by explaining the phenomena found and looking for the relationships between them in the field.
--- RESULT AND DISCUSSION
--- The Overview of the Karimunjawa Region
The Karimunjawa Islands are located to the northwest of the capital of Jepara Regency, separated by a stretch of the Java Sea. The average elevation of land in the Karimunjawa Islands is between 10-100 meters above sea level. The distance from Karimunjawa Subdistrict to the capital of Jepara Regency is 90 km. Astronomically, Karimunjawa Subdistrict is located between 5°49'9" and 5°81'9" south latitude and between 110°27'32" and 110°45'89" east longitude. The Karimunjawa Islands consist of 27 islands, all of which are part of the Karimunjawa National Park area. Administratively, they are part of Karimunjawa Subdistrict, Jepara Regency, Central Java Province.
--- The Tourism Conditions in Karimunjawa
Tourism activities in Karimunjawa have been running since 2006. The number of tourists visiting Karimunjawa is linear with the increasing popularity of Karimunjawa as a tourist destination offering amazing natural marine beauty. Tourism activities in Karimunjawa consist of land tours, carried out by exploring tourist destinations on land and along the Karimunjawa coastline, and sea tours in the form of snorkeling and crossing the ocean to the small islands of the Karimunjawa archipelago. Over the years, tourism in Karimunjawa has been moving steadily towards progress. This condition is supported by increasing public awareness of and participation in reviving tourism activities in Karimunjawa.
The supporting facilities from the government, especially the electricity network, have greatly impacted the progress of tourism in Karimunjawa, coupled with the increasing sea-crossing transportation services that facilitate accessibility to Karimunjawa. The more advanced the tourism, the greater the number of tourists coming to Karimunjawa, as can be seen in Table 2. Tourism activities in Karimunjawa run and develop through tourism groups in the region. The tourism actors consist of the government, the private sector, Non-Governmental Organizations (NGOs), and, with the largest contribution, the local community, which supports the running of tourism in Karimunjawa. Tourism actors in Karimunjawa are organized into communities formed at the initiative of people aware of the tourism potential.
--- Social Capital in Tourism
Social capital is a network of cooperation in society that can act as a lubricant facilitating collective action in achieving goals (Azizi & Shekari, 2018; Dickinson, et al. 2017). Social capital has several elements as its main foundation, which differ from one expert to another. However, the majority of experts agree that the elements of social capital generally consist of trust, norms, and networks (Borlido & Coromina, 2014; Fathy, 2019; Liu et al. 2014; Oktadiyani, Muntasib, & Sunkar, 2013; Sunkar, Meilani, & Muntasib, 2018). In addition to these three elements, other experts include the elements of reciprocity and cooperation (McGehee et al., 2015; Park, et al., 2012), social interaction and collective action (Giron & Vanneste, 2019; Moscardo, et al., 2013), and empathy and tolerance (Macbeth, Carson, & Northcote, 2004).
From these descriptions, we can draw a common thread: the elements of social capital consist of three main foundations, namely networks as input, norms and trust as both input and output, and collective actions as output. These three elements form a cyclical process in which they are interconnected and influence each other. Strengthening social capital in tourism development can be initiated through an analysis of the actual potential of social capital in a tourist destination community. In this study, the actual potential of social capital in Karimunjawa was analyzed using the framework of Giron & Vanneste (2019). To assess social capital, Giron & Vanneste (2019) combine two factor domains: the first focuses on the key dimensions (elements) of social capital seen as dynamic processes, and the second focuses on the level of social capital coverage in the structure of a tourist destination. The first factor contains three key dimensions consisting of networks, norms and trust, and collective action. These three elements are used as the foundation of the framework for analyzing the dynamic process of social capital (Giron & Vanneste, 2019). As a dynamic process, the three elements of social capital form an integrated system of specific, interdependent functions.
In the second factor, with its focus on the level of social capital coverage in the structure of tourist destinations, some experts agree that the levels of social capital consist of Bonding Social Capital and Bridging Social Capital (Macbeth, Carson, & Northcote, 2004; McGehee, et al., 2015; Moscardo, et al., 2017), while other experts add a third level, namely Linking Social Capital (Abdullah, 2013; Arianto & Fitriana, 2013; Fathy, 2019; Giron & Vanneste, 2019; Musavengane & Mutikiti, 2015). Bonding Social Capital emphasizes horizontal social ties within a group, Bridging Social Capital emphasizes horizontal social ties with new groups or actors, while Linking Social Capital emphasizes vertical social ties with groups that have power or control over key resources (Giron & Vanneste, 2019). The two factors of social capital are then combined to obtain a more organized and connected method of social capital assessment. With this combination, we can analyze the actors involved and the supporters of and obstacles to their relationships at various levels in the tourist area. The combination provides a platform to reflect on how to increase the collective capacity of tourist destinations. To give a more detailed picture of the framework for assessing social capital, Figure 1 is provided.
--- The Dimensions of Social Capital in Tourism in Karimunjawa
--- Network
The form of social networks in tourism in Karimunjawa is the existence of tourism actors who are organized and interdependent with each other. In Karimunjawa, there are several patterns and levels of networks formed in tourism activities. The smallest network pattern is the family or kinship network. This pattern integrates individuals in society who share blood relations.
This network is neither official nor formal, but it is strongly connective and able to strengthen cooperation between individuals because it is based on conscience and shared blood relations. Such a network can be formed by the desire of family members to make ends meet by recognizing the tourism potential in Karimunjawa and then moving together in a family tourism business. The next network pattern takes the form of neighbor networks and networks of close friends. This pattern integrates individuals who live close to each other or who have long-standing friendships. It was born and formed because the territory of Karimunjawa is not too broad and the population is not too dense, making it easier for individuals to get to know, interact with, and cooperate with one another. As with kinship networks, this network is informal and can develop dynamically. The next network pattern is formed by the initiation of a group of people with similar interests and shared goals. This network is more formal than family, friend, and neighbor networks. It takes the form of associations, communities, or groups of tourism actors in Karimunjawa, and can operate in the economic, social, educational, cultural, or environmental fields, with members drawn from the general public, Non-Governmental Organizations (NGOs), or the government.
Networks with an interest in activating tourism in the economic sector in Karimunjawa include: 1) the homestay owners' community, 2) the ship owners' community, 3) tour package sellers or travel agents (tourism bureaus), 4) motorcycle rental owners, 5) the car rental and shuttle service group (Karimun Trans), 6) the souvenir and culinary merchants' association, 7) the merchants' association, 8) the airport car pickup group (Kemojan), 9) the Indonesian Tour Guide Association (HPI) as tour guides, 10) Karimunjawa typical souvenir entrepreneurs (Pawon Nyamplungan), and 11) snorkeling equipment rental entrepreneurs. Those engaged in the socio-cultural sector include the dance group in Kemojan Village and the arts group in Karimunjawa. These networks were born and developed in the midst of the community: from the community, by the community, and for the community. In addition, there are social networks in the form of community groups engaged in the environment, including the Pitulikur Pulo Karimunjawa Foundation, the Karimunjawa Community Forestry Partners (MMP), the Karimunjawa Supervisory Group (Pokmaswas), and the Segoro Karimunjawa Society. There is also the Wildlife Conservation Society (WCS), a non-governmental organization active in several areas with the main mission of educating communities to preserve the environment, one of which is Karimunjawa. At the government level, social networks are also formed at the national and village levels. The government as a tourism stakeholder consists of the Jepara Regency Tourism Office, the Central Java Province Tourism Office, the Karimunjawa National Park Office, the Transportation Office, the Karimunjawa District Government, the Karimunjawa and Kemojan Village Governments, as well as the government institutions under them.
In addition to the government, networks are also formed between the public and the private sector, namely with various entrepreneurs who have tourism businesses in Karimunjawa, for example small island managers, travel agents from outside Karimunjawa, resort owners, and the managers of several tourist attractions in Karimunjawa.
--- Norms and Trust
Norms and trust have a crucial role in strengthening social capital. Among the norms that still flow strongly through the pulse of the people of Karimunjawa are friendliness and mutual harmony among the community. Friendliness and hospitality are important assets that can integrate the multicultural Karimunjawa community. In fact, the shared experience of being newcomers in Karimunjawa has helped increase harmony between communities. This attitude encourages people to get to know each other and establish intense interactions, giving birth to feelings of solidarity. The friendly and harmonious attitude is applied not only to fellow Karimunjawa people, but also to all tourists who come to Karimunjawa, building a sense of comfort in them. The second norm relates to familial attitudes and brotherhood between communities. This value of kinship is closely tied to the condition of the Karimunjawa region, a remote area that is not too large and not too densely populated. As a result, the majority of Karimunjawa people descend from the same ancestors and thus share blood relations (kinship). Coupled with the process of amalgamation between Karimunjawa communities of various backgrounds, these familial ties have become wider and stronger. This sense of kinship then facilitates the creation of social networks in the form of family business unions that drive tourism in Karimunjawa.
The next norm that underpins the strengthening of social capital in the Karimunjawa community is the concept of sharing in everyday life. Sharing attitudes are implemented not only in the realm of the family, but also in the life of the wider community. The development of this sense of sharing is effective in strengthening cooperation between communities because it is driven by the expectation of reciprocity in the future. A mutual attitude can be a lubricant for inter-community cooperation in tourism activities, realized in the form of business partnerships, work teams, and employment relationships. These social norms are a pillar of the solid social capital in Karimunjawa, reflected in the attitude of relief among the people of Karimunjawa, who can accept whatever happens to them or whatever they obtain sincerely and willingly. The relief attitude implemented by the community is based on the belief that the task of humans is to strive with all their strength, and that everyone's fortune has been guaranteed by God Almighty. These values of relief move the community to continue helping each other and cooperating sincerely. They motivate the community to maintain good relations with business partners or superiors, for example in profit sharing in business and the provision of salaries at work. The values of relief also enable people to strike a balance between cooperation and competition in business. This again rests on the community's belief in the guarantee of fortune by God Almighty. In this way, the community can work without stereotyping business rivals, who can instead become business partners in building broader relations. The values of relief also encourage people to be open-minded and open-hearted, willing to share and cooperate with others. Religious values remain rooted in the hearts of the people of Karimunjawa.
These values of religiosity also encourage people to always maintain good relations with others, contributing to community solidarity and cooperation. Religious values also build the community's shared perception of realizing Karimunjawa tourism that prioritizes the preservation of cultural values and local wisdom. The next social norm is the value of love for the hometown. This value moves the community to unite and work together to advance tourism in Karimunjawa. Although Karimunjawa consists of people from various regional and ethnic backgrounds, love for Karimunjawa is very high, based on the shared fate of being migrants. A sense of love and pride in the hometown allows the community to work together to solve the various problems they face. It can also prevent intervention by irresponsible tourism actors from outside Karimunjawa, so that tourism remains controlled by the local community. The feeling of love for the hometown is also a driving force for environmental care in the community, realized through the various groups engaged in environmental conservation in Karimunjawa. The value of environmental awareness rests on the awareness of people whose lives depend on nature: the sea and beaches as the main base for marine tourism, and the terrestrial environment as the location of daily life. The community realizes that the sea is their field of fortune, so they must maintain its harmony to ensure the sustainability of their economic life. The strong norms in the midst of society are also supported by mutual trust between the people of Karimunjawa. At the local community level, trust becomes the glue of effective relationships between communities, strengthening their collaboration.
Mutual trust between communities amplifies solidarity and perpetuates cooperative relationships in the tourism business circles of Karimunjawa, between business rivals, between business partners, and between superiors and subordinates. This condition is proven by the strength of the cooperative climate compared to the competitive climate among the community of Karimunjawa tourism actors.
--- Collective Actions
The social networks that are formed, and the norms and beliefs that are internalized and implemented by the people of Karimunjawa, then produce collective actions as outputs. Collective actions take the form of cooperation between individuals and groups of tourism actors in advancing tourism activities in Karimunjawa. In family or kinship networks, collective actions can be seen in the efforts of family members who work together to bring businesses to life as providers of homestays, vehicle rentals, and tour leader or travel agent services. In such family-wide businesses, family members coordinate with each other and have different job descriptions: for example, some handle the promotion of tourism services (sea and land tour packages, homestay rentals, vehicle rentals, or other services), while others have the duty of providing services in marine and land tour activities, whether related to transportation, accommodation, or tour guiding. In networks of neighbors and friends, collective actions are reflected in the efforts of individuals or groups of tourism actors to promote the services of friends or neighbors. They work together to promote homestay services, vehicle rentals, tour packages, boat rentals, merchandise, and tour guides. Besides being beneficial for promotion, this is also effective in spreading information on tourist services to tourists. Collective actions then occur within every community of tourism actors (intra-community).
The members of each association work together to achieve their goals, including in serving tourists in the field, where members cooperate to achieve tourist satisfaction. They also cooperate in promoting tourism services to get clients (tourists). Within these circles, the values of sharing are highly valued, so that members can mutually guarantee that other members also get jobs (clients/tourists). In each community, all members participate in planning, implementation, internal evaluation, problem solving, and policy-making activities, including pricing policies and operational standards for tourism services. In addition, every community has a cash system that each member must pay into for the purposes of group progress. The limited capacity of each community of tourism actors then encourages cooperation and realizes collective actions between communities. They work together in providing services to tourists because no single community can serve tours on its own. The communities are bound by a sense of mutual need and interdependence, enabling them to move together. This shared need is strengthened by the norms and trust that grow between the associations. In addition to tourism service activities in the field, collaboration between communities is also manifested in policy-making, for example policies on the system and operational standards for the administration and service of travel tours in Karimunjawa, pricing policies, and tourism promotion activities and cultural events.
As a manifestation of the values of relief, the people of Karimunjawa are also willing to collaborate with various entrepreneurs who come from outside Karimunjawa, such as small island managers in Karimunjawa, travel agents from outside Karimunjawa, resort owners, and the managers of several attractions in Karimunjawa. The community synergizes with entrepreneurs in improving services and tourist attractions in Karimunjawa. Not infrequently, these entrepreneurs also provide financial support for various cultural events in Karimunjawa to increase tourism promotion. However, in relations with outside entrepreneurs, many people still consider entrepreneurs with large capital to be heavy rivals, so stereotypes about entrepreneurs persist. This condition is motivated by people's dissatisfaction with entrepreneurs and the striking difference in social strata between them. The problem is triggered by the presence of several entrepreneurs who are less able to embrace and involve the community in running a tourism business, so that a harmonious relationship between outside entrepreneurs and the community has not been achieved. Nevertheless, so far the community can still go hand in hand with outside businessmen without conflict, because the community believes that fortune is guaranteed by God. In addition to fellow tourism actors, the associations of tourism actors also collaborate with various other groups in the environmental field, one of which is the Wildlife Conservation Society (WCS). In addition to conducting education, the Wildlife Conservation Society (WCS) often collaborates with tourism actors on environmental conservation activities in the Karimunjawa area, both at sea and on land. Concrete activities include cleaning the beach and the sea, planting trees, and others.
In these activities, they also collaborate with the environmental groups in Karimunjawa, namely the Pitulikur Pulo Karimunjawa Foundation, the Karimunjawa Community Forestry Partners (MMP), the Karimunjawa Community Monitoring Group (Pokmaswas), and the Segoro Karimunjawa Society. Collective actions are also manifested in collaboration between the Karimunjawa community and the government. The government, through the Karimunjawa National Park Office, works in synergy with the community and all tourism actors and environmental organizations to preserve the nature and environment of Karimunjawa. Besides the Karimunjawa National Park Office, the community also collaborates with the Karimunjawa District Government and the village governments, the Jepara Regency Tourism Office, the Central Java Province Tourism Office, and the Regional Planning and Development Agency (Bappeda). This synergy is manifested in tourism promotion activities in Karimunjawa, such as cultural events or festivals, in the form of government facility support for tourism activities, and in a number of training and outreach activities to increase the soft skills of tourism operators in Karimunjawa. However, so far the synergy between the community and the Tourism Office is still not optimal, due to the Tourism Office's limited role in providing assistance and infrastructure support for the tourism community. This problem has caused the community to have low trust in the Tourism Office. The community also has low trust in the Transportation Office, motivated by the still sub-optimal policies and services of the Transportation Office in providing crossing transportation for the public and for tourists.
This matters because the effectiveness of crossings is a crucial requirement for the mobility of people traveling to and from Karimunjawa, and it also has a big impact on increasing the number of tourists entering Karimunjawa.
--- The Level of Social Capital in Karimunjawa
The level of social capital formed in each tourist destination is certainly different. There are tourist destinations in which only one level of social capital is formed, but there are also destinations in which two or even three levels of social capital form at once. As a fairly complex tourist destination with various actors, Karimunjawa has formed three levels of social capital at the same time, and the three support each other. They consist of Bonding Social Capital, Bridging Social Capital, and Linking Social Capital. First, bonding social capital in Karimunjawa is formed by family or kinship networks and by the networks formed within the membership of the various groups of tourism actors in Karimunjawa (intra-community). In this bonding social capital, the values of kinship, brotherhood, and sharing are deeply held by the community. Each bonding network in Karimunjawa has an inward orientation with very high collective values, a relatively small membership, a shared background (the same family or the same work domain), and a basis in mechanical solidarity. Within these bonding networks, family members feel safe, facilitated, and highly cared for, as do the members of each community of tourism actors. Second, as a consequence of differences in potential, organic solidarity is born among the communities of tourism actors in Karimunjawa, forming strong bridging social capital. The bridging social capital formed in Karimunjawa is able to unite various groups of tourism actors to work together in providing services to tourists.
It is the collaboration between the communities of tourism actors that enlivens tourism in Karimunjawa, because of their complementary characteristics. For example, tour activities could not be held by boat-owner groups alone, without involving tour guides; the same applies to the other groups of tour operators. In addition, bridging social capital is also reflected in the relationship between the communities of tourism actors and all environmental activist groups, who share the same goal of preserving Karimunjawa's nature. Bridging social capital is also reflected in the synergy between the various groups of tourism actors and entrepreneurs from outside Karimunjawa. This synergy contributes to improving services and tourist attractions in Karimunjawa. However, the bridging social capital formed between the communities of tourism actors and entrepreneurs is not as strong as that between the communities of tourism actors themselves. This condition is motivated by the community's negative stigma towards entrepreneurs, who tend to be seen as exploitative and capitalist. The existence of bridging social capital allows each group to establish mutually beneficial relationships with various networks outside the group, encouraging individual progress within the group. Bridging social capital is based on a sense of togetherness, openness and relief, humanity, and pluralism. This bridging social capital is very relevant to be developed as a great power in reviving tourism in Karimunjawa. Third, linking social capital in Karimunjawa is formed through the synergy between the community and the government as tourism stakeholders in Karimunjawa. This synergy is formed between the tourism actors and the District Government, the Village Government, the Tourism Office, and the Regional Planning and Development Agency (Bappeda).
In this synergy, the government is positioned as activity planner and facilitator while the community is the executor. Collaboration in linking social capital can be seen in its strength in cultural activities as an effort to promote tourism in Karimunjawa. Linking social capital is also reflected in the synergy between the tourism community and environmental activist groups and the Karimunjawa National Park Office. Linking social capital represents a community effort to expand hierarchical relations with the government to gain access to power and resources in the policy-making process.
--- CONCLUSION
The strong social capital formed in Karimunjawa has a very big influence on tourism activities in Karimunjawa. The complexity of the partnership relations between the various tourism stakeholders in Karimunjawa, based on the strength of norms and trust between stakeholders, gives rise to the complexity of the social capital processes formed in Karimunjawa. The social capital formed in Karimunjawa consists of Bonding Social Capital, Bridging Social Capital, and Linking Social Capital. Of the three, bridging social capital makes a very large contribution to reviving tourism activities in Karimunjawa and is the type of social capital most relevant to develop as a major force in realizing the progress of tourism in Karimunjawa. In fact, however, the three types of social capital are complementary and mutually reinforcing, so they cannot be separated. Therefore, the community must continually increase mutual trust and strengthen values and norms so as to increase the strength of social capital among the people. The government, as authority holder, policy maker, and facilitator, must be more attentive to the various needs and constraints faced by the tourism community in Karimunjawa so as to create a good synergy between the community and the government.
--- ABSTRACT
This study intended to analyze how social capital works in the development of marine tourism in Karimunjawa, Indonesia. The research was conducted in Karimunjawa, with the sample determined by snowball sampling. The data collection methods consisted of participatory observation, in-depth interviews, and documentation, and the data obtained were analyzed using the interactive analysis method. The results indicate that the strong social capital formed in Karimunjawa has a very big influence on tourism activities there. Social capital in tourism in Karimunjawa is based on the many networks that have formed, supported by mutual trust, and still rooted in the various social values and norms in the community that underpin the strength of existing social capital. It results in collective actions in the form of synergy and cooperation between the community and the various tourism stakeholders in tourism activities in Karimunjawa. The social capital formed in Karimunjawa consists of three types: Bonding Social Capital, Bridging Social Capital, and Linking Social Capital, which are complementary and mutually reinforcing, so they cannot be separated. However, among the three, bridging social capital is the biggest power base in realizing the progress of tourism in Karimunjawa.
The goal of the study "Quality of Life and Well-Being of the Very Old in North Rhine-Westphalia NRW80+" is to provide a representative picture of quality of life (QoL) in the population of those 80 years or older [40]. This paper serves as an introduction to the thematic focus of this special issue, providing a basic understanding of the NRW80+ study and sample. All papers in this issue are based upon NRW80+ data. The aim of this introduction is twofold. First, a brief characterization of the targeted population is offered with respect to biographical background, historical context, and the age structure of today's very old individuals in NRW. Second, key aspects of the "Challenges and Potentials Model of Quality of Life in Very Old Age (CHAPO)" are discussed and the importance of stipulations of what constitutes the good life or successful life conduct is highlighted. --- S76 Zeitschrift für Gerontologie und Geriatrie • Suppl 2 • 2021 --- The population of very old individuals today There is no single agreed-upon definition of "very old age". In the NRW80+ study, the definition of very old age as a chronological age of 80 years or older has been chosen primarily for pragmatic reasons, as is often the case in population-based surveys [17,27,40]. It has been shown that from about 80 years onwards, the probability of a variety of age-associated changes such as health impairments increases. This has led to the well-known distinction between the resource-rich third age and the resource-poor fourth age [1,2]. Due to achievements in healthcare, social life, and technical advances, some scholars argue that today, people in their 60s or 70s no longer correspond to traditional understandings of old age. Rather, the fourth age appears to be the real age that bears strong resemblance to classical (negative) views on old age. Nevertheless, aging and old age can also be associated with positive aspects such as rich experience, accumulated knowledge, and serenity [25,33].
For a comparative overview of perspectives on the third and fourth age and the risks of such a distinction, see Wahl and Ehni [41]. Today, life beyond 80 years of age may span one or even two more decades for many individuals, making the very old a group that comprises a great number of diverse birth cohorts. It is paramount to understand differences in early socialization, education, and life experiences as potential determinants of QoL outcomes in very old age; however, a comparison of age groups within very old age is hampered by the quickly decreasing numbers of very old and oldest individuals in the population and a growing disproportionality of men and women, particularly in the oldest age groups. As a consequence, many empirical studies offer limited possibilities to differentiate age groups within very old age, even if they do not specify a maximum age for study participation [5]. In NRW80+, three groups of very old people were considered: 80-84 years (born 1933-1937), 85-89 years (born 1928-1932), and 90 years or older (born before 1927). Reference studies in the field of aging research (e.g., BASE, SHARE, the German Ageing Study) have shown that the group of older people is very heterogeneous with respect to, for example, functional status [22] or social engagement [20]. Such interindividual differences may be due to differences in life courses. It has been shown that earlier life experiences influence not only health but also QoL in later life [3,19,30]. People's life courses are influenced by societal factors such as political decisions and historical circumstances happening at a certain time and experienced at different points in the life course. For today's oldest old, one important historical event was the Second World War (WWII) and its consequences.
All NRW80+ age groups were socialized during times of National Socialism and war; however, participants aged 80-84 years and 85-89 years today were often young enough to be part of the Nazi evacuation scheme and may have participated as soldiers only towards the end of WWII. Older age groups were likely to have been more actively involved in war-related combat or consequences of the war in the home country. The post-war period was characterized by overcoming the traumas of the war period. The younger age groups may have been more influenced by the economic upswing and the worldviews of the Allied Forces. In general, the older age groups (85+ years) attained fewer years of education due to the war. A large percentage of this age group left school early, attaining lower secondary education at best, whereas individuals of the younger age group usually reached higher educational qualifications [24]. Consequently, men born around 1930 had difficulties finding apprentice positions or taking part in vocational training, often ending up in jobs without formal qualifications [4]. Moreover, the majority of women born around 1930 received no vocational training [21]. Beginning with the post-war period, the average age at marriage decreased until about 1970 and increased afterwards [11], and the average age of women at the birth of their first child increased in younger birth cohorts [16]. The number of children peaked for women born in 1933, with a decreasing trend across later birth cohorts (i.e., women born before 1966) [10]. After the end of WWII, many people immigrated to Germany, having to flee from other, mainly Eastern European, countries [28].
For comparison of age groups, one means of making sure enough individuals of a specific age are available for analysis in survey samples is to oversample rare individuals (e.g., older men); however, the small population number of individuals in oldest age effectively limits the degree of disproportionality that can be achieved in the actual sample, especially when the total sample size is large. Because sample size and selectivity preclude a fuller picture of the heterogeneity of conditions that exists in this age group, current studies offer only limited potential to discuss normative aspects of QoL in the oldest old. In comparison to other ageing studies in Germany, NRW80+ is unique in that it includes individuals in care facilities and uses proxy interviews to represent those unable to answer questions themselves (e.g., due to cognitive impairment). --- The NRW80+ sample NRW is the most populous state in Germany, counting 17.9 million inhabitants, including 20% older individuals. Furthermore, NRW has a history of immigration, making its population heterogeneous. The NRW80+ study was designed for robust inference about age group and gender differences and built upon the results of a comprehensive feasibility study [39]. An a priori power analysis indicated that a sample size of N = 1548 would enable detection of small interaction effects (f = 0.1) between design factors (age group × gender) with high power (1-β = 0.95) at a conventional alpha level of 0.05. The population of the study included all people who had reached 80 years of age by 31 July 2017 and whose registered primary residence was in NRW. This includes individuals living in private and non-private settings (e.g., long-term care). The sampling followed a two-step procedure: First, a sample of 94 communities was drawn from the entirety of all communities in NRW.
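The reported power calculation can be reproduced, at least approximately, from the noncentral F distribution. The sketch below is an illustrative reconstruction, not the authors' actual code; it assumes a 3 × 2 design (six cells), an interaction term with two numerator degrees of freedom, Cohen's f = 0.1, and the standard noncentrality parameter λ = f²·N.

```python
# Illustrative a priori power analysis for an age group x gender interaction.
# Assumptions (not taken from the source): 6 design cells, interaction df = 2,
# noncentrality lambda = f^2 * N.
from scipy.stats import f as f_dist, ncf


def interaction_power(n_total: int, f_effect: float = 0.1,
                      df_num: int = 2, n_cells: int = 6,
                      alpha: float = 0.05) -> float:
    """Power of the F test for the interaction at total sample size n_total."""
    df_denom = n_total - n_cells
    noncentrality = f_effect ** 2 * n_total
    f_crit = f_dist.ppf(1 - alpha, df_num, df_denom)
    # Power = probability that the noncentral F exceeds the critical value
    return 1 - ncf.cdf(f_crit, df_num, df_denom, noncentrality)


def required_n(target_power: float = 0.95, **kwargs) -> int:
    """Smallest total N whose power reaches the target."""
    n = 10
    while interaction_power(n, **kwargs) < target_power:
        n += 1
    return n
```

Under these assumptions the required N comes out close to the N = 1548 reported for NRW80+.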
In a second step, the registration offices of the selected communities provided a simple random sample of inhabitants, amounting to 48,137 addresses from the target population. The group of potential study participants (gross sample) was defined to comprise N = 8040 individuals based on an a priori power analysis and an expected response rate of 20-25%. Individuals from older age groups (85-89 years, 90+ years) and men were systematically oversampled, i.e., represented more frequently within the gross sample than would be expected in a simple random sample (Table 1); however, equal sample size (N = 1340 or 16.7%) in each of the six design groups (i.e., age group × gender) was not feasible due to the low number of men aged 90 years or older (M90+) in the population. Computer-assisted personal interviews (CAPI) were conducted by experienced and trained interviewers of Kantar (previously TNS Infratest, Munich, Germany). A total of 1863 interviews were realized, assessing, besides QoL resources and outcomes, central events in the life course. Response rates were lower for older age groups and lower for women compared to men; however, a minimum of 244 observations could be realized for all design groups, allowing for robust subgroup analysis. Design weights were computed for all individuals selected into the gross sample to correct for the selection of communities and the oversampling of men and older age groups. Finally, calibration weights were computed for participants in an iterative raking process based on the known demographic structure of the very old population with respect to age, gender, marital status, household size, institutionalization, and regional characteristics (for details see [9]). Even after applying weighting to correct for the disproportional sampling design and study nonresponse, the effective sample size in all groups remained large.
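The iterative raking used for calibration can be illustrated with a minimal implementation of iterative proportional fitting. This is a sketch under simplifying assumptions (two margins only, every category observed in the sample, made-up target shares); the actual NRW80+ weighting calibrated on more characteristics (see [9]).

```python
# Minimal iterative proportional fitting ("raking") sketch. The margins and
# target shares used below are hypothetical, chosen only for illustration.
import numpy as np


def rake(weights, groups, targets, n_iter=100, tol=1e-12):
    """Rescale weights until weighted category shares match population targets.

    weights : initial design weights, shape (n,)
    groups  : dict mapping margin name -> array of category labels, shape (n,)
    targets : dict mapping margin name -> {category: population share}
    """
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(n_iter):
        max_change = 0.0
        for margin, labels in groups.items():
            total = w.sum()  # held fixed while this margin is adjusted
            for cat, share in targets[margin].items():
                mask = labels == cat
                factor = (share * total) / w[mask].sum()
                w[mask] *= factor
                max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:  # all margins already match
            break
    return w
```

Because each margin is rescaled with the total held fixed and the target shares sum to one, the sum of weights is preserved; convergence is fast when the margins are only weakly associated.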
For example, the precision of population estimates in the strongly oversampled M90+ group in the NRW80+ sample is the same as the precision from a simple random sample of 206 individuals in this population group. Respondents were on average 86.5 ± 4.5 years old (range 80-102 years) at the time of the interview. Table 2 shows that in the overall population of very old adults, 13.9% live in an institution. The number of very old individuals for which proxy interviews could be conducted was estimated at 8.8% in the population of the very old. Overall, only a minority of 33.2% of those 80 years or older show a formal need for care. Approximately half of the very old population showed medium levels of education, while high levels of formal education (i.e., bachelor's degree and equivalent professional level or higher) were found for only one out of five persons in this age segment. Substantial age group differences were observed with respect to educational background (ISCED; [14]), employment history, socioeconomic status (International Socio-Economic Index of Occupational Status [ISEI]; [15]), marital status, institutionalization, birth of first child, and age at immigration (Table 3 and Fig. 1). Tests for main and interaction effects used Taylor linearization to account for the multistage sampling, and linear, logistic, or generalized logistic modelling for metric, ordinal, or nominal dependent variables, respectively. The risk of institutionalization increased across age groups. The oldest individuals attained lower educational levels (i.e., up to lower secondary) in comparison to those in younger groups; however, most heterogeneity in educational level was attributable to gender. In the youngest age group, the share of women never having been employed was lower. Within the youngest age group and in women, divorce was more common. Furthermore, in the oldest age groups, more individuals were widowed. Of those having children, the oldest age groups (both men and women) were older when having their first child than the two youngest age groups (see Fig. 1). Of those who migrated to Germany, the youngest age groups were younger at arrival in Germany (19 and 20 years for women and men, respectively), with increasing age in those between 84 and 89 years and 90+ years. More than half of the NRW80+ participants who immigrated did so shortly after the Second World War. Whereas women were on average younger than men when ending employment, no substantial age group differences were observed in either men or women. Item nonresponse measured at the level of the individual was generally low in this study. On average, less than 4% of all information asked from a respondent was lacking due to refusal to answer or "don't know" responses (measured as the percentage of all questions asked at the level of the individual, so that differences in the number of questions due to filtering are accounted for). Nevertheless, while the share of person-level refusals did not increase across age groups, "don't know" answers did. Part of this effect was due to the increasing share of interviews with proxy informants in older age groups; however, additional analysis showed that age had an independent (albeit small) effect on item nonresponse over and above the effect of proxy informant and cognitive status (standardized beta = 0.14, 0.35 and 0.34, respectively). Hence, item nonresponse in this study of the very old was rare and multifactorial.
Fig. 1 Timing of historical events in the life course of cohorts of the very old and differences with respect to age at key biographical events. FRG Federal Republic of Germany, GDR German Democratic Republic
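The statement that the oversampled M90+ group has the same precision as a simple random sample of 206 individuals corresponds to the notion of an effective sample size under unequal weighting. A common approximation is Kish's formula, n_eff = (Σ wᵢ)² / Σ wᵢ²; the sketch below uses made-up weights for illustration and is not the NRW80+ computation itself.

```python
# Kish's effective sample size: with unequal weights, estimates have roughly
# the precision of a simple random sample of n_eff observations.
import numpy as np


def effective_sample_size(weights) -> float:
    """Kish's approximation n_eff = (sum w)^2 / sum(w^2)."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()


# Equal weights lose no precision; variable weights shrink n_eff below n.
equal = effective_sample_size(np.ones(244))                      # 244.0
unequal = effective_sample_size(np.array([1.0, 1.0, 4.0, 4.0]))  # < 4
```

The more variable the weights (e.g., after strong oversampling is corrected by weighting), the further n_eff falls below the nominal number of interviews.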
In addition, the prevalence of cognitive impairment estimated based on the NRW80+ sample was comparable to prior epidemiological findings [13]. Two out of three respondents showed age-adequate cognitive functioning according to norm data, and a similar proportion of individuals were screened or rated as having mild cognitive impairment (MCI) or early dementia. --- A theoretical framework of QoL in very old age Even though a plethora of QoL studies exist on the individual, group, or country level, and in many specific subpopulations [26], the QoL of very old individuals has rarely been examined and there are few QoL models focusing particularly on very old individuals [18]; however, existing studies [6,7] suggest that in older people, compared to younger age groups, QoL is determined by different aspects. For example, meaningful, eudaimonic aspects seem to be important in older adults [12]. A detailed investigation of the different determinants (e.g., personal, environmental, or their interaction) of QoL in very old individuals may help to understand unexpected results, such as the well-being paradox in old age. For example, Schilling [34] found that the well-being paradox in old age (i.e., seemingly stable levels of well-being despite decreasing levels of resources [36]) results from a change in health resources as well as differences between cohorts with regard to life satisfaction. In addition, it may be important to identify cohort-specific determinants of QoL in very old age, as early socialization and differences in the timing of major life events (e.g., education, childbearing, retirement) have been found to impact QoL at older ages (e.g., [23,30]). With respect to a broad understanding of QoL in very old age, Wagner et al. [40] proposed a framework to integrate major streams of research on subjective aspects of psychological well-being (e.g., life satisfaction) as well as the scientific investigation of the (societal) basis of economic welfare (e.g., education or income).
The "Challenges and Potentials Model of Quality of Life in Very Old Age" (CHAPO; [40], see Fig. 2) was based on Veenhoven's model [38]. The individual perspective was assessed through interview questions, whereas a separate qualitative study evaluated the societal perspective based on stakeholder interviews. The CHAPO was developed as a conceptual framework to operationalize resources and outcomes that are central to the interdisciplinary discussion of QoL in very old age. Given the heterogeneity resulting from the vastly distinct life courses of today's very old population, individual values may be idiosyncratic or not congruent with the values of others, younger generations, or today's society, creating a tension between societal groups with respect to the definition of QoL and successful aging. Furthermore, CHAPO conceptually adds to existing frameworks of QoL in that it explicitly acknowledges the fact that successful life conduct, as a systemic QoL outcome, depends both on the resources and values of the older individual and on the roles and appreciation of late life by society. It allows for descriptive, evaluative, and normative perspectives on QoL in very old age (for a detailed description see [29]). Whereas other QoL models postulate specific mechanisms that promote or prevent QoL, CHAPO, at first sight, distinguishes QoL resources as potential predictors of QoL outcomes; however, it should primarily be understood as a generic measurement model, serving as a basis to categorize indicators as life chances or life results and to distinguish personal from environmental indicators. Nevertheless, the operationalization of NRW80+ built on previous empirical evidence to include indicators particularly relevant for this age segment. With regard to life chances, indicators in NRW80+ include individual values (see [32] in this issue) and social relations (see [35] in this issue) at the person and environment level, respectively. Life results included indicators such as life satisfaction (see [8] in this issue).
CHAPO adds to this the notion of successful life conduct as a systemic concept integrating the idea of person-environment fit and mechanisms to retain identity, autonomy, and participation in light of the compromised physical and mental capacity that characterize the fourth age [42][43][44]. Here, fit refers to a specific positive constellation of resources and demands that fosters functionality, independence, or personal growth. Successful aging [37] is defined as autonomous, generative, active, or productive behavior drawing on the respective educational, social, infrastructural, technical, or economic resources. Indicators and determinants of QoL are assumed to differ even across age groups within very old age for a number of reasons. First, very old age today is predominantly female, and gender differences in QoL predictors and indicators have to be considered [31]. Second, individuals in their early 80s may not (yet) experience a drastic decrease in individual resources (e.g., health, social network) and consequently depend less on environmental resources for QoL; however, the relative contribution of environmental resources to autonomy and QoL may be greater in the oldest old. --- Discussion The NRW80+ study allows making robust statements about age group differences within the population segment of very old adults and strengthens the state of research on the quality of life of the oldest old in Germany. The sampling strategy was successful in guaranteeing a high level of precision of population estimates, particularly in the rare and hard-to-reach group of men aged 90 years or older, and sufficient power to test the small to moderate effects expected in social-behavioral aging research. Age groups within very old age differed substantially with respect to health status, education, past employment, and socioeconomic and marital status, resulting in very diverse conditions for and circumstances of realizing successful life conduct.
Results showed differences in the timing of major life events across different age groups within very old age. The particular age at which significant life transitions (e.g., childbearing) were experienced may influence subsequent biographies and QoL in very old age. For example, immigration at different ages may have consequences for integration into a new community and therefore may impact QoL. However, several limitations of the current data are noteworthy. First, the operationalization of QoL focused on current status and offered only a limited window to study biographical antecedents. Second, with cross-sectional data, disentangling age and cohort effects was severely limited. Third, individuals who survived to a very old and oldest age can be expected to represent a specific subgroup of the respective birth cohorts. Finally, the face of very old age is changing quickly. The share of very old men, for example, is expected to increase substantially across the next decades. --- Conclusion The NRW80+ study offers a unique possibility to investigate QoL in a representative sample of very old adults from the most populous state in Germany. Whereas the share of older people in the German population increases, representative studies about the QoL of this age group remain rare. The NRW80+ study meets a number of conceptual and methodological challenges of conducting a survey on QoL in the very old population. The CHAPO model considers eudaimonic concepts of QoL as well as concepts integrating personal and environmental aspects especially relevant in old age. A specific strength of this study is the possibility of distinguishing age groups of privately and non-privately dwelling individuals within very old age, whose differences in socialization, education, and life experiences should exert profound impact on late-life QoL outcomes.
Hence, the NRW80+ study identifies needs and determinants upon which policy recommendations can be made to create conditions in which individuals may realize and retain successful life conduct throughout late life. --- Practical implications --- Declarations Conflict of interest. S. Hansen, R. Kaspar, M. Wagner, C. Woopen and S. Zank declare that they have no competing interests. This study was carried out in accordance with the ethical standards of the ethics committee of the Medical Faculty of the University of Cologne and with the Helsinki Declaration of 1975 (in its most recently amended version). Informed consent was obtained from all participants included in the study. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | The study "Quality of life and well-being of the very old in North Rhine-Westphalia NRW80+" aims at giving a representative picture of the quality of life (QoL) in this population. Conceptually, QoL research has rarely considered the values of older individuals themselves and societal values, and their relevance for successful life conduct. 
Empirically, comparisons of different age groups over the age of 80 years are rare and hampered by the quickly decreasing numbers of individuals in the oldest age groups. Study design and theoretical framework: This paper describes the population of the NRW80+ study and different age groups of very old individuals with respect to biographical background. Furthermore, using the challenges and potentials model of QoL in very old age (CHAPO), key aspects of QoL in late life are discussed and the importance of normative stipulations of what constitutes successful life conduct is highlighted. In the NRW80+ study, older age groups (i.e., 85-89 years, 90+ years) were deliberately overrepresented in the survey sample to enable robust cross-group comparison. Individuals willing to participate in the study but unable to take part in the interview themselves for health reasons were included by means of proxy interviews. The total sample included 1863 individuals, of whom 176 were represented by proxy interviews. Pronounced differences were observed between the age groups 80-84 years (born 1933-1937, N = 1012), 85-89 years (born 1928-1932, N = 573), and 90 years or older (born before 1927, N = 278) with respect to education, employment, and the timing of major life events (e.g., childbirth). Conclusion: Different life courses and resulting living conditions should be considered when discussing QoL disparities in very old age. |
Background Sickle cell disease: a global health challenge Sickle cell disease (SCD) is an autosomal recessive haemoglobinopathy characterised by ongoing haemolytic anaemia, episodes of acute pain caused by vaso-occlusion (vaso-occlusive crises), and progressive organ failure. It is the most common monogenetic disease worldwide, with an estimated 300,000 affected births annually, and is recognised as an important public health problem by the World Health Organisation (WHO) [1]. Seventy-five per cent of the global burden of SCD occurs in sub-Saharan Africa, where the majority of children with the disease do not reach their fifth birthday [2]. In contrast, life expectancy in well-resourced countries has significantly improved, with almost all infants now expected to survive into adulthood due to comprehensive care programs. However, the life expectancy of patients with SCD is still 20-30 years shorter than the average life span of the general population [3]. The Netherlands currently counts approximately 1500 SCD patients, half of whom are children [4]. Most of those affected are of Asian or African ancestry, with a minority being of Middle Eastern descent [5]. In the Netherlands, care for paediatric patients with SCD is organised in centralised, comprehensive care centres to ensure good quality of care [6]. --- Vulnerabilities in sickle cell disease In western countries, SCD predominantly affects racial and ethnic minorities. It is well known that children from non-western ethnic minorities are more likely to live in poverty and reside in families with a lower family income [7]. Low socioeconomic status is associated with higher rates of illness, shorter life expectancy, high stress levels, low birth weight, and many other negative health outcomes [8].
In addition to socioeconomic disadvantage, children with SCD and their families encounter many psychosocial issues, including increased anxiety, depression, social withdrawal, aggression, poor relationships, poor school performance, and impaired health-related quality of life [9][10][11][12]. These psychosocial issues mainly result from the impact of pain and other disease symptoms on everyday life; however, they are also a result of society's unawareness of SCD and the lack of understanding and empathy towards those affected. Although life expectancy has improved, many outcome goals remain unmet. This is not only due to the biological burden of acute complications or chronic morbidity such as multiorgan failure, but also due to the complex interaction between patients with SCD and the socioecological system [13][14][15]. SCD has historically been described as a "black disease" [16]. This harmful association of the disease with race has resulted in social and ethical consequences that are tied to discrimination [15]. For example, the pain complaints of racial minorities are less likely to receive adequate attention due to the often complex communication between patients and physicians or nurses [17][18][19][20]. In addition to stigmatisation in healthcare, significant gaps exist in both the equity of research funding and philanthropy for SCD [21,22]. --- Evaluating access to healthcare The accessibility of healthcare concerns the level at which people are able to utilise all the healthcare resources they need to sustain or improve their health [23]. This accessibility is described by four overlapping dimensions: physical accessibility of healthcare, affordability of healthcare, accessibility of health-related information, and the principle of non-discrimination [24,25]. The comparison of 'amenable mortality' rates between countries allows the approximation of national levels of healthcare access and quality [26].
The Netherlands ranked 3rd in the Healthcare Access and Quality (HAQ) Index of the 2016 Global Burden of Disease Study [26]. From a global perspective, accessibility of healthcare might therefore not be seen as a matter of concern for Dutch clinical practice. The HAQ Index, however, provides limited insight into accessibility disparities between different groups of society, or between patients with different diseases. Healthcare professionals have first-hand experience of the barriers faced by patients when it comes to effective care. From professionals' anecdotal and seemingly unique stories, a picture emerges of the general challenges faced by our healthcare system when it aims to provide access to the highest attainable standard of care to every individual. --- Access to the highest attainable standard of healthcare Like most countries in the world, the Netherlands has signed and ratified multiple human rights treaties and conventions. The commitments made in these documents are important in the context of healthcare for children. As early as 1966, the International Covenant on Economic, Social and Cultural Rights (ICESCR) established access to healthcare as a fundamental human right. Furthermore, the Committee on Economic, Social and Cultural Rights (CESCR) obligates states parties to ensure equality in access to healthcare and health-related services. They emphasise that children should be regarded as a vulnerable group that requires explicit protection [27]. In 1989, the United Nations Convention on the Rights of the Child (UNCRC) reinforced "the right of the child to the enjoyment of the highest attainable standard of health" to provide additional safeguards for the protection of children [28]. In 2019, the UNCRC celebrated its 30th anniversary, and to commemorate this milestone, we aimed to assess its implementation in a high-income country, focussing on the accessibility of care for paediatric patients with SCD in the Netherlands as a case study.
--- Study aim In this nationwide assessment, we aim to map barriers and facilitators in any of the four dimensions of healthcare accessibility faced by Dutch children with SCD and their families. By interviewing healthcare professionals, we try to identify common challenges and lessons learnt in clinical practice on a grassroots level. --- Methods --- Design and study setting In this study, a qualitative descriptive design was used. The qualitative approach, with its focus on subjective experience, is best to enhance understanding of the range of problems with healthcare accessibility that patients experience and that healthcare professionals observe. Interviews were conducted with SCD healthcare professionals working in various care settings. Participants were affiliated with the 'SCORE' (Sickle Cell Outcome Research) consortium of the Netherlands which includes all SCD comprehensive care centres and research institutes involved in clinical SCD research in the Netherlands. Study findings are reported in accordance with the Standards for Reporting Qualitative Research (SRQR) [29]. This project was approved by the Medical Research Ethics Committee of Erasmus University Medical Center and adhered to the Declaration of Helsinki [30]. The participants provided written informed consent. --- Participants To recruit a purposeful sample, we sought healthcare professionals, including (paediatric) haematologists, nurse practitioners, nurses, psychosocial staff and social workers, who provided care to paediatric and adolescent SCD patients. We identified eligible participants through a central list of SCORE professionals (key informant sample) and recruited a broad range of participants using a combination of maximum variation and snowball sampling [31,32]. 
A cyclical approach to sampling, conducting interviews, and analysis and interpretation allowed theoretical saturation to be attained when no new themes related to healthcare accessibility emerged from subsequent interviews [32]. --- Data collection Three trained investigators (M.E.H., M.B. and T.C.J.V.) conducted face-to-face semi-structured, in-depth interviews between February 12th and May 23rd, 2019. One week before the interview, each healthcare professional received an e-mail explaining the purpose of the study and our specific interest in access to healthcare for children and adolescents with SCD. Interview questions were formulated to probe the healthcare professionals to elaborate on and explain the challenges faced by their patients and to provide recommendations on how to solve these issues. An interview guide was used to ensure the four healthcare accessibility dimensions were covered; it started with the question of how the participant would define access to health. The interview guide contained only open questions aiming to freely explore the participants' experience. Two examples of these questions (in this case focussed on the accessibility of information) were: What do you aim for when informing patients? What happens after you have informed the patients? The participants' initial response was often followed by a probing question, such as: Could you give an example? How is this different for patients with SCD compared to other patient groups? The interviews took place privately at the workplace of the participants. Interviews were conducted in Dutch and were audio-recorded. Field notes with initial thoughts were made by the interviewers after each interview. --- Data analysis The interviews were transcribed verbatim and coded, followed by a thematic analysis [33]. Based on field notes, initial codes were generated to collate data (both problems and solutions) into main themes (such as "transportation" or "telecommunication").
Through ongoing thematic analysis, definitive themes were formulated. The transcripts were analysed by three researchers (M.E.H., M.B. and T.C.J.V.). The results (transcripts, codes and themes) were subsequently discussed with experts in the field of healthcare accessibility or the field of clinical paediatric sickle cell care to confirm the accuracy of the analyses. The robustness of the research was increased by selecting quotations to highlight or illustrate the themes and link the reported results to the empirical data. To increase readability for the general public, the definitive themes have been reported as recommendations. --- Results --- Sample description Twenty-two healthcare professionals from five different academic clinic sites for comprehensive sickle cell care in the Netherlands participated in the study (Fig. 1). None of the potential participants declined to participate in the study. The participants' mean age was 37.0 (SD 14.5) years. Of the 22 participants, 19 were women and 21 were white. The average number of years of experience in their profession was 8.5 (SD 6.5). Interviews lasted on average 38 min (range: 15 to 58 min). Table 1 summarises study participant characteristics. Thematic analysis of the interview transcripts revealed six themes, or recommendations, on how to improve healthcare accessibility for children with SCD and their families. --- Theme 1. Cutting invisible costs: full reimbursement of disease-related expenses In general, Western countries provide free public healthcare insurance for children to ensure healthcare access for everyone under 18 years of age. However, depending on countries and healthcare systems, some medical services are subject to a statutory personal contribution. In addition, direct non-medical costs (i.e. travel expenditures and telephone calls to the hospital) and indirect costs (i.e. missed workdays for caregivers and childcare for siblings) are generally not reimbursed. Many participants reported that families had difficulties with these costs.
"Last month, we had a seven-year-old visiting our outpatient clinic on his own. We asked where his mom or dad was. 'In the car' he replied. His mother didn't have enough money to pay for the relatively high hospital parking fee." Participants felt that the government or insurance companies should ensure that caregivers are fully reimbursed for all extra costs, especially for lifesaving treatments such as antibiotic prophylaxis and vaccinations. "International guidelines recommend broad meningococcal vaccination for children with sickle cell disease due to their functional asplenia. As you know, they [children with SCD] are at much higher risk [compared to healthy children] for meningococcal disease. Unfortunately, these vaccines are not covered by health insurance. And for most parents, €25 [the price of a vaccine] is simply too much. Now, we provide the vaccines from the hospital budget, but this simply cannot go on forever." "Sometimes a certain medication is all of a sudden not covered anymore by an insurance company. For example, for oral penicillin suspension [essential for infants who cannot take tablets], suddenly a very high personal contribution was necessary. We [paediatric haematologists] spent many hours together with the clinical pharmacist in order to solve this problem and to avoid these extremely high extra costs for patients. Fortunately, we were able to find a fully covered generic variant which could be imported from a neighbouring country." "To me, this is an issue of equal access to essential health services for all Dutch citizens. I cannot believe that in a developed country such as the Netherlands, we obligate parents to pay for their child's much-needed care." "Apart from this, I think access to healthcare is equivalent to access to medication, and this is often difficult, as patients are obligated to pay additional fees for various medication types."
Due to centralised sickle cell care, some families face high costs because of travelling large distances. In addition, long travelling times may have implications for caregivers' jobs, as caregivers are often unable to miss a shift or leave work without financial implications or even loss of employment. Furthermore, many children with SCD have siblings, and there is usually no provision for reimbursement of the costs of their care when the caregivers are expected in the hospital with the child who has SCD. In single-parent households, this may be even more difficult. "Sometimes nurses at the ward report that parents do not visit their hospitalised child often enough. It makes them a bit annoyed and worried about the child's social situation. While I understand their worries, I also understand that for some parents it's not always easy to take unpaid leave in order to visit their child in the hospital." "One mother ended up getting fired for missing too many days at work. She was on a fixed-term contract. She told her employer about her child with sickle cell disease. He had never heard of the disease before and said it was not his problem." "Some parents are already struggling every month to just pay the rent. They cannot afford many trips by public transport to the hospital." "Well, yes, we have a sort of special fund and then you have to see, of course, because you cannot do it too often. It is an emergency fund; you have to estimate how urgent the need for help is, financially I mean. Therefore, we ask advice from our social worker. She is in charge of the fund. Patients can hand in their tickets and receive a reimbursement of the costs of, for instance, their train ride." Overall, despite universal coverage of medical care in Western countries, family-borne costs of children with SCD could seriously affect the family's disposable income.
These additional costs could increase inequality in the accessibility of healthcare between households that can easily afford them and those that struggle to make ends meet [34]. Reimbursements from government agencies are often insufficient to cover all costs, and reimbursement procedures can be quite complicated, especially for individuals with lower health literacy. Previous studies evaluating the impact and financial costs for caregivers of children with other diseases, such as diabetes and paediatric cancer, show that risk factors of perceived economic hardship include single parenthood, lower socioeconomic status, and physical distance from the treatment centre [35,36]. The issue of single parenthood requires special attention, even more so because single heads of household are common in the SCD population [37]. It is pertinent to recognise that many families struggle to meet the extra financial demands of caring for a child with SCD. Therefore, attention must be given to proactive interventions aimed at addressing all extra costs, including full coverage of medical treatment, support for housework and childcare, and access to charitable funding. --- Theme 2. Reducing the number of hospital visits: clustering of appointments on the same day SCD requires a versatile and comprehensive treatment protocol with frequent check-ups with healthcare professionals from various medical specialties [38]. In the Netherlands, patients visit their paediatric haematologist twice a year to discuss disease progression, treatment and preventative care. Additional hospital visits include check-ups with a nurse practitioner; examinations like transcranial Doppler ultrasound, echocardiography, and laboratory tests; or appointments with medical social workers or psychologists. Therefore, the patients are burdened with multiple appointments throughout the year.
Almost all interviewed professionals mentioned that the frequency of hospital visits can present barriers to optimal treatment and that this might be an explanation for the relatively high no-show rates among the patient population. Apart from practical and financial barriers, high no-show rates were also attributed to the patients' inability to fully understand what different appointment types entail and why so many hospital visits are necessary. "Many patients fail to show up at one or more of their check-up appointments. I think sometimes appointments are forgotten, but I also feel they have too many appointments throughout the year. Parents do not always understand the necessity of each appointment. They think: I have already been there three times this year, I do not really have to attend this time." "For the majority of our patients, it seems difficult to fully understand their illness and that, even when they are not facing symptoms of a sickle cell crisis, they still have to check in regularly." Regular follow-up care is required for children and adults with SCD. When patients are consistently followed by a health provider, some disease complications are avoidable. Patients lose vital opportunities for health monitoring and education when regular follow-up appointments are missed, increasing the risk of hospitalisation or mortality. "Recently, I saw a 23-year-old female patient who missed her check-ups of the last few years because she had few complaints. Well, now she has lost her sight in one eye, and there is nothing we can do. Even patients with few crises [vaso-occlusive crises] and few health issues can develop serious organ damage." A recurring remark in the interviews was the idea that scheduling visits to various healthcare professionals on the same day may be beneficial for the total accessibility of care.
Not only can this reduce the burden of travelling, it might also become easier to involve additional (para)medical experts such as psychologists to improve comprehensive treatment. "Appointments on the same day also make it easier to organise treatment more holistically; for example, adding a visit to a psychologist and a physiotherapist without obliging the family to visit the hospital more often." One participant saw an additional benefit for patients if multiple appointments were offered on the same day. During visits, the intervals between appointments could provide an opportunity for caregivers and patients to meet with other patients and their families. "Scheduling visits on the same day could offer an opportunity for children and their families to see and meet fellow sufferers, which could bring the relief of sharing the burden." Lessons in this regard can be drawn from care for children with cystic fibrosis, which is often organised in annual assessment days. On these days, patients and their families speak to a number of healthcare professionals, including the specialised paediatrician, other medical specialists, the nurse practitioner, pharmacist, dietician and psychologist. In addition, multiple tests are conducted, such as imaging and lung function tests. Applying this approach in comprehensive SCD centres would address different barriers of healthcare accessibility and thereby help patients and their families to see all required specialists [39,40]. --- Theme 3. Specialised and shared care: bridging the gap Although care for patients with SCD is centralised, many families still visit their local hospital because of large travelling distances to the comprehensive sickle cell centre. Almost all participants reported a knowledge gap with regard to SCD among primary care physicians and general paediatricians in local hospitals due to a lack of clinician training and continuing education.
"Parents told me they took their child with a fever to the general practitioner and he said 'don't worry, it's just a fever. She will get better in a few days; she doesn't need any prescription medication'. By the time they arrived at my hospital, she [the child] had to be rushed into the ICU [intensive care unit] with a sepsis. I feel that the risk of bacteraemia and the need for prompt evaluation and treatment is a basic feature of sickle cell disease care." "It regularly happens that a patient with a crisis [vaso-occlusive crisis] visits the general practitioner with severe pain and that he or she then tells them to just take some paracetamol and then they'll be good to go." "General practitioners generally have a lack of knowledge of sickle cell disease, but in my experience, they are quite quick with their referral to a haematologist. I feel there is a bigger issue with haematologists in local hospitals." [Interviewer:] "Because he will think he can handle the patient and doesn't recognise the seriousness of the disease?" "Yes, that's what I think." However, some participants shared that they had an excellent working relationship with so-called 'shared care hospitals'. Shared care is an arrangement between a sickle cell centre and a local hospital or general practitioner. "Paediatricians in our shared care hospital are educated to treat children with sickle cell disease. We [specialised paediatric haematologists in a sickle cell centre] support and supervise these local healthcare professionals. Whenever a patient does not respond to routine therapy or when there are complications, the patient is transferred to our centre. Communication is very effective." Many participants felt that shorter commutes to the local hospital would notably improve compliance with attendance at outpatient clinics, especially when compared to the often longer journey to the sickle cell centre. "Some patients travel more than 1.5 hours by public transport to reach our sickle cell centre.
Local hospital visits, with consequently much less disruption to the child and family's everyday routine and without compromising quality, are, for me, an essential part of delivering good healthcare." Participants recommended identification of paediatricians in local (shared care) hospitals with an interest in SCD who could serve as a primary contact for the paediatric haematologist in the centralised sickle cell centre and who are able to disseminate knowledge to other local health professionals when needed. "Shared care is about creating a comfortable working relationship between paediatricians and paediatric haematologists. If, for example, all our [in the sickle cell centre of the participant] inpatient beds are full and I have a child in the ED [emergency department] with a crisis who needs IV [intravenous] pain medications, I call the shared care paediatrician with sickle cell disease expertise to discuss the possibility of transferring the patient. I know the child will be in good hands because they know how to treat a child with a crisis [vaso-occlusive crisis], and they will supervise nurses and other hospital workers." In the specific case of migrant children with SCD, several interviewees highlighted that the transfer from one temporary shelter centre to another can be counterproductive to treatment efforts. The geographical location of the shelter determines which general practitioner, shared care centre and specialised SCD centre a patient has access to. A transfer to another centre, therefore, often means all healthcare professionals involved in treatment are replaced. Unlucky children switch between medical facilities multiple times during their asylum procedure and receive care from many different healthcare professionals. "Children and families in asylum centres are often transferred to other centres across the country.
Sometimes I see a patient for the first time, I order laboratory tests, and make a treatment plan, but the next consultation the patient does not come, as he or she has been transferred to another centre. That I think is very distressing." "The asylum centres are extremely badly organised. Caregivers have to clear a lot of hurdles to make progress [...] plus you don't have your own doctor, so that's really difficult." Centralised, comprehensive SCD centres have been shown to significantly decrease morbidity and to improve quality of life in patients with SCD [41,42]. However, unfamiliarity with patients with SCD outside these specialised centres makes the patients more likely to receive inadequate care. At the same time, when follow-up appointments, emergency care, and inpatient care are only available in the specialised sickle cell centres, this can be a burden for families living at a large distance from a comprehensive centre. Shared care constructions have been applied in the management of paediatric patients with many (chronic) conditions, such as diabetes, cystic fibrosis, idiopathic arthritis, and cancer, and are based on a close collaboration between general paediatricians and specialised paediatricians in centralised centres [43][44][45][46][47]. The shared care hospitals are linked with the specialised centre by a two-way referral and communication process. There are many theoretical benefits in terms of access and convenience. The overall goal is to deliver specialised services as close as possible to the patient's home without compromising quality. In the case of SCD, primary healthcare providers, including general practitioners, should be supported to improve their knowledge and understanding of SCD. Furthermore, shared care centres should have at least one paediatrician with interest and expertise in SCD and be able to treat mild complications, including vaso-occlusive crises requiring intravenous opioid pharmacotherapy as well as simple infections.
Lastly, with special reference to children with SCD in shelter centres, it is important that these children are visible in the healthcare system and are able to be seen regularly by a healthcare professional with knowledge of their disease. --- Theme 4. Optimising methods of verbal and written communication: enabling mutual understanding between patients and healthcare professionals Patients with SCD and their caregivers must perform a variety of tasks requiring adequate healthcare understanding, including communication with healthcare professionals, reading and understanding of health information, interpretation of acute symptoms, administration of medication, and making decisions regarding treatment options. Many parents of children with SCD are from ethnic and racial minority groups. Understanding critical information is particularly difficult with a language barrier. Most healthcare professionals interviewed felt that the available health information materials were often hard to read and that caregivers of children with SCD could benefit from having appropriate educational materials about SCD. "During the first consultation, we provide parents with an extensive, comprehensive guide to sickle cell disease. It has excellent information; however, I think that for a person without any medical background, it is very hard to understand." Participants also reported a lack of methods to confirm caregiver/patient understanding. "When I speak to them they always nod politely, but do they really understand what I am saying?" Several participants noted a lack of written health information in the languages primarily spoken by the patient population, such as English and French. "The mother was unable to read Dutch, and I was unable to provide any written materials in French."
One centre created a visual decision-making educational tool as an aid to enhance communication between the physician and caregiver/patient during the decision-making process of initiating hydroxyurea therapy. "Before [the educational tool was developed] I could only provide parents with the pharmacy leaflet on hydroxyurea. That leaflet is really very "scary"; it contains a long list of possible side effects. And the font size is quite small, which makes it more difficult to read. Now I use the visual tool, and I feel they [the caregivers] understand the necessity of the treatment much better, and it is easier to address safety concerns." Clear communication and accessible healthcare information are important components of improving population health [48,49]. The WHO stresses the importance of understandable health information, reiterating the right of individuals to have access to health information and health systems that they are able to understand and navigate [50]. In addition, special consideration should be given to the development of educational materials for population groups with well-documented low literacy skills, i.e. members of minority population groups and members of immigrant populations. --- Theme 5. Building strong digital connections: improving the use of eHealth and telemedicine The interviewed healthcare professionals described a paradox: caregivers handle their smartphones with ease, while their low literacy interferes with fully comprehending, for example, an appointment letter from the hospital. Making use of a smartphone instead of written letters can improve communication between healthcare professionals and their patients. "Since we started inviting patients for their appointments by e-mail, text message and by admission letters instead of admission letters alone, our no-show rates have declined significantly. Also, it is much easier to remind patients one or two days in advance of the scheduled date."
Almost half of the comprehensive sickle cell centres have established a mobile phone number by which caregivers and patients are able to directly call the sickle cell nurse practitioner. During office hours, this number, which bypasses the front desk of the hospital, facilitates a direct link between patients and the healthcare professional. The interviews suggested that caregivers prefer to call the nurse directly when they require support. "In contrast to the general hospital phones, our mobile number does not call anonymously. Patients can see it is the sickle cell centre, and not a debt collector for example, that calls them, which increases the chances they pick up the phone. We also use WhatsApp, which works even better than calling. To these messages, we often receive a response almost instantly, while phone calls are sometimes not answered or returned." A direct mobile phone number supports not only communication through phone calls, but also the exchange of written and spoken messages using widely used day-to-day messaging applications. Three interviewees mentioned that the option of spoken messages seems to be particularly useful for caregivers with limited health literacy, as no reading or writing is required. "Some parents always contact me by voice message. They send voice memos with questions and concerns like "when is my child's next follow-up?", if they need a new prescription, or when their child is not well. I feel this works really well and lowers the barrier of access to a healthcare professional." Another advantage of direct calls to the sickle cell nurse practitioner is that patients and their caregivers know whom they can call for advice. They can call as soon as they feel the need to, thereby preventing the worsening of their child's condition.
"If I explain during a regular follow-up consultation what to do in case of a vaso-occlusive crisis, it can be difficult for parents to both comprehend and store the information for later use. In case of a stressful event like a painful crisis, it can be very helpful to talk to someone you know and who can give you instructions." However, some healthcare professionals mentioned the specific challenge of how to provide caregivers with such a direct line of communication outside working hours. "Some caregivers do not really understand that they can only call the sickle cell phone during working hours. In the beginning, I worried caregivers would not know whom to call in case of an emergency outside office hours, so I sometimes answered my phone outside working hours. Currently, I turn my phone off and have a voicemail which provides the phone number of the emergency department." Participants mentioned the increased use of eHealth, such as mobile applications to monitor and manage health symptoms and an online portal to access personal medical records. However, this necessitates a certain level of digital health literacy. "We send quality of life questionnaires to caregivers' e-mail addresses one week before the follow-up appointment of their child. Unfortunately, some caregivers never fill in those electronic questionnaires; I feel some don't really have the skills to use digital technologies." Accessible mobile contact between the SCD nurse practitioner and caregivers can increase caregivers' capability to manage their child's care. The use of eHealth services provides a successful way of helping patients to live well with chronic conditions [51]. However, innovative technologies should be tailored to users' health literacy skills, which often seems to be forgotten. Otherwise, these technological healthcare innovations may further increase disparities between patients rather than bridge them [52]. --- Theme 6.
The patient in context: towards compassion, public awareness and a supportive environment Children with SCD benefit from preventative measures, which include daily use of prophylactic antibiotics, immunisations, ensuring adequate hydration by drinking plenty of fluids, wearing warm clothing to avoid getting cold, and sufficient rest and avoidance of excessive stress. Although these measures do not seem difficult to safeguard, in a paediatric setting their success depends heavily on the support a child receives from family, teachers, sports coaches, and many others. Multiple interviewees highlighted that a societal lack of knowledge about SCD often interferes with effective preventive treatment. "Some teachers do not allow children to drink from a bottle of water outside of the designated snack and lunch breaks. This can be a big issue for patients and their families because they may be too shy to mention the illness or simply not vocal enough to express the child's need to drink regularly." Participants described the benefit of a social worker in the comprehensive care team who helps caregivers navigate the educational system. The social worker can, for example, educate school representatives or attend school meetings. Keeping in close contact with the school of each patient proved to be an effective approach to increase awareness and improve adherence to preventive measures. "When a child enters primary school, our social worker always plans a phone call with the teacher of the child to describe the child's medical needs. We feel this helps enormously in preventing crises because the teacher then understands how to help the child stay safe." "We use a 'checklist' to help parents prepare and remind them of what they need to discuss with their child's teacher, such as emergency phone numbers and signs or symptoms of pain, fever and fatigue."
Increasing general knowledge among key stakeholders and the public is important to ensure that preventive and acute healthcare measures are taken in all settings. The participants mentioned the following parties as key stakeholders: the government, municipalities, hospitals and general practitioners (Theme 3), schools, and government authorities in charge of migrants and refugees. Community outreach and educational initiatives would be an important step to inform key stakeholders and society as a whole about the severity and impact of SCD. "When I tell people about my work with children with sickle cell disease, many claim they have never heard about the disease." "I am always surprised when people know about CF [cystic fibrosis] but not about sickle cell disease. Patient numbers in the Netherlands are the same. I don't understand." Despite the major advances in treatment that have occurred over the past three decades, SCD remains a life-threatening disease that is associated with reduced quality of life. Broader societal awareness of the severity of SCD will increase the likelihood of future government and private financial support for research and the provision of comprehensive and tailored high-quality clinical care. --- Discussion When evaluating the performance of healthcare systems, national averages of performance indicators fail to acknowledge the individual child's rights as stated in the United Nations Convention on the Rights of the Child [28]. To complement current knowledge on healthcare accessibility in a high-income country, we performed a nationwide case study among Dutch healthcare professionals in the field of paediatric SCD. This qualitative study explored the intersecting vulnerabilities faced by patients and their families and how these vulnerabilities hamper access to healthcare.
Rather than solely identifying barriers, best practices and lessons learnt were gathered from daily clinical practice, supported by existing evidence in the literature. Content analysis of the interviews with healthcare professionals revealed six themes with corresponding recommendations (Fig. 2). Together, the recommendations act on all four dimensions of healthcare accessibility: physical accessibility, financial affordability, accessible information, and non-discrimination. Most recommendations fall into two or more dimensions of healthcare accessibility. For example, patient appointment reminders by mobile phone instead of long or complicated appointment letters improve the accessibility of health-related information. In addition, in line with the non-discrimination principle, clear communication with patients regardless of their perceived health literacy skills prevents inequality in access between patient groups with different levels of education. Six themes emerged, all associated with best practices on topics related to the improvement of accessibility of healthcare for children with SCD and their families. Firstly, cutting invisible costs by fully reimbursing caregivers for all extra costs related to the disease of their child. Secondly, clustering appointments on the same day to help patients see all required specialists without having to visit the hospital frequently. Thirdly, improving shared care in order to deliver specialised services as close as possible to the patient's home without compromising quality. Fourthly, optimising methods of verbal and written communication with special consideration for families with language barriers and/or low literacy skills. Fifthly, improving the use of eHealth services tailored to users' health literacy skills, including accessible mobile telephone contact between healthcare professionals and caregivers of children with SCD.
Finally, increasing knowledge of and interest in SCD among key stakeholders and the public to ensure that preventive and acute healthcare measures are understood and safeguarded in all settings. Implementing any of the discussed best practices could lead to an overall improvement of healthcare accessibility. A holistic implementation of all six themes is necessary to adequately address the intersecting vulnerabilities faced by patients with SCD and their families. Some recommendations will be relatively simple to implement, for example, clustering appointments on one day or developing easier-to-read appointment letters. While such measures are an important step towards improvement of access to care, accessible care cannot be sustained without adequate financial support. Other recommendations, such as the structural improvement of knowledge of SCD among healthcare professionals or the provision of sufficient financial means to cover transportation to the hospital, are more costly. [Fig. 2: Six key themes crosscutting the four dimensions of healthcare access] This qualitative study focuses on the experiences of (mainly white) healthcare professionals and not on caregivers' or patients' perceived barriers to accessibility of healthcare. Future studies on caregivers' perceptions will be an important extension to the results of this study [53].
In addition, follow-up (quantitative) studies might provide an even stronger foundation for future interventions to improve accessibility of healthcare, for example, by establishing exactly how many families face financial hardship. These quantitative studies are ongoing in the Netherlands in the context of the nationwide Dutch research consortium SCORE. The small, targeted sample in this study, although characteristic of qualitative research, limits the extent to which the findings reported can be generalised to other countries and healthcare systems.
Nevertheless, the validity of this multicentre study is supported by the representative sample of healthcare professionals with different occupations caring for children, the internal coherence of the themes, and their coherence with the background literature. For now, the six key themes provide recommendations for best practices in the care for paediatric and adolescent patients with SCD and their families. However, medical professionals working outside the field of (paediatric) SCD may recognise that some of their patients face similar barriers in accessing healthcare. Therefore, the recommendations we propose may be worthwhile to implement in other contexts as well. --- Conclusion This study presents the first overview of both the urgency and the possibility of improving healthcare accessibility for young patients with SCD from the perspective of healthcare professionals. Converged into six key themes, our analysis sheds light on barriers and potential solutions to accessing healthcare, which may serve as a clinically useful resource to improve care for patients with SCD. --- Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. --- Declarations Ethics approval and consent to participate Ethical approval for this study was obtained from The Medical Ethics Review Committee Erasmus Medical Center. Informed consent was obtained from all individual participants included in the study. --- Consent for publication N/A. --- Competing interests All authors declare no conflict of interest relevant to the contents of this manuscript. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. --- Abstract Background: In well-resourced countries, comprehensive care programs have increased the life expectancy of patients with sickle cell disease, with almost all infants surviving into adulthood. However, families affected by sickle cell disease are more likely to be economically disenfranchised because of their racial or ethnic minority status. As every individual child has the right to the highest attainable standard of health under the United Nations Convention on the Rights of the Child, it is essential to identify both barriers and facilitators with regard to the delivery of adequate healthcare. Optimal healthcare accessibility will improve healthcare outcomes for children with sickle cell disease and their families. Healthcare professionals in the field of sickle cell care have first-hand experience of the barriers that patients encounter when it comes to effective care. We therefore hypothesised that these medical professionals have a clear picture of what is necessary to overcome these barriers and which facilitators will be most feasible. Therefore, this study aims to map best practices and lessons learnt in order to attain more optimal healthcare accessibility for paediatric patients with sickle cell disease and their families. Methods: Healthcare professionals working with young patients with sickle cell disease were recruited for semi-structured interviews. An interview guide was used to ensure the four healthcare accessibility dimensions were covered. The interviews were transcribed and coded. Based on field notes, initial codes were generated to collate data (both barriers and solutions) into main themes (such as "transportation" or "telecommunication"). Through ongoing thematic analysis, definitive themes were formulated and best practices were reported as recommendations. Quotations were selected to highlight or illustrate the themes and link the reported results to the empirical data.
This article is a contribution to opening up the conversation on gender and social work. It is concerned, more precisely, with the conceptualization and usage of ''gender'' within social work theory, research, and practice. Although a key feature of everyday life, within social work, gender has what sociologists sometimes call a ''seen-but-unnoticed'' quality. It is frequently overlooked and, perhaps more importantly, where it is considered, gender is theorized in a number of rather limited ways. For example, social work is often described as a female-dominated profession, but one in which men disproportionately occupy senior roles. Yet, McPhail has argued that ''social work is more correctly described as a female majority, male-dominated profession'' (McPhail, 2004b: 325), because, although there are many more women than men in the field, they do not necessarily dominate. This is an important argument since to describe social work as ''female-dominated'' suggests that, merely because they are far greater in number, women hold more power. Yet this disregards some vital points. First, the smaller number of men in the profession may actually hold more institutional power, and, second, a profession like social work is, as with many fields involving the care of others, devalued. Third, the question of how power works within social work institutions, and how this relates to gender, is likely to be a lot more complicated. Discussions about challenging oppression and discrimination within social work theory and practice are some of the few occasions on which gender is openly acknowledged (Dominelli, 2002a; Mullaly, 2007; Thompson, 2012). Yet these, too, often rely upon limited accounts. Thompson's text, for example, describes gender as a ''fundamental dimension of human experience, revealing an ever-present set of differences between men and women'' (Thompson, 2012: 55).
While he does go on to point out that social, rather than biological, processes produce gender, it is largely at the level of attitudes that his suggestions for change are leveled. This tends to individualize gender, to see it as a personal characteristic, and to see gender oppression merely as a form of personal behavior or values. In part, these points relate to the ways in which gender is defined. Second-wave feminism, for example, separated the concept of ''sex'' from ''gender,'' in order to show that ''gender'' refers to a set of social expectations that may be challenged (Oakley, 1972; Unger, 1979). However, sometimes this notion of gender as a set of cultural practices has been reduced to role or identity, so that gender is treated as a preexisting characteristic or property of the individual. Later feminist theories remind us, rather, that gender is a social relationship, based upon the promotion of hierarchy, and one that is reiterated through interactions in everyday life. This article pays considerable attention to this notion of gender as a form of practice, since it is my contention that much of social work theory actually treats gender as a rather static characteristic. After having reviewed some of the more familiar approaches to gender within social work, I will go on to open up debates via consideration of materialist, interactionist, and discursive accounts, before finally considering what social work theory, research, and practice might learn from these. --- How does social work think about gender? Where social work theory or research does think about gender, we see the influence of feminist and/or sociological theories. Orme's book, Gender and Community Care, argues that the ''gender politics of social work has to include the relationship between the helper and those who require help, and... between the individual and the state'' (Orme, 2001: 14).
She highlights the disproportionate representation of women in mental health services, elder abuse, and those cared for in the community, pointing out that these are all areas in which gender is usually ignored or invisible or that, when it is noticed, the response is usually to suggest that men and women should be treated differently. Orme argues this ''categorisation of femaleness and maleness, femininity and masculinity as dichotomous opposites does not reflect the lived experience of users of community care services'' (Orme, 2001: 239). Scourfield points out that assumptions about gender difference ''permeate interventions'' in social work (Scourfield, 2010: 2), and he links these with heteronormativity. He makes a case for the analysis of gender as a social category, since the category relates to questions of social inequality (Scourfield, 2002). Christie similarly argues that, within discourses of welfare, persons are gendered, ''offering them specific gendered identities and subject positions'' (Christie, 2001: 9). In relation to men in social work, he notes that they are often seen as either good (e.g. ''male role models'') or bad (e.g. ''dangerous/abusers''). Sociological social work texts see gender as referring to a social or cultural set of ideas reflecting normative assumptions but, although such texts make reference to gender as a practice, they often work at the level of attitudes or values, encouraging social workers to reflect upon their own assumptions about gender (Llewellyn et al., 2008; Sheach-Leith et al., 2011). Treating gender concepts at the level of attitudes is a rather individualized approach, in which it seems to be an interpersonal characteristic only, although there are other texts that consider gender as a practice and insist on its contextualization within late or reflexive modernity (Dunk-West and Verity, 2013).
There are attempts within social work to think about how gender relates to questions of race, class, disability, age, or sexuality, but more often gender is treated as a stand-alone issue. An example of this would be some feminist work on care, which argues that women need to be released from the burden of caring for dependents. Although this point about the effects of state and family reliance upon unpaid care is an important one, work by disabled feminists has pointed out that the category ''women'' includes those being cared for, and that these arguments position disabled women and men as a ''burden'' (Morris, 1991). Others have noted the heteronormativity of such arguments, based, as they often are, on an assumed heterosexual couple (Manthorpe, 2003). However, by far the most regular usage of ''gender'' within social work is where it is treated as ''already given'' (Smith, 1990: 159); that is, used as a label referring to an assumed characteristic. Here, the formula runs, ''gender causes x.'' An interesting example of this would be Failure to Protect: Moving Beyond Gendered Responses (Strega et al., 2013), which examines why, in professional responses to child sexual abuse, mothers are often held responsible via ''failure to protect.'' In one sense, this is vitally important: why does some social work practice tend to blame mothers and ignore fathers? Why are mothers often held accountable for men's abuse of children? But, in another sense, the book never really asks how gender works, or is made to matter, in these contexts, and instead frequently treats it as a mono-causal explanation. This kind of usage of gender is limited for a number of reasons: first, gender may take on a thing-like quality and appear to have agency (''gender causes x''). Second, it treats a group (e.g. men) as homogeneous, without asking whether all men are therefore more likely to abuse children, for example, or whether all men are equally powerful.
Third, it doesn't really get to grips with just how gender works in a given situation. Fourth, it may lead to simplistic explanations. Of course, it is important to think about why men overwhelmingly commit most forms of sexual violence, but this does not mean ''gender causes abuse.'' And, lastly, this is a rather interior view of gender. The gender of the person seems to be some kind of characteristic that causes a problem or outcome. --- Woman-centered practice? Much of the feminist social work literature treats gender as a basis for similarity and shared purpose. Hanmer and Statham's text, Women and Social Work, for example, develops what they term a ''woman-centred practice,'' and makes the case that, since women are the majority of social workers and service users, a commonality of gendered experience, along the lines of ''being female, their relationships with men, children, living within the family, employment and working conditions'' (Hanmer and Statham, 1999: 18), forms the basis of social change through social work. Although the book does acknowledge differences along lines of race, age, disability, class, and sexuality, this notion of commonality, or what Dominelli and McLeod term ''non-hierarchical relationships between the social worker and the woman/women she is working with'' (Dominelli and McLeod, 1989: 38), has been critiqued for assuming that feminist social work means working with women; that empowerment is the only purpose of such work; that empowerment resolves power dynamics within relationships; and that women's shared experience means automatic rapport (Baines, 1997; Orme, 2003; White, 2006; Wise, 1990). Hanmer and Statham's text mentions lesbian, black, and ethnic minority women in relation to forms of diversity, but their description of women's commonalities relies upon the normative assumptions of whiteness and heterosexuality. This ''sameness'' problem has been the target of other social work writings.
Lewis' research argues that both race and gender are mutually constituted, yet within social work they are often treated as separate spheres. She argues that gender and race are experienced differently according to context, and so may have different meanings and effects, even for the same person. So, just as the category of gender must be one that allows for differences, so race, too, must not be treated as already given, as referring to some kind of essential black or white ''culture.'' In relation to the black, female social workers in her study, Lewis suggests that '''racial' and ethnic categories are simultaneously occupied and resisted as a way of mediating a set of working lives which are overdetermined by 'race' and gender'' (Lewis, 2000: 205-206). Indeed, if gender is to be seen in its complexity, then this must not be taken solely to refer to women. For some theorists in social work, it is important to think about work with men and fathers, the complexity of men's position within social work, notions of ''masculinity'' and the category ''men'' (Christie, 2001; Cree, 1996; Featherstone et al., 2007; Scourfield, 2003). This also relates to how social work thinks about trans issues and transgender people, a point to which I shall return. --- Social work, gender, and intersectionality One response to this assumed gender sameness, and the treatment of gender in isolation, is to consider intersectionality theory (Mehrotra, 2010; Murphy et al., 2009; Wahab et al., forthcoming). Crenshaw's argument proposes that the consideration of subordination within single categories, like gender, prevents analysis of race and gender for black women, since the claims of sex discrimination within law are largely based upon experiences of white women in relation to gender only (Crenshaw, 1991).
This has been taken up in Incorporating Intersectionality in Social Work Practice, Research, Policy, and Education (Murphy et al., 2009), which argues that social work should consider how oppressions intersect to form interlocking patterns of injustice. This means that attention to gender alone is insufficient, since race and class make a difference, and it also means that any individual might experience both oppression and privilege. While this goes some way to challenging supposed gender sameness, the authors accept Andersen's (2005) claim that sexuality does not occupy the same place as race, class, and gender, since it has largely to do with identity-cultural issues rather than political-structural ones. Andersen argues: sexuality has never been formally used to deny sexual groups the right to vote, nor has it been used in the formal and legal definition of personhood as is historically true of African Americans and other groups. Gays and lesbians have never been formally segregated in the labor market nor denied citizenship because of the labor they provide. (Andersen, 2005: 451) Murphy et al., while pointing out the need to consider questions of sexuality, accept this view and suggest that sexuality cannot be treated as equivalent to race, class, and gender. Here, then, is an obvious problem with some intersectionality theory. An argument against a hierarchy of oppressions is contradicted by the establishment of another. And, as Schilt notes, this separate treatment of sexuality ignores ways in which citizenship is denied to lesbian, gay, bisexual, or transgender people, and also that ''gay men and lesbians who have nonnormative gender presentations, who are working-class, and/or who are racial/ethnic minorities are often those who end up being most excluded from legitimate avenues of employment'' (Schilt, 2008: 112).
Given that authors, such as Collins, argue that ''what is needed is a framework that not only analyzes heterosexism as a system of oppression, but also conceptualizes its links to race, class, and gender as comparable systems of oppression'' (Collins, 2000: 128-129), this suppression of sexuality analysis in a social work text seems misguided. --- Poststructural and postmodern feminist social work Poststructural and postmodern theories have questioned the notion of identity or experience-based knowledge that features in some feminist work, because poststructuralist theories do not treat language as a reflector of reality, but rather as a powerful way of constructing knowledge. Thus, any claims that feminist social work should be based upon validating the experiences of women are thrown into question, because those experiences are not merely authentic: they are motivated, linguistic accounts, which aim to achieve certain effects, and they are open to different interpretations. Feminist poststructuralists also challenge the notion of women's shared experience, since the category ''woman'' is itself experienced differently and fractured along race, class, sexuality, disability, age, and other lines (Featherstone, 2001; Morley and Macfarlane, 2011; Pollack and Rossiter, 2010; Rossiter, 2000; Sands and Nuccio, 1992). Of course, this is not merely a poststructural claim. Earlier feminist debates also centered on potential exclusions of the category ''woman'' by race, sexuality, and so on, but here the concern is more with the powerful effects of language use. So, while Sands and Nuccio's (1992) arguments for a postmodern feminist social work, based upon difference, diversity, and recognizing the marginalized, do not sound particularly challenging, their questions about the potentially oppressive nature of gendered or racialized categories used by social workers raise important concerns regarding the nature of social work knowledge.
Dominelli has argued strongly against ''individualistic'' postmodern theory, which, she says, does not consider systematic patterns of discrimination along gender lines (Dominelli, 2002b: 34). She also claims that postmodern feminism assumes that power ''subsumes any form of opposition'' (Dominelli, 2002a: 169). This seems a rather limited reading of feminist postmodern theories, which are not based on notions of the individual subject at all, but are rather concerned with how subjectivity is produced through powerful discourses, interested in how dominant knowledge forms arise, and in how these may be opposed via various forms of subjugated, but not silenced, knowledge. Dominelli, however, argues for woman-centered practice, which seeks equality based on empowerment, listening to the stories, and validating the experiences of women, a point that postmodern theories would reject as both naive and asserting a powerful claim about what kinds of knowledge count. What such debates demonstrate, of course, is that what constitutes feminist social work is not agreed. White's study argues, ''women social workers' anecdotal accounts of their experiences were of feminist identities that were fluid, sometimes fragile or even non-existent'' (White, 2006: 3). She is also critical of woman-centered practice because this seems largely based upon community and voluntary models that exist outside of state social work. While she is not critical of such feminist work per se, White argues that the woman-centered model of practice is largely ''isolated from an analysis of the features of the organisational regime of social work that are associated with its location in the state'' (White, 2006: 31). Postmodern feminist social work theories reject the notion of egalitarian power relations as a fantasy that does not engage with the power dynamics that always exist between social workers and clients, a point also made in earlier work (Wise, 1990).
Power is not seen as a one-way street; that is, something always held by social workers over service users. There is no space outside of power relations, and so postmodern thinkers call for reflexivity about power within all practices. The feminist model of empowerment, for example, may be criticized because it sees power as somehow given to the (always) powerless service user by the (always) powerful social worker, but also because the notion of ''empowerment'' has been co-opted by neoliberal state welfare, so that it replaces any concern for wider structural change with individualized notions of ''choice.'' --- Queer and trans theory The influence of queer and trans theories on social work has been more limited to date, but where this has been addressed, then the notion of ''gender'' itself is challenged (Burdge, 2007; McPhail, 2004a; Nagoshi and Brzuzy, 2010; Wahab et al., forthcoming). The dichotomous view of gender is brought into question, as this is a powerful technology for the regulation of persons. Social work writings on trans people generally caution against the reification of gender categories, with phrases such as ''gender variant'' or ''gender nonconforming'' also being used (Davis, 2009; Hartley and Whittle, 2003; Kahn, 2014; Martin and Yonkin, 2006). Yet, at the same time, there may be a tendency, in some accounts, to theorize ''transgender identity'' based upon developmental stages, or gender as something fixed by the age of 3 (Mallon and DeCrescenzo, 2009). Spade, however, argues that the vulnerabilities of trans people, especially those marginalized due to poverty, are the result of ''legal and administrative systems of domination... that employ rigid gender binaries'' (Spade, 2011: 13). Queer and trans theories thus argue that the category ''gender'' should be questioned, and it is to this that I now turn. --- Opening up the debate on ''gender'' ...'enough already with gender!'
The reason for such exasperation has to do with the way gender has become operationalized in 'gender research projects'... In many of these instances, gender is taken for granted as the point of departure for a set of descriptions of social practices, understood as an adjective that qualifies established objects of social science: gendered work, gendered performance, gendered play. In fact, there is little inquiry on the production of difference... (Butler, 2011b: 21) Collins' (2000) Black Feminist Thought argues that feminist work on gender has largely reflected the experiences of white, middle-class women. Writing mainly about African American women's experiences, Collins argues that many arguments within feminist theory, such as the role of women as carers in the home or the oppressive nature of family life, do not consider black women's experiences of (often poorly paid domestic) work or of the positive role that black families might play in helping to challenge racism. This is not to valorize ''the black family'' or to deny the significance of sexism, but rather to insist that feminism, and any account of gender relations, must take questions about race on board. As well as this absence of race, black feminist writers also identify the construction of racial stereotypes (such as, ''more oppressed/in need of feminist help'' or ''strong, black women/who don't need feminism'') within some theories. In relation to questions of sexuality, too, feminist theories have been criticized for their heteronormativity. Lorde's work, for example, has asked not only why race but also why sexuality, and lesbianism in particular, has been missing from some feminist accounts (Lorde, 1996). Rubin, too, argues that feminism is not necessarily the preferred theory of sexual oppression and that, in some cases, feminists have proposed ''a very conservative sexual morality'' (Rubin, 1984: 302).
Of course, this is a complicated picture, since Rubin's objections are, in some cases, toward forms of lesbian feminism that she found to be restrictive or hierarchical, but she is also making a case, not against feminism, but against theories that see sexuality merely as a derivative of gender. --- Material and structural accounts of gender Materialist or structuralist accounts focus on institutions, such as the family or the workplace, in order to examine how gender inequality is produced and reproduced within such settings. Connell's work, for example, describes gender as ''the structure of social relations that centres on the reproductive arena, and the set of practices that bring reproductive distinctions between bodies into social processes'' (Connell, 2009: 11). This is because she views gender as a pattern within wider social relations, and so is critical of any gender theory that does not consider issues such as education, domestic violence, or health, all of which are ''gendered.'' For Connell, then, societies exhibit a ''gender order'' (Connell, 2009: 73). Another example of structural theory is Risman's work on family relations (Risman, 1998). Risman argues that institutions, such as workplaces or the family, produce inequality between women and men. She makes a case for a focus on material constraints, which she sees as lacking from other theories. For Risman, gender is a structure that has consequences for people at individual, interactional, and institutional levels. Her study of single fathers is particularly interesting in this respect, as they were engaged in homemaking and caring for children. Indeed, Risman refers to single fathers' work as ''mothering'' (Risman, 1998: 52), since she found that responsibility for home and care is better explained by parental role rather than gender. Risman also says that single fathers ''described themselves as more feminine than did other men'' (Risman, 1998: 65). 
Thus, for Risman, a family structure in which there is one male parent determines ''gender,'' in the sense that this results in a particular sense of self (''more feminine'') and in work usually associated with women. In heterosexual couple families, women were far more likely to do this caring work. It is possible to raise some questions about this perspective, not least in terms of methodology, because Risman largely tests for gender as a measurable variable (e.g. see ''Measurement of Parenting Variables'' or ''Gendered selves'' (Risman, 1998: 59 and 76)). This does not allow much space for the negotiation of gender within an interactional context or the role of language in that process. Indeed, Risman is rather dismissive of in-depth interviews, due to the distortions and failures of memory that she sees in such methods. However, it is also important to acknowledge that Risman's view of gender as a structure does not see this as determinative, since, in some cases, those structures and their consequences may be challenged. Nevertheless, Risman's point is that institutional forms constrain ways of behaving; or, they have certain gendered consequences, such as inequalities between women and men. This approach to gender is often taken up in work on the stratification of social work organizations. Here, it is argued that the gendered structure of social work, with a disproportionate number of men in senior and management positions, results in gendered inequality for women in terms of treatment and career prospects (Dominelli, 2002b; Harlow, 2004; Kirwan, 1994). Yet it would also be possible to argue that such explanations tell us little about how gender works in these settings. Are ''men'' and ''women'' treated differently, regardless of race, sexuality, disability, class, or other issues? If the explanation for inequality is merely ''gender difference,'' then how exactly do gendered ideas about persons arise within social work in the first place?
How are dominant or oppressive ideas about gender resisted within social work teams or settings? Is gender the primary factor or point of identification for social workers? These kinds of questions, which structural explanations often avoid, bring us on to the question of how gender is produced through practices. --- The practice of gender For ethnomethodologists, a problem with structural accounts is that these assume an institutional form results in gendered consequences, but this does not ask how gender is achieved. What practices, for example, produce a gendered institution or society, and how are these, in fact, constitutive of something called ''gender?'' Instead, ethnomethodological accounts are concerned with how gender is achieved in everyday life; that is, with how all people ordinarily achieve a gender status. Garfinkel's study of Agnes, a person who presented as intersex but later revealed herself to be a transsexual woman, was undertaken not to demonstrate the special features of intersex persons or transsexualism, but rather to show that, for all people, ''sex status'' is an ordinary social achievement. Garfinkel argued that social life is ''rigorously dichotomized into the 'natural,' i.e., moral, entities of male and female'' (Garfinkel, 1984: 116), and so, in order to be taken for a ''normal'' person, one has to be taken for a man or woman. But this process involves various cues, to do with appearance, speech, biography, and so on, that each person (or ''member'') gives. So, for Garfinkel, ''members' practices alone produce the observable-tellable normal sexuality of persons'' (Garfinkel, 1984: 181). This work was developed further in Kessler and McKenna's study, which argued that the attribution of gender is a primary feature of everyday life, and that what they term ''gender role'' refers to a set of prescriptive characteristics or expectations (Kessler and McKenna, 1985: 11). 
Kessler and McKenna argue that this process of gendering persons into just one of the two categories (female or male) is fundamental to social life, and yet unremarkable. This allows, for example, for the presentation of gender as a social ''fact,'' in which some theorists or researchers account for certain behaviors as caused by gender (''gender causes x''). These arguments influenced the ''doing gender'' perspective of West and Zimmerman, which states that gender ''is the activity of managing situated conduct in light of normative conceptions of attitudes and activities appropriate for one's sex category'' (West and Zimmerman, 1987: 127). Crucially, this emphasizes the concept of accountability, because: a person engaged in virtually any activity may be held accountable for performance of that activity as a woman or a man... to 'do' gender is not always to live up to normative conceptions of femininity or masculinity; it is to engage in behavior at the risk of gender assessment. (West and Zimmerman, 1987: 136) In later work on ''doing difference,'' West and Fenstermaker have shown that similar processes apply to race and class (West and Fenstermaker, 2002). West and Zimmerman have also been critical of structural perspectives, which assume that gender may be undone in order to undo inequality. They argue that gender is not so easily abandoned, since all of everyday life is accountable in gendered terms (West and Zimmerman, 2009). Risman has suggested that the doing gender perspective is in danger simply of labelling any activity as masculinity or femininity and, along with others, argues that this may give the impression that nothing can change (Deutsch, 2007;Risman, 2009). In the sense, identified by Butler, of gender being treated as a given explanation for phenomena, Risman's point is important, but this would be a misreading of ethnomethodological claims. 
Ethnomethodologists explore what ordinary people count as examples of ''masculinity'' or ''femininity,'' and are interested in transformational possibilities. After all, they see gender as a moral, not merely practical, order. Thus, Deutsch's proposal to ''reserve the phrase 'doing gender' to refer to social interactions that reproduce gender difference and [to] use the phrase 'undoing gender' to refer to social interactions that reduce gender difference'' (Deutsch, 2007: 122) seems simplistic: how do we know when gender is being either reproduced or reduced? And isn't it possible that both are occurring within any interaction that appears to involve gender? Within social work, ethnomethodological perspectives on gender are rare, but there is research that considers gender as practice. Pösö's work, in which probation officers attempted to identify whether speakers in transcripts were female or male, demonstrates contradictory views of, and methods for identifying, gender. Generally, talk about emotions, relationships, or children was associated with women, and objectivity and reticence in speech with men. Pösö argues that gender is ''situational and... case-specific'' (Pösö, 2003: 175), and that more attention should be given to the ways in which it is practised. Scourfield's ethnographic study of a childcare social work team examines constructions of gender, and suggests ''an underlying dichotomy of men as abusers, and women as carers'' (Scourfield, 2003: 60). Women were primarily seen as responsible for children's welfare and they were expected to protect children from abusive men, with the ''failure to protect'' discourse a feature. Men were often described as dangerous, threatening, or absent/irrelevant, something that Scourfield sees as part of the continued overlooking of men, and blaming of women, within child protection.
Thus, while there are ''multiple gendered discourses in the culture of the social work office that constitute the knowledge available to social workers,'' these are, at the same time, both powerfully limiting and open to challenge (Scourfield, 2003: 151). --- Butler and performativity Butler's work on gender echoes aspects of ethnomethodology and doing gender, since it is concerned with gender as ''a set of repeated acts within a highly rigid regulatory frame that congeal over time to produce the appearance of substance'' (Butler, 1990: 33). However, Butler's work also demonstrates the influence of poststructural theories and a concern with the heteronormative aspects of gendered practices, noting that the: heterosexualization of desire requires and institutes the production of discrete and asymmetrical oppositions between 'feminine' and 'masculine,' where these are understood as expressive attributes of 'male' and 'female.' The cultural matrix through which gender identity has become intelligible requires that certain kinds of 'identities' cannot 'exist'-that is, those in which gender does not follow from sex and those in which the practices of desire do not 'follow' from either sex or gender. (Butler, 1990: 17) Of course, this does not mean that other kinds of ''gender'' do not exist, and Butler uses the example of drag to show how gender is practised, but also, that it is always imitative. By this she means that drag is no mere copy of an original gender, but rather that in ''imitating gender, drag implicitly reveals the imitative structure of gender itself-as well as its contingency'' (Butler, 1990: 137).
In Bodies That Matter, Butler clarifies this performative sense of gender, arguing that this is not about gender as an individual choice or mere play, since ''performativity must be understood not as a singular or deliberate 'act,' but, rather, as the reiterative and citational practice by which discourse produces the effects that it names'' (Butler, 2011a: xii). This is an important point because, while Butler's presentation of drag in Gender Trouble tends to suggest a challenge to traditional versions of gender, work by others, such as Bridges, argues that some forms of ''drag'' may be used as a temporary joke, actually to reinforce ''normal'' gender (Bridges, 2010). Indeed, Butler herself later noted that drag is not necessarily subversive (Butler, 2011a). Butler's argument is that gender precedes the individual; that is, that subjectivity must be taken up through gender, so one comes to be a person through being taken for a woman or man. When an individual does not appear to be gendered in a ''normal'' way, then it is that individual, rather than the gender order, that is questioned. In relation to social work, Green and Featherstone have analyzed Butler's potential, and have suggested that her work helps to challenge dogmatic and morally certain positions within anti-oppressive theory, which they describe as a ''project that believes in its own innocence and construct[s] social workers as disembodied carriers of a 'pure' project'' (Green and Featherstone, 2014: 32). --- Gender as discourse The emphasis in Butler's work on the question of discourse is taken up in a range of theories, influenced in part by the poststructuralist turn to language, which consider gender as discourse. These theories see gender as produced via social and textual practices, which regulate the ways in which we may think about men, women, and others. One important implication of this is that gender is not fixed, nor is it simply attached to individuals. 
Instead, people contest gendered meanings and subject positions, although, in order
to be taken seriously, they may well have to use familiar and expected ways of expressing themselves. Further, as Kessler and McKenna argued, and Butler acknowledged in her later work, the reception of a gendered claim, by audience or perceiver, matters. Smith's discussion of femininity as discourse suggests that the very concept ''femininity'' is produced through practices and their embeddedness in texts. So, gender is not merely a structure or ideology imposed upon un/willing subjects, but rather it is a ''complex of actual relations vested in texts'' (Smith, 1990: 163). This is an interesting point, as we hear here Smith's joint adherence to both a materialist and discursive account of gender, which she sees as mutually dependent, since gender is produced within both local and wider social relations.
That is, a discourse of gender relates to people's actions within localized settings and the organization of their ways of thinking and talking. Like Garfinkel, Smith insists that gender is a moral order, which means that it is coordinated with wider social and economic relations, so that femininity is ''a textual discourse vested in women's magazines and television, advertisements,'' and so on (Smith, 1990: 163). The moral order attempts to position women and femininity only in relation to the, more valued, men and masculinity, and for women this implies the need to be considered ''attractive'' or ''desirable,'' ''a condition of participation in circles organized heterosexually'' (Smith, 1990: 194). Smith refers to play and interplay within gendered discourse, in order to argue that it does not prescribe action, and yet she also reminds us that social texts establish recognizable concepts and categories, so that what is done may (or may not) be recognized as an instance of what is authorized. Thus, to take up gender within discourse is to be recognized as demonstrating a proper instance of such, that is, a ''proper'' man or woman. --- Returning to social work and gender In my research, I have argued for an analysis of gender as a practical achievement within everyday social work contexts. Drawing upon the ethnomethodological and discursive theories discussed earlier, I have suggested that gender is neither a characteristic merely acquired and passed on through socialization or reproduction of structural forms nor something inherent in the person. Rather, social work processes involve the production of gender through practical means, which relate both to immediate, local, and wider, institutional contexts. An example of this would be my analysis of the ways in which notions about ''gender role'' are used within the assessment of lesbian or gay foster care or adoption applicants (Hicks, 2011, 2013).
Here, I have demonstrated how social workers and applicants draw upon and produce ideas about gender in order to categorize ''identities'' or ''lifestyles,'' and I have noted that, in most cases, the issue of ''gender role models'' has to be: addressed in relation to gay and lesbian applicants, and those applicants, as well as some social workers, who, in other contexts, are opposed to notions of gender role, must conform since they are held accountable. And while there is resistance to gender norms here, a standard and institutional discourse dominates, one in which adherence to a moral order that upholds expected gender roles is required. (Hicks, 2013: 158) This is confirmed in other research (Wood, 2013), and reminds us of the ethnomethodological point that, where any person is perceived to question standard gender in some way, then it is usually that individual or group category, rather than the gender hierarchy, that is held to account, since gender functions as a moral order. This approach to the theorization of gender within social work emphasizes its reliance on other categories, such as race or sexuality, and its active production via interactions involving powerful linguistic claims, moving us away from essentialist, functionalist and, to some extent, structuralist accounts. In using this article to review various theorizations of gender, my point has been to highlight ways in which social work may be limited in the versions that it prioritizes. The tendency to treat gender in isolation, critiqued in some accounts (Brown, 1992; Shah, 1989), or to take up a solely structural view indicates a reification of gender and an ignorance of its production through practice. My argument has been that, bar a few examples (Pösö, 2003; Scourfield, 2003), social work rarely connects with gender as practice, ironic for a discipline so concerned with practical dynamics.
This, then, is also an argument for attention to the ways in which gender is produced through social work, something that draws upon both the practical and the discursive, rather than starting with something termed ''gender'' and then looking for its effects. This may prove controversial in a field somewhat dominated by anti-discriminatory approaches; that is, where gender is considered at all; yet it is my argument that taking up Butler's ''inquiry on the production of difference'' (Butler, 2011b: 21) may open up possibilities for less restrictive accounts of gender within social work's various fields.
Introduction Promoting condom use has been a key intervention in preventing the spread of HIV and other sexually transmitted infections (STIs). When used consistently, condoms can be up to 95% effective in preventing HIV transmission; consistent condom users are 10 to 20 times less likely to be infected after exposure to HIV than inconsistent or non-users [1]. However, condom use remains highly varied around the world. One of the strongest determinants of condom use is relationship type: condom use is typically high with commercial sex partners but exceedingly low with spouses or regular partners [2]. Thailand provides a case study that exemplifies such a pattern. Improving knowledge of the factors associated with condom use in sex with regular partners and in sex with casual partners is a critical step towards increasing condom use and decreasing transmission of HIV and other STIs. --- Factors that Affect Condom Use Demographic and relationship factors. Demographic factors such as age, gender, education, and urbanicity have been linked to condom use in multiple previous studies. In many countries, men are more likely to report condom use than women, in part because men are more likely to have sex with casual partners and/or sex workers with whom condom use is more common [3]. Age is also important in understanding patterns of condom use; younger people are more likely to have better knowledge of condoms and may be more likely to use condoms [4]. Urbanicity and higher education have also been associated with better knowledge of condoms [4]. Condom use in committed partnerships is often very rare; for example, marital status was the strongest predictor of condom use among women in Uganda, with currently married women least likely to report condom use at last sex [4].
Relationship characteristics, such as the duration of a relationship and the frequency of new relationships, also affect condom use; Ku and colleagues illustrated the 'sawtooth hypothesis' of condom use, where condom use declines as relationships lengthen and successive relationships are less likely to begin with condom usage [5]. Further, previous research in Madagascar has explored the fluidity between paid sex interactions and personal relationships, with subsequent effects on condom use [6]. Condom factors. Other factors, such as social norms around condoms and condoms' impact on male pleasure, are commonly provided reasons for lack of condom use. Multiple studies from Thailand have documented perceptions among men that condom use reduces the pleasure of sexual intercourse [7]. Therefore, use of condoms requires a compelling reason -such as fear of HIV infection -to override the loss of pleasure [8]. The availability of other methods of contraception, with fewer perceived drawbacks than condoms, may also explain unwillingness to use condoms. In 2000, the most common contraceptive method used in Thailand was the pill (26.8% of women), followed by female sterilization (22.6%); condom use was uncommon as a main contraceptive method (1.7%) [9]. Another common explanation for the lack of condom use in regular partnerships is the perception that condoms are primarily associated with disease prevention rather than contraception. In regular partnerships, condom use is typically higher when one partner is known to be high-risk than when neither partner acknowledges high-risk status [10]. Also, condom promotion interventions among sex workers and their clients tend to be more successful than condom promotion interventions for committed relationships [11].
More than one-third of respondents to a 1990 survey in Thailand agreed that asking to use a condom with a regular partner is insulting to the partner, due to the insinuation that condoms are only necessary when risk of disease transmission exists [8]. Access to condoms is a key prerequisite for condom use, but one that remains understudied. The 2006 National Sexual Behavior Survey of Thailand (the data used for this analysis) found that the second most common reason provided by men who did not use a condom at last sex with a casual partner was 'not prepared/could not find a condom at the time' [12]. A study of South African secondary students found that increased access to condoms was associated with higher intentions to use condoms [13]. Condom access has been improved in many areas through the use of social marketing campaigns, which serve to increase availability and decrease stigma [14,15]. --- HIV and Condoms in Thailand Thailand provides a unique setting for the study of HIV and condoms. The first HIV case in Thailand was identified in 1985, and the first indigenous transmission was documented in 1987 [16]. The Thai HIV epidemic was first identified in injecting drug users, but quickly spread to commercial sex workers [17]. Sex workers in Thailand are largely brothel-based; though sex work is illegal, it has a stable existence in Thai society [8]. In 1989, 3.1% of brothel-based sex workers were HIV-positive; by 1994, the proportion had risen to 31% [14]. In 1991, the Thai government implemented a national program to encourage condom use in all sexual encounters with commercial sex workers [17]. The 100% Condom Use program included provision of free condoms to commercial sex establishments, sanctions against establishments that did not use condoms consistently, and a media campaign to provide HIV education and encourage condom use with sex workers [18].
Additionally, multiple large research studies in Thailand have explored knowledge, attitudes, and practices in relation to HIV and sexual behavior [8]. As a result of the condom promotion program, condom use with sex workers in Thailand jumped from 14% in 1989 to 95% in 1993 [19]. From 1989 to 2000, the number of STIs in Thailand plummeted by more than 95% [20]. By 1996, the program may have prevented more than 2 million HIV infections [21]. The 100% Condom Use program has been lauded around the world as a model of a cost-effective intervention to prevent HIV and STIs [20,11]. While condom use with sex workers is common in Thailand, condom use is inconsistent with casual partners and extremely rare among married couples [22]. Only 21% of sexually active Thai high school students reported ever having used condoms [23]. For most recent intercourse, 27% of high school men but just 0.5% of high school women reported using a condom [24]. Qualitative research has found that the main barriers to condom use are interference with male sexual pleasure and the perception of condoms as prophylaxis for use with prostitutes [8]. Thailand's unique cultural and historical context contributes to a setting with varied levels of condom use despite the presence of HIV and substantial government intervention. As HIV transmission due to commercial sex declines, thanks to the success of the 100% Condom Use program, the relative importance of HIV transmission through casual and regular partners increases. Therefore, the research aim of this analysis is to determine which factors are associated with higher levels of condom use among heterosexual Thai males in sex with regular partners and in sex with casual partners. --- Methods The 2006 National Sexual Behavior Study (NSBS) provides the data used in this analysis. The data were collected by the Institute for Population and Social Research at Mahidol University in Bangkok, Thailand with support from UNAIDS and the UN Thailand Country Team. 
The 2006 NSBS is the third nationally representative cross-sectional study in Thailand to track sexual behaviors as well as knowledge and attitudes related to HIV/AIDS. The respondents were between 18 and 59 years of age. The consent form was read and explained to them by the interviewers. If a respondent agreed to participate in the survey, the interviewer signed the informed consent form for the record, indicating that the respondent had been informed and had verbally consented to take part in the study. Respondents did not sign the form themselves because, in Thailand, respondents are generally not comfortable signing documents. The study protocol, including all data collection and consent procedures, was reviewed and approved, on condition that the study would not involve respondents under 18 years of age, by the Institutional Review Board of the Institute for Population and Social Research, Mahidol University, Bangkok (of which none of the authors of this study were members). --- Data Collection The national probability sample ensured equal participation from men and women, young adults (18-24) and older adults, and residents of rural areas, non-Bangkok urban areas, and Bangkok. To recruit non-Bangkok urban and rural participants, 14 provinces (out of 75 in the country) were selected randomly, with selection probability proportional to population size. Within each selected province, two districts were selected; within each district, rural and urban areas were enumerated. Fourteen of the enumerated urban areas were selected, from which four election districts were randomly selected, with nine interviews per age/sex stratum completed within each election district. Among the enumerated rural areas, sub-districts were identified and three villages in each sub-district were selected (proportional to population size).
For each village, a complete household listing was obtained, and three interviews were completed for each age/sex stratum. In Bangkok, 63 election districts were randomly selected, and four households were systematically selected from each district. The survey was completed in-person with sex-matched interviewers. Interviewer teams composed of two male and two female interviewers, with one interviewer assigned to each age/sex stratum (young male, young female, older male, older female), were sent to each geographical location for data collection. In all geographical areas, the interviewers were sent to different households; all household members were listed by sex and age group. The interviewer then recruited a household member in the age/sex stratum assigned to that interviewer. If such a person lived in the household but was not immediately available, the interviewers made appointments to come back; if there was no appropriate person in the household, the interviewer moved to the household to the left. A total of 6,048 surveys were completed, of which exactly half (3,024) were completed by men; each stratum of age (18-24 and 25-59), gender (male and female), and location (Bangkok, non-Bangkok urban, and rural) contained 504 responses. The overall response rate was 81%, with higher rates among young adults (89%) than older adults (73%). --- Variables of Interest Condom use. There are two outcomes of interest for this analysis: reported condom use by males in relationships with regular partners and reported condom use by males in relationships with casual partners. The survey instrument assessed frequency of condom use separately for the most recent regular partner and the most recent casual partner in the past 12 months. The response options provided were 'never,' 'sometimes,' 'about half the time,' 'mostly,' and 'always.' For analysis, reported condom use was condensed into a three-level variable with categories of never, sometimes/about half, and mostly/always.
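As an illustration, the collapse of the five response options into the three analysis categories described above could be coded as follows. This is a hypothetical sketch: the survey's actual variable codes and labels are not given in the text, so the string labels here are assumptions.

```python
# Hypothetical sketch of the recoding described above: the five response
# options for condom-use frequency are collapsed into the three analysis
# levels (never, sometimes/about half, mostly/always). The string labels
# are assumptions, not the survey's actual codes.
THREE_LEVEL = {
    "never": "never",
    "sometimes": "sometimes/about half",
    "about half the time": "sometimes/about half",
    "mostly": "mostly/always",
    "always": "mostly/always",
}

def recode_condom_use(responses):
    """Map each five-level response to its three-level analysis category."""
    return [THREE_LEVEL[r] for r in responses]

print(recode_condom_use(["never", "about half the time", "always"]))
```

The same dictionary-mapping pattern would serve for the other condensed variables (education, occupation, marital status) described below.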
Using a three-level outcome preserved statistical efficiency, especially with the smaller sample size of men with casual partners, while retaining the distinctions between different amounts of condom use. However, logistic regression analysis was also conducted (results not shown in this study) using dichotomous outcomes of condom use, defined according to the differing distributions of condom use in the two groups (regular and casual partners). We used a dichotomous outcome of any condom use vs. never using condoms for regular partners, and condom use in half or more sex acts vs. using condoms sometimes or never for casual partners. The differences between these analyses and the three-level analysis described above were small and generally do not change the interpretation of the results, so, to simplify the analysis, the logistic regression results are not presented other than to describe the few differences. Partner type. Regular partners were defined as a partner with whom the respondent had sex for a period of one year or more, or, if the relationship was less than one year, a sexual relationship expected to continue in the future. Casual partners were partners who were neither regular partners nor sex workers. Demographic and socioeconomic status. The demographic and socioeconomic status variables include age, geographical location, education, occupation type, and marital status. For analysis, age was included as a continuous variable. Geographical location (used in the sampling frame of the NSBS) was collected as Bangkok, other urban, and rural. Respondents from Bangkok and from other urban areas demonstrated similar patterns in condom use as well as other variables of interest, so geographic location was reduced to urban and rural categories only. Education level was recorded as the highest level of schooling completed. Education levels were condensed in order to maximize power.
Occupation type was categorized as professional, sales/service, skilled technical, labor, and unemployed. Among men with casual partners, occupation categories were reduced to high-skill (professional and skilled technical) and low-skill (labor and sales/service) due to their similarities in association with condom use and in order to maximize statistical efficiency. Marital status was collected as unmarried, married and registered, married but not registered, and widowed/divorced/separated. Due to low numbers of widowed, divorced, or separated men, marital status was re-coded to three groups: single, married and registered, and married and not registered. Marital status was not included in the analysis of men with casual partners since 84% of these men were unmarried. When a marriage is registered, the partnership is considered more serious than simply living together, in which case men will usually still consider themselves single. Marriage without registration is mostly customary marriage, which is also considered more serious than unmarried partners living together. The seriousness attached to marriage is hypothesized to be related to trust and fidelity. Access to condoms. The survey included the question, ''In your community or workplace, is there a place to distribute free or low-price condoms?'' For analysis, responses were dichotomized to 'yes' vs. all other responses ('no,' 'not sure,' and 'don't know'). Condom knowledge and attitudes. Condom knowledge was measured by asking an open-ended question about which actions could prevent someone from contracting HIV. Respondents who mentioned using a condom without prompting from the interviewer were considered to have knowledge of the HIV-preventive benefits of condoms. Attitude towards condoms was measured by asking participants to select the AIDS-prevention strategy they would choose, among reducing sexual activity, using condoms consistently, or both.
Respondents who chose using condoms consistently, or both strategies, were considered to have pro-condom attitudes. Partner characteristics. The total number of partners (regular, casual, partners with whom things or favors were exchanged for sex, and partners with whom money was exchanged for sex) reported in the past 12 months was calculated and dichotomized into 'one partner' and 'more than one partner.' The duration of the most recent relationship in the past 12 months was categorized as 30 days or less, 31 to 90 days, and more than 90 days. Men were asked whether they had ever given money in exchange for sex; responses were categorized as never, more than a year ago, and within the past year. --- Analysis This analysis included only males, because a very high proportion of Thai females report just one lifetime sexual partner (their spouse), with whom condoms are rarely used, while males report more variation in partner type and condom use [12]. Of the 3,024 men participating in the survey, 377 were excluded for reporting no history of sexual activity, 28 for reporting sexual attraction to males, and 313 for not having a casual or regular partner in the past 12 months. An additional 24 cases were excluded due to missing information on condom access, knowledge, or attitudes. The analytic dataset includes 2,281 men, of whom 1,998 contribute to the analysis of regular partners and 520 contribute to the analysis of casual partners; 237 men contribute to both analysis sets. All analyses were completed using SAS version 9.2 [25]. Since the sampling design was intended to capture a nationally representative sample, the data were weighted to national demographic characteristics. Chi-square statistics were used to evaluate differences between proportions across levels of each covariate.
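The exclusion cascade above amounts to a sequence of filters applied to the survey records; a minimal sketch, with hypothetical field names and toy records rather than the actual NSBS data, might look like:

```python
# Illustrative sketch (not the authors' SAS code): applying the exclusion
# criteria described in the text. Field names and records are invented.

respondents = [
    {"ever_sex": True,  "attracted_to_males": False,
     "partner_12m": "regular", "complete": True},   # retained
    {"ever_sex": False, "attracted_to_males": False,
     "partner_12m": None, "complete": True},        # excluded: no sexual history
    {"ever_sex": True,  "attracted_to_males": True,
     "partner_12m": "casual", "complete": True},    # excluded: attraction to males
    {"ever_sex": True,  "attracted_to_males": False,
     "partner_12m": None, "complete": True},        # excluded: no recent partner
    {"ever_sex": True,  "attracted_to_males": False,
     "partner_12m": "casual", "complete": False},   # excluded: missing data
]

analytic = [r for r in respondents
            if r["ever_sex"]
            and not r["attracted_to_males"]
            and r["partner_12m"] is not None
            and r["complete"]]
```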
Bivariate regressions provided the crude associations between condom use (with regular and with casual partners) and each of the predictor variables. The proportional odds models were built by examining socio-demographic factors and condom/partner factors separately and then together; variables were eliminated from the full model based on statistical significance and tests of the difference in -2 times the log likelihood. The score test for the proportional odds assumption was used to check the fit of the proportional odds models; the validity of the assumption was also verified by manual calculation of odds ratios using different dichotomous cutpoints in the categorization of condom use (analysis not shown here). Possible interactions between socio-demographic factors and condom/partner factors were examined, and collinearity between variables was evaluated; no notable results were found. For casual partners, models were constructed predicting both high and low condom use. Since most men with casual partners reported some level of condom use, predicting low condom use provided slightly more power and smaller confidence intervals, but did not change the magnitude of effects or the variables included in the final model. Therefore, for clarity of presentation, we present models predicting high condom use for both men with casual partners and men with regular partners. In the conceptual framework for the analysis, the independent variables are classified into two groups, with demographic characteristics serving as control variables. The two groups are 1) "risk perception": factors related to perceptions of the risks of HIV/AIDS and STDs; and 2) "condom motivations": factors related to the motivation to take preventive action by using condoms.
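The manual check of the proportional odds assumption mentioned above (comparing odds ratios at different dichotomous cutpoints) can be illustrated with invented counts; under the assumption, the exposure odds ratio should be similar at every cutpoint of the ordered outcome:

```python
# Sketch of the manual proportional-odds check: with a three-level outcome
# (0 = never, 1 = sometimes, 2 = half or more), compute the exposure OR at
# each dichotomous cutpoint. All counts below are invented for illustration.

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table [[a, b], [c, d]] = [[exposed hi, exposed lo],
    [unexposed hi, unexposed lo]]."""
    return (a * d) / (b * c)

# counts[exposure][outcome_level]; exposure is 1/0, levels are 0/1/2
counts = {1: [20, 30, 50], 0: [40, 30, 30]}

def or_at_cutpoint(k):
    """OR for outcome >= k vs. outcome < k."""
    a = sum(counts[1][k:]); b = sum(counts[1][:k])
    c = sum(counts[0][k:]); d = sum(counts[0][:k])
    return odds_ratio(a, b, c, d)

or_any_use = or_at_cutpoint(1)       # any use vs. never
or_high_use = or_at_cutpoint(2)      # half-or-more vs. less
```

If the two ORs diverge badly, the proportional odds assumption is suspect and a less restrictive model may be needed.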
Risk perception variables include education, marital status, number of partners in the past 12 months, duration of the relationship (the newness of the partner), and experience of giving money for sex. Condom motivation variables include unprompted knowledge of condom effectiveness in HIV prevention, choice of condom use as a strategy to reduce HIV risk, and self-reported access to convenient and cheap condoms. The control variables, which also help address possible bias due to the selectivity of men engaging in regular or casual relations, are age, location (urban/rural), and occupation. --- Results The study population consisted of men with an average age of 32 (median 28; mode 18), ranging from 18 to 59. Overall, 15% of the sample had less than a fourth-grade education, while 13% were educated beyond high school. Almost all men with less than a fourth-grade education were over 35. More than one-fifth of men (22.2%) reported having more than one sexual partner in the past year. Employment in a skilled technical field was most common (30.1% of men), while 17.6% were unemployed. More than half of the sample (59.5%) were married (35.1% registered and 24.4% not registered). Descriptive analysis (see Table 1) revealed differences between men with regular partners and men with casual partners. The weighted mean age among men reporting regular partners was 33.7 (SD: 11.2); men with casual partners were younger, with a mean age of 27.4 (SD: 6.8). Almost half of men with casual partners (46.9%) were aged 18-24. Men with casual partners had more partners in the past 12 months than men with regular partners (casual: 3.6 [SD 3.7]; regular: 1.6 [SD 2.09]). Among men with regular partners, the majority (91.8%) had just one regular partner and no casual partners in the past 12 months. Few men (5.8%) had one regular partner and one or more casual partners, while 2.4% had more than one regular partner.
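The survey-weighted means and SDs reported above follow the standard weighted-moment formulas; this minimal sketch uses invented ages and weights, not the NSBS weighting scheme:

```python
# Minimal sketch of survey-weighted mean and SD (toy data, not NSBS weights).
import math

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def weighted_sd(values, weights):
    m = weighted_mean(values, weights)
    var = sum(w * (v - m) ** 2 for v, w in zip(values, weights)) / sum(weights)
    return math.sqrt(var)

ages = [20, 30, 40, 50]
weights = [1.0, 2.0, 2.0, 1.0]   # invented post-stratification weights
wm = weighted_mean(ages, weights)
wsd = weighted_sd(ages, weights)
```

Weighting matters here because the age-stratified sample must be re-scaled to national demographic proportions before descriptive statistics are meaningful.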
Among men with casual partners, slightly more than half (55.2%) had no regular partners, and 53.1% had just one casual partner in the past 12 months. More than one-third (39.2%) had one regular partner and one or more casual partners in the past 12 months. --- Men with Regular Partners In bivariate analysis, all socioeconomic and condom-related factors that we considered were associated with condom use (p<0.05; see Table 2). Increased age was associated with decreased condom use, while urban residence was associated with increased odds of reporting higher levels of condom use. Education displayed a strong trend (Cochran-Armitage trend test, p<0.0001), with increasing levels of education associated with increased condom use: compared to men with less than four years of education, men with post-high school education were 13 times more likely to report higher levels of condom use. Professionals were found to use condoms more than any other occupational group except the unemployed; men employed in labor, skilled technical, and sales/service jobs were all less likely than professionals to report higher levels of condom use. The seriousness attached to marriage was related to condom use: being married and registered was associated with a ten-fold reduction in the odds of reporting higher levels of condom use, and being married but not registered with a six-fold reduction. Men in regular partnerships who reported having access to condoms were slightly less likely to report using condoms than men without access. Condom knowledge and pro-condom strategy choice were each associated with more than double the odds of higher levels of condom use, as was having more than one partner in the past twelve months. Duration of relationship did not have a significant effect on condom use.
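The Cochran-Armitage trend test cited above can be sketched directly from its textbook formula; the counts below are invented, and the implementation is a simplified illustration rather than the authors' SAS code:

```python
# Hedged sketch of a Cochran-Armitage trend test across ordered education
# levels (invented data). successes[i] = men at education level i reporting
# higher condom use; totals[i] = men at that level; scores default to 0,1,2,...
import math

def cochran_armitage_z(successes, totals, scores=None):
    if scores is None:
        scores = list(range(len(totals)))
    N = sum(totals)
    p = sum(successes) / N                      # overall success proportion
    t = sum(s * (r - n * p)
            for s, r, n in zip(scores, successes, totals))
    var = p * (1 - p) * (sum(n * s * s for n, s in zip(totals, scores))
                         - sum(n * s for n, s in zip(totals, scores)) ** 2 / N)
    return t / math.sqrt(var)

# proportions 0.2, 0.4, 0.6 across three levels: a clear increasing trend
z = cochran_armitage_z([10, 20, 30], [50, 50, 50])
```

A large positive z here reflects a monotone increase in condom use with education, the pattern the paper reports.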
While recent payment for sex (within the past year) was associated with increased likelihood of reporting higher levels of condom use, payment for sex more than a year ago was associated with decreased condom use. Adjusting for socio-demographic factors in the proportional odds model moderated the effect of condom and partner factors (see Table 3). The final model retained age, education (with the trend still clear and in the expected direction, but not statistically significant), occupation, marital status, condom knowledge, pro-condom strategy, and relationship duration. It should be noted that recent payment for sex (in the past year) continued to predict increased likelihood of condom use when logistic regression was conducted (tables not shown) using the dichotomous outcome of any condom use vs. never using condoms. The largest magnitude of effect was observed for education, particularly high levels of completed education. Being married retained its substantial association with reduced levels of condom use. Shorter relationships (30 days or less, compared to more than 90 days) were associated with lower levels of condom use. --- Men with Casual Partners Due to the smaller sample size, many fewer factors were associated with condom use among men with casual partners (see Table 4). Age had a very small positive effect on condom use, while urban residence did not affect condom use. Moderate levels of education increased the odds of reporting higher levels of condom use, reaching significance for men with a senior high school education compared to those with less than a seventh-grade education. The associations for junior high school and bachelor's-level education were also positive, as expected, but not statistically significant, possibly because of the small number of cases in these two groups. It is also possible that, apart from formal education, skill and informal training are important for condom use behavior.
Contrary to the finding among regular partnerships, men employed in labor, sales, or service jobs were more likely to use condoms with casual partners than their professional/technical counterparts. Unprompted knowledge of condom effectiveness in preventing HIV transmission was associated with higher levels of condom use. Shorter relationships were also associated with higher levels of condom use than relationships lasting more than 90 days. Finally, paying for sex more than a year ago was associated with lower likelihood of using condoms in a current relationship. Condom access and having more than one partner in the past year had moderately positive effects on condom use, but did not reach significance. The most parsimonious proportional odds model contained age, education, occupation, condom knowledge, relationship duration, and history of paid sex (Table 5). However, these variables explained less of the variation in condom use (R^2 = 0.14) than the model for regular partners. --- Discussion This analysis, utilizing data from a national survey of sexual behavior in Thailand, emphasizes the importance of education in determining condom use in both regular and casual partnerships: among men with regular partners and men with casual partners alike, higher levels of education were associated with higher levels of condom use. However, condom-specific knowledge also had an impact distinct from years of schooling, particularly for men in casual partnerships. Self-reported condom access was not associated with condom use among men with regular partners, but may have a moderate effect among men with casual partners (though the effect did not reach significance in this analysis, which was constrained by limited sample size). Employment type and duration of relationship were important in explaining condom use for both groups, but their effects differed between them.
The finding that access to condoms, as measured in this study, was not relevant to patterns of condom use is interesting; in bivariate and multivariate analyses among men with regular partners and men with casual partners, having cheap and convenient access to condoms had very little effect on condom use. This result is somewhat contrary to expectation, as lack of access to a condom was the second most commonly cited reason for non-use of condoms at last sex with a casual partner in the same survey used for this analysis [12]. It is possible that the relevance of condom access was not captured by asking about community locations for cheap and convenient condoms; that is, access to condoms in the community may not correlate with having a condom available prior to sexual activity. This analysis also found that a relatively low proportion of men (34%) reported having access to a convenient location for cheap condoms, compared to limited previous research in Africa finding that 82.5% and 63.5% of young men could locate condoms within a ten-minute walk [26,27]. The importance of education in explaining condom use patterns is underscored by the fact that it was the variable with the largest magnitude of impact in the final multivariate models for both men with regular partners and men with casual partners; the significance of education in explaining condom use patterns has been established by previous research [28,29]. Similarly, among men with regular partners, being legally married was associated with much lower condom use, a finding consistent with previous research [4,30] in which condom use with a regular partner is viewed as an insult to the partner [8] or as representing infidelity [30].
In this analysis, condom-specific knowledge did not fully align with years of formal education; after adjusting for schooling, knowledge of condom effectiveness was significantly associated with increased odds of reporting higher levels of condom use in both groups of men. The effect of knowing the condom's role in HIV prevention was stronger among men with casual partners than among men with regular partners, which may be related to higher perceived HIV risk among men with casual partners. Among men with regular partners, by contrast, higher levels of condom use appeared to depend on endorsing a pro-condom strategy; to promote condom use in this group, one may first have to change condom attitudes. This was not found among men with casual partners. Employment in lower-level jobs such as labor, sales, and services, compared to professional jobs, was associated with decreased odds of reporting higher levels of condom use among men with regular partners; laborers in particular were significantly associated with the lowest level of condom use. These results are similar to previous research establishing lower levels of condom use among laborers, farmers, and factory workers [28]. However, among men with casual partners, occupations in sales, service, or labor were associated with increased use of condoms compared to professional/technical occupations. This difference is intriguing; further research is warranted and should explore characteristics of the men's partners in addition to characteristics of the men themselves. Studying the selectivity of men who engage in casual relationships would shed light on this discrepancy. Among men with casual partners, shorter relationships were associated with more condom use, consistent with previous research and supporting the sawtooth hypothesis [5].
These men may be more aware of the risk of disease and more concerned with pregnancy prevention with their 'new' casual partners. In contrast, among men with regular partners, shorter relationships were associated with less condom use. On the one hand, these men and their partners may be a selective group of faithful, honeymoon-period couples; on the other hand, they may have fertility intentions and want to start a family. In contrast to casual partners, and contrary to the sawtooth hypothesis, regular partners were perhaps more committed and may have adopted a trust strategy even at the very beginning of their dedicated relationship. This finding deserves additional research. Men with casual partners who had never paid for sex tended to use condoms more frequently than men who had paid for sex in the past; men who never paid for sex may be a selective group who are more conscious about safe sex and avoiding sexual risk. Among men with regular partners, a history of paying for sex within the past year did not reach significance in the proportional odds model, but was significant in the logistic model (analysis not shown here). This suggests that men with regular partners who recently paid for sex are more likely to use condoms in sexual relations with their regular partner. Further studies are needed to test whether these men may still visit sex workers and/or be aware of the possibility that they are infected. Findings from this study help to formulate a framework for future studies of the dynamics of condom use with different partners. First, although the characteristics and motivations of men engaging in casual sex are not in themselves a public health policy issue, understanding the selectivity of those who have extramarital and/or casual relations may provide important insight into subsequent condom use behavior.
Future studies should include all relevant demographic and socioeconomic status measures as control variables when analyzing the dynamics of condom use; at a minimum, age, urban/rural residence, and occupation should be investigated. Second, 'risk perception' factors (factors related to perceived risk of HIV/AIDS and STDs) may be more relevant to condom use behavior than condom motivation factors. Risk perception is associated with formal education in general, but where data are available, life-skills knowledge and other informal training in particular should also be investigated. Most importantly, risk perception is related to the perceived nature and type of relationship and to partner characteristics. Under the theoretical framework of trust and fidelity, these factors include the degree of attachment in marriage and partnership and the newness of the relationship. For regular partners, a higher level of attachment in marriage is associated with trust and fidelity and consequently with less condom use; likewise, under the fidelity assumption, condom use is rarely seen during the honeymoon period. The dynamism of condom behavior is that, for casual partners, in line with the sawtooth hypothesis, condom use is high during the first meetings of a casual relationship and declines with the duration and strength of the relationship. How to keep risk perception of casual relationships salient over time is the challenge for intervention design. Lastly, perceived risk is also related to previous or current sexual experience and to the primary person to be protected from infection, self or partner. For regular partners, current experience of visiting sex workers (perceived probability of self-infection), or perhaps having multiple partners, is associated with more condom use, probably to protect married or regular partners.
In contrast, sex with casual partners was more protected among men who had no experience with sex workers; this protection is probably meant for the men themselves rather than for their partners. Third, condom motivations, factors related to the motivation to take preventive action by using condoms, should also be highlighted in a framework of condom behavior. In this study, knowledge of condom effectiveness in HIV prevention and choice of condom use as a strategy to reduce HIV risk were associated with higher levels of condom use. However, further studies on access to convenient and cheap condom sources are still needed. This is especially important because public health interventions with appropriate and effective health information messages still have to be carefully designed, even in a population where the majority of people are aware of condom effectiveness in preventing HIV. The strengths of this analysis include the substantial sample size, drawn from a national probability sample of adults in Thailand, a country with substantial variation in condom use due in part to a unique history of condom promotion messages. However, relatively few men reported having a casual partner in the past year; this limitation hindered our ability to determine the true association between condom use and many variables of interest. Notably, less of the variance in condom use among men with casual partners was explained by the factors considered in this analysis. Nevertheless, important results were drawn from the analysis of men with regular partners, confirming previous findings on the impact of marriage and education. Clearly, more research is needed on the use of condoms during encounters with casual partners in Thailand.
Particularly since HIV transmission through commercial sex has plummeted following the government's 100% Condom Use program, HIV transmission through non-commercial partners is of increasing importance. Additionally, future studies should explore further dimensions of condom access that may be more relevant in explaining condom use patterns. Exploring the determinants of perceived access to condoms may also be fruitful in identifying populations at risk and effective interventions to increase access. Apart from the issue of access, one should also take into account the dynamics of men's decisions or strategies to use or not use a condom with different types of partners, at different stages of a relationship, and in family and non-family contexts. Self-perception of one's own risk of infection, related to previous or recent relationships with sex workers or other casual partners, also shapes condom use strategies with current partners. Continued effort towards determining the factors associated with condom use among Thai males with their different types of partners, and in a variety of partnership circumstances, is crucial for designing appropriate and wide-ranging interventions to increase condom use and decrease transmission of HIV and other STIs. Lastly, the findings from this study suggest that policy and interventions to promote condom use to prevent HIV/AIDS and STDs in Thailand need to take into account both the demand and the supply side. That is, not only must condom information and services be available and accessible but, in contrast to campaigns on condom use with sex workers, the dynamics and sensitivities of condom use with more intimate partners have to be addressed. It is especially important to distinguish between regular partners who are simply living together and those more attached to each other by registered marriage. Casual partners who are not paid partners but have an intimate relationship need to be delicately attended to.
Risk perceptions of HIV/AIDS and STDs and motivation toward preventive action among these partners are not straightforward and interact with partner intimacy and fidelity issues. First, the national HIV/AIDS prevention campaign should start from the fact that everyone is at risk and that there are no specific risk groups, regardless of age, sex, or marital status. Second, condom promotion should be desensitized by placing it in a broader health perspective. The focus should be on total health, including reproductive health, healthy family planning methods for spacing, healthy childbirth, and prevention of STIs whose symptoms may not show. Condom campaigns should also incorporate prevention of BV and HPV, where sexual relations (or at least current sexual relations) may not be involved. Third, the program should, at the same time, tackle the political, religious, and community barriers underlying sexual stigma in general and the stigma on casual and multiple partners in particular. Interventions should address gender bias, especially concerning female virginity, and family values that might overly stigmatize extramarital relations. Lastly, condom campaigns should in general be expressed in terms of sanitation and health, intimacy, human relationships, family, and caring, rather than framed around sexual disease. --- Author Contributions Analyzed the data: PK AC. Contributed reagents/materials/analysis tools: AC PK. Wrote the paper: PK AC. Planning study design, acquisition of data, execution, literature review, analysis, interpretation and discussion: AC PK. Statistical analysis: PK.
Introduction Individual- and area-level measures of socio-economic status (SES) are independent factors influencing major diseases and health outcomes [1,2]. In many developed countries, composite measures of SES and socio-economic deprivation, such as SEIFA (Socio-Economic Indexes for Areas) in Australia and the Carstairs index in the United Kingdom, have been created [3,4]. Such indices are useful for geographically targeted resource allocation, research, and health education/interventions; they can be used to determine funding formulas for primary healthcare and social services, to relate SES to health outcomes and risk factors/behaviours, and to help community-based service providers price and pitch appropriate services for communities of different SES. Visual impairment (VI) is a worldwide problem with huge socio-economic consequences [5]. Individual low SES, measured as low income, education, or social class, has been associated with VI in several studies [6]. At a population level, the distribution of VI may be related to socio-economic factors [6]. This is particularly true in Asia, where income inequality is rising in many newly developed economies, such as China, Taiwan, and Singapore [7]. Both individual- and area-level SES have been reported to have independent predictive power in capturing community-wide health disparities [8]. In Singapore, we have previously reported associations between VI and individual- and area-level measures of SES, such as low income, education, and occupation, among Indians and Malays [9,10]. No study to date, however, has examined the relationship between a composite socio-economic disadvantage index (SEDI) incorporating several socio-economic measures and the presence and severity of VI in Singapore.
We recently created a socio-economic disadvantage index (SEDI) to measure area-level SES reflecting composite socio-economic circumstances (household and personal income, housing, education, occupation) [11]. A single composite index is more meaningful for understanding area-level factors, allows comparisons between groups, and is useful for geographically targeted resource allocation, research, and health education/interventions for communities of different SES. The aim of the current study was to investigate the independent associations of individual- and area-level SES parameters with the presence and severity of VI in a large, multi-ethnic Asian population in Singapore, using individual-level SES measures and the recently created SEDI score representing area-level SES [11]. --- Materials and Methods --- Study population and setting Singapore is an island state with a total land area of about 700 km² [12]. Based on the latest census data, Singapore's total population was 5.08 million as at end-June 2010, of which 3.77 million were Singapore residents [12]. The three major ethnic groups in Singapore are Chinese, Malay, and Indian, with the majority of migrants coming from across Asia. Most Chinese in Singapore are ethnic descendants of immigrants from the provinces of Fujian and Guangdong in southern China, comprising several dialect groups: Hokkien (41%), Teochew (21%), Cantonese (15%), Hakka (11.4%), Hainanese (5%), and other minority groups [13,14]. Singapore's Indian residents encompass persons with ancestry originating from the Indian subcontinent, including India, Pakistan, Bangladesh, Sri Lanka, and Nepal [13,14]. Singapore's Malay residents include all people of Malay or Indonesian origin (e.g., Javanese, Boyanese, and Bugis) [15].
--- Individual-level SES and covariates data Data on individual-level SES, covariates and VI outcomes were derived from the Singapore Epidemiology of Eye Diseases (SEED) Program, comprising population-based cross-sectional data on the three major ethnic groups (Chinese, Malays and Indians) in Singapore: the Singapore Malay Eye Study (SiMES, 2004-2006), the Singapore Indian Eye Study (SINDI, 2007-2009), and the Singapore Chinese Eye Study (SCES, 2009-2011). These studies followed the same study design and sampling areas as previously published [13,15]. They used age-stratified random sampling to select participants in each ethnic group and recruited 3280 ethnic Malays, 3400 Indians, and 3353 Chinese aged 40-80 years residing in the south-western part of Singapore, covering 8 development guide plan (DGP) areas (Bukit Batok, Bukit Merah, Bukit Timah, Clementi, Jurong East, Jurong West, Outram and Queenstown). The sampling areas were chosen in the south-western part of Singapore because they fairly represent the Singapore resident population in terms of age distribution, housing types and socio-economic status [11,12,16]. Written, informed consent was obtained from each participant, and the studies adhered to the Declaration of Helsinki. Ethical approval was obtained from the Institutional Review Board at the Singapore Eye Research Institute. Education status, monthly income and housing status were used as measures of individual-level SES. Information on these SES measures was obtained using a standardized questionnaire. Persons were classified by educational level into three categories: 1) primary or lower (≤6 years), 2) secondary (7 to 10 years) and 3) post-secondary (≥11 years, including university education). Income was measured in Singapore dollars (SGD) and three income categories were created: 1) low (≤SGD 1000), 2) middle (SGD 1001-2000), and 3) high (>SGD 2000).
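For illustration, the three-level coding of education and income described above can be sketched as a small helper. The function and category labels are our own, not from the study:

```python
def ses_categories(years_of_education, monthly_income_sgd):
    """Code education and income into the three-level categories
    described in the text (labels are illustrative)."""
    if years_of_education <= 6:
        education = "primary or lower"
    elif years_of_education <= 10:
        education = "secondary"
    else:
        education = "post-secondary"

    if monthly_income_sgd <= 1000:
        income = "low"
    elif monthly_income_sgd <= 2000:
        income = "middle"
    else:
        income = "high"
    return education, income
```

Such explicit cut-points make the categorization reproducible across the three SEED sub-studies, which used the same questionnaire.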
Housing type was classified as follows: 1) small public apartments (1-2 rooms), 2) medium public apartments (3 rooms), and 3) large public apartments (≥4 rooms) or private housing. We created a composite 'low SES' variable defined as primary or lower education, monthly income less than 2000 SGD and residing in a 1- to 2-room apartment [17]. Information on covariates was obtained from a standardized interview questionnaire (demographic, life-style, medication and medical history), physical examination (anthropometric measures and blood pressure) and laboratory examination (blood glucose and lipid profile). Diabetes mellitus was defined as a random blood glucose of ≥11.1 mmol/l, use of diabetic medication or a physician diagnosis of diabetes [11,18]. Hypertension was defined as a systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg, or the use of anti-hypertensive drugs [11,19]. Hyperlipidaemia was defined as total cholesterol ≥6.2 mmol/l or the use of lipid-lowering medications [11,20]. Cardiovascular disease (CVD) history was defined as a self-reported history of angina, heart attack or stroke [21]. Smoking was categorized into current, past and never smoker, and alcohol drinking was categorized into drinkers and non-drinkers. --- Area-level SES data Area-level SES was assessed using a SEDI created from 12 variables in the 2010 Singapore census through a principal component analysis [12,22]. Details of the derivation of the socio-economic indices were described in the previous study [11]. Out of an initial 23 area attributes from the census, the following 12 were included: primary education and below; not literate; unemployed; construction industry; hotels and restaurants industry; clerical workers; service and sales workers; plant & machine operators & assemblers; cleaners; laborers & related workers; monthly personal income less than SGD 2,500; and monthly household income less than SGD 4,000. A high SEDI score indicates a relatively poor SES.
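As a rough illustration of how a composite index like SEDI can be built, the sketch below standardizes a set of area attributes (coded so that higher values indicate more disadvantage) and projects them onto the first principal component via power iteration. This is a simplified stand-in, not the study's actual derivation, which is detailed in reference [11]:

```python
import math

def first_principal_component(rows, n_iter=200):
    """First-PC loadings of column-standardized data via power
    iteration on the covariance matrix (a simplified stand-in for
    the principal component analysis used to build SEDI)."""
    n, p = len(rows), len(rows[0])
    # standardize each column to mean 0, sd 1 (guard against sd = 0)
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    sds = [math.sqrt(sum((r[j] - means[j]) ** 2 for r in rows) / n) or 1.0
           for j in range(p)]
    z = [[(r[j] - means[j]) / sds[j] for j in range(p)] for r in rows]
    # covariance matrix of the standardized data
    cov = [[sum(z[i][a] * z[i][b] for i in range(n)) / n for b in range(p)]
           for a in range(p)]
    # power iteration for the dominant eigenvector
    v = [1.0] * p
    for _ in range(n_iter):
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # fix the sign so that greater disadvantage loads positively
    if sum(v) < 0:
        v = [-x for x in v]
    return v, z

def sedi_scores(rows):
    """Project standardized rows onto the first PC; a higher score
    means a relatively more disadvantaged area."""
    v, z = first_principal_component(rows)
    return [sum(zi[j] * v[j] for j in range(len(v))) for zi in z]
```

Because all 12 attributes are coded in the same disadvantage-increasing direction, the first component acts as a single summary axis, which is what makes a one-number SEDI score per DGP area possible.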
--- Assessment of outcomes Visual acuity (VA) was measured using logarithm of the minimum angle of resolution (logMAR) charts [23]. VI was defined based on presenting VA (PVA) to take into account VI due to uncorrected refractive error, which could reflect low SES. Based on PVA in the better-seeing eye, the presence and severity of VI was categorized into no VI (PVA 20/40 or better, logMAR ≤0.30), low vision (PVA worse than 20/40 but better than 20/200, logMAR >0.30 to <1.00), and blindness (PVA of 20/200 or worse, logMAR ≥1.00) [24][25][26]. Any VI was defined as low vision or blindness in the better-seeing eye. In addition to defining VI based on the better-seeing eye, we also used PVA in the worse-seeing eye to form six mutually exclusive categories: bilateral normal vision (reference), unilateral low vision with normal fellow-eye vision, bilateral low vision, unilateral blindness with normal fellow-eye vision, unilateral blindness with low vision, and bilateral blindness [27]. --- Statistical Analyses Data analysis was performed using Stata version 13.0 (Stata Corp, College Station, TX, USA) and the level of significance was set at p<0.05. We combined all three ethnic groups for the main analysis (n = 10,033). Age-adjusted prevalence rates of VI and blindness were calculated by the direct method using the year 2010 Singapore census population as the standard population [12]. We used multi-level mixed-effects logistic regression to identify independent associations of individual-level SES and area-level SEDI with the presence of any VI, taking into account the clustering of individuals within DGP areas [28]. The generalized linear latent and mixed models (GLLAMM) package was used to fit multi-level mixed-effects models for the multinomial outcomes of presence and severity of VI, low vision and blindness [29,30]. Statistical assessment of interaction between individual- and area-level low SES was performed by fitting models containing cross-product terms.
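The logMAR cut-offs and the six bilateral categories described above can be expressed as a small classifier. This is a minimal sketch; function names and category labels are illustrative:

```python
def vi_category(logmar_pva):
    """Classify presenting VA (logMAR) in one eye using the cut-offs
    in the text: no VI (<=0.30), low vision (>0.30 to <1.00),
    blindness (>=1.00)."""
    if logmar_pva <= 0.30:
        return "no VI"
    if logmar_pva < 1.00:
        return "low vision"
    return "blindness"

def any_vi(logmar_better_eye):
    """Any VI = low vision or blindness in the better-seeing eye."""
    return vi_category(logmar_better_eye) != "no VI"

def bilateral_category(logmar_right, logmar_left):
    """Map the two eyes into six mutually exclusive severity
    categories (labels paraphrase the text)."""
    better, worse = sorted([logmar_right, logmar_left])
    table = {
        ("no VI", "no VI"): "bilateral normal vision",
        ("no VI", "low vision"): "unilateral low vision, fellow eye normal",
        ("low vision", "low vision"): "bilateral low vision",
        ("no VI", "blindness"): "unilateral blindness, fellow eye normal",
        ("low vision", "blindness"): "unilateral blindness and low vision",
        ("blindness", "blindness"): "bilateral blindness",
    }
    return table[(vi_category(better), vi_category(worse))]
```

Classifying on the better-seeing eye is the conservative choice for "any VI", while the worse-eye categories recover the unilateral cases that a better-eye definition alone would miss.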
Associations were examined after adjusting for individual demographic (age, gender, ethnicity), medical (hypertension, diabetes, hyperlipidaemia, history of cardiovascular disease [CVD]) and life-style (alcohol and smoking status) risk factors. Finally, we performed sub-group analyses stratified by age group (40-65, 65-74 and ≥75 years), gender (male, female) and ethnicity (Chinese, Malay, Indian). We also examined the association of low SES and SEDI with participant characteristics using multivariate logistic and linear regression models. --- Results Out of 10,033 participants, 9993 (99.6%) were included in the final analysis after excluding those with unknown outcomes or DGP areas. The crude and age-adjusted prevalences of any VI were 27.96% (95% confidence interval [CI], 27.08-28.84%) and 19.62% (18.8-20.4%), respectively; those of low vision and blindness were 19.00% (18.18-19.82%) and 0.62% (0.47-0.77%), respectively. VI data were assigned to 8 DGP areas only, since the sampling areas of SiMES, SINDI and SCES were located in the south-western part of Singapore. SEDI scores of the included DGP areas ranged from 79.8 to 120.1: Bukit Batok (100.6), Bukit Merah (110.1), Bukit Timah (79.8), Clementi (100.6), Jurong East (99.9), Jurong West (101.6), Outram (120.1), and Queenstown (106.9) (Table 1). Compared with participants with normal vision, those with low vision and blindness were more likely to be older, female and Malay, and to have lower SES and a higher prevalence of smoking, diabetes, hypertension, hyperlipidaemia and CVD. Under-corrected refractive error accounted for the majority of any VI (54.9%) and low vision (48.5%), and cataract represented a large proportion of blindness (61.5%) (data not shown). The association of both individual- and area-level SES with selected participant characteristics is shown in Table 2.
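The direct method used for the age-adjusted prevalence estimates weights each age-stratum rate by that stratum's share of the standard (2010 census) population. A minimal sketch, with hypothetical strata and counts:

```python
def direct_age_standardized_rate(stratum_rates, standard_pop):
    """Age-adjusted prevalence by the direct method: weight each
    age-stratum rate by the standard population's share of that
    stratum, then sum."""
    total = sum(standard_pop.values())
    return sum(stratum_rates[a] * standard_pop[a] / total
               for a in stratum_rates)

# Hypothetical example: raw rates rise with age, but the standard
# population is weighted towards younger strata, so the adjusted
# rate falls below the crude rate of an older sample.
rates = {"40-49": 0.10, "50-59": 0.20, "60+": 0.40}
pop = {"40-49": 500, "50-59": 300, "60+": 200}
adjusted = direct_age_standardized_rate(rates, pop)  # 0.19
```

This is why the age-adjusted prevalence of any VI (19.62%) is lower than the crude prevalence (27.96%): the age-stratified sampling over-represented older participants relative to the census population.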
Low individual SES was associated with older age, female gender, Malay and Indian ethnicity, current and past smoking, diabetes, hypertension, CVD, and higher SEDI scores. Ever consumption of alcohol was inversely associated with low SES. Increasing age, diabetes mellitus, Malay and Indian ethnicity, and low individual SES were associated with higher SEDI scores. Table 3 shows the associations of both individual- and area-level SES with the presence and severity of VI. Individual low SES was associated with the presence of any VI, low vision and blindness. The area-level SEDI score was positively associated with the presence of any VI and low vision. The odds ratio (OR, 95% CI) of any VI was 2.11 (1.88-2.37) for low SES and 1.07 (1.02-1.13) per 1 standard deviation (SD) increase in SEDI. When stratified by unilateral/bilateral categories, low SES showed significant associations with all severity categories, in particular with bilateral blindness (OR = 2.97, 95% CI = 1.60-5.47) and unilateral blindness with low vision (OR [95% CI] = 3.82 [2.69-5.37]). SEDI showed a significant association with bilateral low vision only (1.09, 1.02-1.15 per 1 SD increase in SEDI). There was a significant interaction between individual- and area-level SES for the presence of any VI, low vision and blindness, and for all severity categories. In sub-group analyses, the association between individual low SES and any VI remained significant across all age, gender and ethnic groups and in the majority of the DGP areas (Table 4). Although a consistent positive association was observed between area-level SEDI and any VI, the associations were significant among participants aged between 40 and 65 years, males, and those with individual low SES. The interaction and sub-group analyses showed that the effect of area-level SEDI on VI differed with individual SES, and that the effect of individual low SES on VI differed across geographic areas.
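For context, an odds ratio with a Wald 95% confidence interval can be computed from a 2x2 table as below. Note that the ORs reported in Table 3 come from adjusted multi-level models, not from raw tables, so this sketch only illustrates the measure itself; the input counts are hypothetical:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

An OR above 1 with a lower CI bound above 1, as with the 2.11 (1.88-2.37) reported for low SES, indicates a statistically significant positive association at the 5% level.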
--- Discussion In this large population-based multi-ethnic sample of Asian adults, we found both individual- and area-level SES to be associated with the presence and severity of VI, independent of demographic, medical and life-style risk factors. In addition, we found the associations between area-level SEDI score and VI to be more pronounced in certain subgroups, such as adults aged 40-65 years and males. To our knowledge, this is the first study to use both individual- and area-level disadvantage indices to assess socio-economic disparities in visual outcomes in Asia. Importantly, although Singapore has the third highest life expectancy in the world and had a low infant mortality rate (2 per 1,000 live births) in 2013 [31], socio-economic disadvantage was still associated with VI, suggesting that similar or worse disparities may be evident in other developed countries worldwide. The SEDI score created in our study may provide a methodology for assessing the impact of area-level SES on VI in other Asian communities. Previous studies examining the association of disadvantage indices with VI in the US, Europe, South Africa and Australia have shown inconsistent results [25,[32][33][34][35]. Neighbourhood SES was found to be associated with low vision [25], late presentation of glaucoma [36] and severity of glaucoma at presentation [37], but a few studies have reported no association of SES with VI [33] or with presenting VA in those with age-related macular degeneration [38]. Our findings are consistent with the EPIC-Norfolk Eye Study in Europe, which reported both individual- and area-level disadvantage indices to be associated with VI, and extend those findings to Asian populations [25]. However, the effects of neighbourhoods are small in comparison with the individual-level effect of being in a low SES group.
Several studies have shown area-level socio-economic disadvantage to be associated with major risk factors for VI, including diabetes and hypertension, and with adverse health outcomes including depression, CVD and mortality [39][40][41][42][43][44]. The neighbourhood environment affects health outcomes through mechanisms such as availability of healthcare services; physical and financial access to health care; infrastructure (e.g., parks and exercise facilities) that supports a healthy lifestyle; environmental pollution; and attitudes towards health behaviour [2,45,46]. Studies that reported an association between neighbourhood SES and visual outcomes suggested access to care as one of the mediating factors; for example, those living in areas with fewer eye care services [35,47] or those with no insurance coverage [48] were more likely to have adverse visual outcomes. In Singapore, most areas are well connected to health care offering vision services, and therefore physical access to care is unlikely to explain socio-economic disparities in vision-related outcomes. The Singapore health care financing system comprises means-tested government subsidies ranging between 20% and 80%, with the balance paid by patients out-of-pocket (for out-patient care) or from Medisave (for in-patient care) [49,50]. The reason for the socio-economic disparities in VI in Singapore is therefore not clear. Cataract was the major cause of blindness in this study. An earlier report showed low SES to be significantly associated with cataract (an out-patient diagnosis) but not with cataract surgery, which is readily affordable to most citizens in Singapore through government subsidy and Medisave payments [51,52]. Socio-economic disadvantage has been suggested to influence one's ability to access refractive error correction [53,54].
As under-corrected refractive error accounted for the majority of any VI and low vision in this population, the out-of-pocket cost of correcting refractive error, an out-patient service, could explain the socio-economic disparity in VI in this population. Inadequate literacy was found to be associated with VI among Singaporean Malays, and those with limited literacy were more likely to be elderly and to have lower income [9]. Therefore, poor health literacy and lack of awareness could have contributed to blindness among those with low SES in Singapore. In addition, those with low SES could have poor dietary habits or a poor metabolic profile, leading to an increased prevalence of major blinding eye diseases such as age-related macular degeneration or diabetic retinopathy [55,56]. In the current study, consistent with other studies, females had a higher prevalence of blindness than males [57]. This could be explained by longer life expectancy [58], lower education [59], greater biological susceptibility to ocular conditions leading to blindness [57], a lower prevalence of cataract surgery [52], and poorer visual outcomes following cataract surgery [52] among females in Singapore. As the need for eye care services such as annual eye examinations, refractive correction and cataract surgery in Singapore is expected to rise substantially with rapid population ageing, urbanisation and the increasing prevalence of diabetes and hypertension, more targeted public health interventions, such as free eye screening services and glasses and increased subsidies for cataract surgery, are needed to reduce socio-economic disparities in vision health. The strengths of this study include a large, representative, population-based design and the use of multi-level mixed-effects models to adjust for potential individual confounders. Our study nevertheless has some limitations.
First, we derived our SEDI score from the 2010 census data, and it might not entirely reflect participants' SES at the time of examination, since outcome data were collected over three different periods. Second, due to the cross-sectional study design, causal inferences cannot be made; for example, we cannot determine whether those residing in low-SES areas develop VI or whether those with VI move to low-SES areas. Third, findings from this Asian population in Singapore might not be generalizable to other Asian populations in the region, owing to differences in health care systems, prevalence of eye diseases and composition of ethnic groups. Additionally, SEDI scores reflect the disadvantage of the areas in which individuals reside, rather than of the individuals themselves: not all individuals living in an area with a high SEDI score are disadvantaged, and a person living in an area with a low SEDI score may still be disadvantaged. Finally, a large-scale study comprising a nationally representative population is needed to confirm this socio-economic association with VI in Singapore. In conclusion, we found an independent positive association of both individual- and area-level SES with the presence and severity of VI. Our findings, if confirmed in future prospective studies, may have implications for developing targeted public health interventions aiming to reduce the burden of visual loss among those living in low-SES areas, in addition to those with low individual SES. --- As the study involves human participants, the data cannot be made freely available in the manuscript, the supplemental files, or a public repository due to ethical restrictions. Nevertheless, the data are available from the Singapore Eye Research Institutional Ethics Committee for researchers who meet the criteria for access to confidential data.
Requests can be sent to the Singapore Eye Research Institute. --- This study was funded by the Biomedical Research Council (BMRC), 08/1/35/19/550, and the National Medical Research Council (NMRC), STaR/0003/2008, Singapore. The funding agencies had no role in the research presented in the paper, and the researchers were fully independent in pursuing this research. All authors contributed to the intellectual development of this paper. EL and AE conceptualized the study. AE and CS designed the analytical plan. WW analyzed the data and wrote the first draft. AE, CS, CYC, MO, TYW and EL provided critical corrections to the manuscript. TYW supervised data collection. --- Author Contributions Conceived and designed the experiments: EL AE CS. Performed the experiments: WW CS. Analyzed the data: WW. Wrote the paper: WW. Provided critical corrections to the manuscript: AE CS CYC MO TYW EL. Supervised data collection: TYW.
Associations between SES measures and the presence and severity of VI were examined using multi-level mixed-effects logistic and multinomial regression models. The age-adjusted prevalence of any VI was 19.62% (low vision = 19%, blindness = 0.62%). Both individual- and area-level SES were positively associated with any VI and low vision after adjusting for confounders. The odds ratio (95% confidence interval) of any VI was 2.11