I’ve (literally) just finished* my evidence review for the
JRF on poverty and social networks that I’ve blogged about here,
here
and here
and sent it off for their feedback (which I’ve just received some of, all
generally quite positive *phew*). Because it’s still drafty, I’m not going to
release it to the world for a while yet, but I did want to blog about the
methodological thoughts it’s led me to have, and this links to the title of the
post. The phrase “it’s not what you know but who you know” really demonstrates
the “folk” interest in the links between social networks and socio-economic
mobility. However, now I’ve read a load of this evidence, methodologically, it
actually seems to be more about how you know about these social networks. This
is for two specific reasons: methods and disciplinary contexts; and a bigger epistemological question about experience and how it is “collected” by the research process.
On the first point, the review was the first time in a while that I had read a good chunk of North American social science. I’ve read about the “methods
wars” between qualitative and quantitative social science in America, and the
feeling that the latter had won; I’d also listened to American delegates at the
International Interpretive Policy Analysis conference bemoan the dominance of the
rational actor model and quantitative methods. But I’d really not realised how
dominant quants were. Successive studies were based on big datasets, with endless
methodology sections explaining how survey instruments were used, with big
scary equations that, as a qualitative researcher, I didn’t have a clue about.
What struck me about them was that they usually ended with, firstly, a lot of uncertainty about their findings and, secondly, a lot of unanswered questions.
So, for example, some regression models would support theories, others would
contradict them; when you controlled for different factors the overall
conclusion was “meh”. And the unanswered questions didn’t help – invariably
there would be the conclusion “but we cannot understand causation here”.
Which brings me to one of the best methodology books I’ve read in a long time, Mike Savage’s Identities
and Social Change in Britain since 1940. It’s essentially a history of
social science methods in the UK from 1940 and it’s the historical approach
that I love. To go back to the title of this whole blog – urbanity and history
– that I chose three years ago, it’s always struck me how social scientists
don’t get history, or even have much of a sensitivity to time. Savage sort of
makes this point – the technocratic sociology that emerged in the 1950s was
a study of the now. I often find it funny – and I’m guilty of doing this myself
– how the strictures of social science writing mean that people will present
evidence that is decades old in such a way as to suggest it’s contemporary,
without any reflection on how the social world might have changed since then,
and reflexively, how that knowledge has since changed the world that is now the object of study. The
presumption is that the knowledge is scientific and presented in the linear, “standing on the shoulders of giants” way. What I love about Savage’s book is that it highlights the social context from which the methods of modern British sociology/social science, particularly the survey and the semi-structured interview, emerged.
So, why does all this matter? Well, for me it brings us onto the second, bigger epistemological question. The best way to talk about this is how it emerged in the evidence review itself. Behind that folk idea of “it’s not what you know, but who you know” is the idea that if poorer people know
wealthier people then they will do better in life. It also leads to the more
negative idea, which I’m deeply uncomfortable with for its use in “underclass”
discourses, of “cultures” of poverty – that experiencing poverty around others
in the same situation lessens your chances of leaving poverty because you
behave like them and this excludes you from wider society and opportunities. A
lot of this sort of theorisation lies behind the “neighbourhood effects”
literature I read for the evidence review; I discussed the problems with this
in this
blog post. However, I want to go back to a striking theme running through
the neighbourhood effects literature: basically, most quantitative studies that
try and find a neighbourhood effect either find a very small impact, or none at
all, or the data is too messy to form consistent conclusions. Qualitative
studies tend to find there is an impact; people recognise it when they work or live in these neighbourhoods. This contrast is discussed in a very well-titled
article by Atkinson and Kintrea – Opportunity and despair, it’s all in there (£) – and in
the critique of neighbourhood effects by Tom Slater (who critiques the distance
of the quants researchers from their research subjects).
In this
quants study of neighbourhood effects in Scotland, van Ham and Manley suggest
that it might be down to the geography that we measure neighbourhood effects
on. Usually we have to do statistical analysis at the level of census super
output areas or datazones, or similar administrative boundaries that have
quite large populations (1000+). Like any “contagion”, if neighbourhood effects
are diluted among a large population then they will have less of an impact. Van
Ham and Manley suggest that we might need geographies of just a few hundred
people to find a neighbourhood effect. However, this leads me to question (prompted
by a point someone made at a Poverty Alliance panel in Glasgow on Friday 28
February) whether this then is a
neighbourhood effect – surely if the geography becomes really small then we’re
talking about a sociological or socio-psychological impact; residence is just
by-the-by as the social network could be anywhere? These are all points raised in the quants neighbourhood effects literature.
However, I think we are talking about a bigger
epistemological point here that we need to recognise fully and debate and play
with. For example, this
study for the Joseph Rowntree Foundation found no quantitative evidence of postcode discrimination affecting young people’s employment opportunities (a neighbourhood effect), whereas this qualitative study, also for the JRF, found that young people who live in deprived neighbourhoods that are distant from local labour markets and geographically isolated have reduced geographical horizons in their job-search strategies (also a neighbourhood effect). These different findings, I think, can only be explained by differences in how we know, not what we know. These are
different experiences of young people in deprived neighbourhoods understood in
different ways.
As a qualitative researcher I use my research to illuminate
how socio-structural factors – particularly socio-economic inequality and
status – have an impact on people’s lives. However, a problem for me is that a
broader focus on “lived experience” of conditions such as poverty can easily
lapse into the personal and focus on individual deficiencies or strengths,
effectively blaming the poor, rather than emphasising the socio-structural forces that limit people’s opportunities and shape their psychological reactions in any given context. This is a criticism that is levelled quite often at neighbourhood effects researchers, but most of those I know put the
socio-structural first and want to understand neighbourhood effects as
something that makes the situation that much worse. At its most problematic
though, this individualising of experience and “witnessing” of negative
experiences does lead into the sort of nonsense the Centre for Social Justice
come up with – the
production of ignorance – and deflects us from an emphasis on
socio-structural causes. This is very apparent with the recent debate in the UK
on changing the
definition of child poverty and the conflation of cause and impact in the
proposed measure (as well as initially coming up with something that is
unmeasurable). I’m also always left with a nagging feeling that, without the devolution of taxation and welfare matters to the Scottish Parliament and Government, policy-makers up here fall way too easily onto person-blaming, behavioural policy answers in areas like public health and social work, rather than facing the more obvious fact that if a few people had a lot less income and wealth and a lot of people had a little more, then a lot of the problems we have would be far less severe. We get away with this victim-blaming because we can frame it in our lovely, fluffy, Scottish social democratic discourse.
And I’ve just realised I’ve written myself into a corner.
What am I saying? It seems that a lot of methodological choices in the social
sciences are actually deeply circumscribed by the disciplinary and cultural
context you find yourself in. If I were in the US my career would probably have gone nowhere because of my aversion to, and inability at, wrangling big datasets and producing equations that are penis extensions. However, in the UK, with a
methodological pluralism that welcomes qualitative research at the ethnographic
end, I need to be careful that my research does highlight the structural, and
doesn’t presume that cultural changes are all that are needed in society to
produce more positive outcomes.
* I finished the review on Thursday 27th, started writing this on Friday 28th and then got waylaid by a gin and tonic. I have now finished writing it.