As the West moves into the 21st century, it is experiencing a crisis of belief. Little by little, and in no particular order, we have lost confidence in political leaders, religious leaders, business leaders, bankers, journalists, civil servants, broadcasters, economists, lawyers and celebrities of all types. Perhaps our parents and grandparents didn’t believe everything they were told, but they took much more on trust; they assumed that, normally speaking, people in authority weren’t actively out to deceive them. Nowadays, many people trust practically nothing they hear from official sources. Perhaps this could be called paranoid or cynical, but it’s based on the sound notion that the established media outlets tell us only what is considered acceptable for us to hear. At best that means giving only a partial truth or just not reporting inconvenient stories.
As I’ve said before, this realisation is largely due to the Internet removing the elite’s monopoly on news and allowing ‘facts’ to be checked and, all too often, found to be deceitful. The authorities have tried to stem the tide of doubt by co-opting Google, Facebook, Twitter and the rest to filter out unacceptable news and views and, in an attempt to brand non-official sources as unreliable and mischievous, the term ‘fake news’ has recently been coined. This expression struck an immediate chord – and the speed with which it rebounded on the elite has been astounding. ‘Fake news’ has become the everyday term for the constant stream of deceptive and carefully massaged news stories we are fed by the official channels. It’s as if the elite has handed people the exact expression they were searching for.
Of course, some people still accept what they’re told – and often defend it shrilly, but the pool of the uncritical, ‘low-info’ faithful is becoming increasingly stagnant and shallow. All of us can see things and hear views that just aren’t reported in the MSM; indeed, it takes a conscious effort to avoid or ignore them. Once we realise that we’re being misled in one area, we’re much more likely to suspect what we’re told about other things, too. We then tend to do a bit of research of our own…
The problem for the mainstream media and the elite they defend is that the alternative sources of information are now difficult for them to ignore, but whenever the MSM mention some ‘appallingly fake’ blog or ‘extreme and hate-filled’ personality, you can almost hear the clicks of their readers and viewers going to check them out – and normally finding out they represent perfectly reasonable viewpoints. In short, there’s no putting the genie back in the bottle and attempts at doing so often make things worse.
Things have moved very quickly, as social changes sometimes do, and have now come to such a pass that the present-day elite has lost the essential cover it needs from the media. As a result it is exposed like a slug caught in the sunlight and is in the process of drying out and losing its legitimacy. A decade ago the very idea of an ‘elite’ was seen as fairly fanciful, even though every culture in history has been ruled by a self-perpetuating, self-serving group of people. But nowadays it seems obvious that one still exists and that it doesn’t have our best interests at heart.
But of course, the uncontrolled mass of the Internet itself is hardly a font of crystal-clear truth, and non-official news sources are just as suspect as the mainstream ones. A high percentage of stories to be found out there are indeed ‘fake news’ and often of a rather cruder type than the official ‘fake news’. This is the jungle we have to hike through; we seem to be surrounded by a verdant undergrowth of deception and counter-deception. Who on Earth should we believe?
* * *
Perhaps we should believe the ‘experts’ – we are often told we should; but that seems to depend very much on what your definition of an ‘expert’ is.
During last year’s EU referendum campaign, pro-leave politician Michael Gove was soundly ridiculed for opining that “the people of this country have had enough of experts”. But it seems he had at least half a point; despite every ‘expert’ you could think of being wheeled out to tell the electorate what to do, the majority did the opposite and voted for Brexit. After the event, the grim prognostications of the ‘experts’ for the immediate aftermath turned out to be embarrassingly false. The Bank of England cut interest rates and printed money in response to an economic slowdown that turned out to be imaginary. Indeed, economic growth appears to have accelerated after the vote.
People do start to notice when ‘experts’ are more often wrong than right. Take the example of economist Nouriel Roubini, dubbed ‘Dr Doom’. Roubini is famed for predicting the crash of 2007–08, but that’s hardly an amazing achievement, since ‘Dr Doom’ has been living down to his gloomy moniker for many a long year. He predicted recessions in 2004, 2005, 2006 and 2007 before the stopped clock finally showed the correct time in 2008.
His record then continues with predictions that the price of oil would stay below $40, when in fact it rose to over $80, and dire predictions of a hedge fund panic which never materialized. Through 2009 Roubini issued a succession of five stock market calls warning of falls, even as the market continued to climb. As Nadeem Walayat wrote in The Market Oracle:
‘The problem is that Nouriel Roubini and other academics WILL eventually be right, i.e. he will eventually get his drop in stock prices. BUT those that followed his calls will have missed out on one of the greatest bull runs in history or worse lost money shorting the market, as basically where the stock market is concerned, Nouriel’s actual track record shows that he does not have a clue!’
Most recently, Roubini has been predicting a disastrous downturn in the stock market after the (unpredicted by him or anyone else) ‘Trump Boom’. His doom-laden warning is being lapped up once again by outlets such as The Guardian and the Huffington Post, partly perhaps because he’s saying exactly what they want to hear. Could it be time to go on a share-buying spree?
To most people it’s puzzling, not just that the same ‘experts’ keep popping up, shamelessly unfazed, with false predictions time and again, but that politicians, the mainstream media and others in authority seem to retain their trust in them no matter how many times they’ve been proved wrong. Is it any business of an expert to make predictions and issue warnings, anyway? Well, perhaps it is, but in my experience real experts are generally reluctant to make specific predictions; they know there are too many variables.
Surely an expert is someone with great experience in a particular area and whose cautious judgement is therefore far more reliable than the average punter’s. The suspicion is growing that a lot of so-called ‘experts’ are people who have smarmed their way into the top jobs by networking with the right people and always being on-message. These ‘experts’ are probably more adept and focused on politics and self-publicity than on their supposed area of expertise. They are, in fact, fully paid-up members of the elite.
* * *
But those whom we feel to be truly expert in their field are still viewed with respect. This Ipsos MORI poll of people’s trust in different professions in the UK is informative:
Politicians are predictably low on the scale, with only 21% of respondents generally believing what they say, and journalists are not far ahead, level-pegging with estate agents on 25%. But at the other end of the scale there are some groups which remain trusted. Doctors come out top on 89%, then teachers, judges and scientists at 86%, 80% and 79% respectively; interestingly, the police do quite well on 68%. I would suppose that this is largely because these groups tend not to pronounce on areas outside their narrow field of expertise and are very cautious in their prognostications. We might trust our GP when it comes to our aching back, but not necessarily take his financial advice or listen respectfully to his political views. These are also people we meet face to face and we tend to trust people more when we know them.
The less trusted groups can see our faith in them is slipping away, so it becomes more and more tempting for them to try to co-opt the trust we still have elsewhere. It’s nothing new; remember those people in white coats and thick-framed glasses doing earnest experiments in the background of soap-powder commercials? That’s right, they were scientists, busy formulating a new whiter-than-white product. In reality, we all knew they were just under-employed actors happy to do half a day’s work, but it says it all that advertisers thought it productive to involve ‘scientists’ or ‘doctors’ in their push to sell. Perhaps today’s advertising is just a bit more subtle, but the ‘scientist’ is still a favourite figure of authority and what ‘the science’ says is generally accepted by most people to be true.
But this enviable position of trust is something that had to be earned. What separates a scientist from a high-priest, alchemist or magician is more than just a white coat; what makes the difference is the way that science is done. The basis of this is the ‘scientific method’ which rests on a series of steps laying out how to conduct science.
As the diagram shows, a dispassionate observer forms an idea (or hypothesis) to explain what he’s seeing then, in the light of repeated experiment, adapts the original hypothesis over and over – or quite possibly, abandons it as false and goes back to square one. If it does ever reach the stage where the hypothesis matches the observations accurately, then it can, tentatively, be considered a theory.
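That loop can even be sketched in code. The following toy Python sketch is purely illustrative – the linear ‘hypothesis’, the step size and the numbers are all invented here, and real science is of course far less mechanical – but it captures the shape of the process: test, revise, and either tentatively accept or abandon.

```python
# A toy sketch of the observe-hypothesise-test-revise loop described above.
# Everything here (the linear 'hypothesis', the step size, the tolerance)
# is invented purely for illustration.

def run_experiment(hypothesis, observations):
    """Return the worst disagreement between prediction and observation."""
    return max(abs(hypothesis(x) - y) for x, y in observations)

def scientific_method(observations, tolerance=0.1, max_revisions=100):
    slope = 0.0  # initial hypothesis: y is unrelated to x
    for _ in range(max_revisions):
        hypothesis = lambda x, m=slope: m * x
        error = run_experiment(hypothesis, observations)
        if error <= tolerance:
            return slope  # tentatively accept: hypothesis matches observation
        slope += 0.1      # revise the hypothesis in the light of the evidence
    return None           # abandon it as false and go back to square one

# Observations consistent with y = 2x; the loop converges on a slope of ~2.0
print(scientific_method([(1, 2.0), (2, 4.0), (3, 6.0)]))
```

Note that even in this caricature, failure is a legitimate outcome: if no amount of revision fits the evidence, the hypothesis is simply dropped.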
But this isn’t all there is to it. The correct and ethical practice of science, developed over the centuries, places enormous importance on clarity. Comprehensive documentation of data, computer code, experimental methods and results must be kept. These records must be made freely available to other scientists, who will then be able to repeat the experimentation and tease out any flaws or false conclusions. These scientists could well be rivals, perhaps even hostile towards the hypothesis, but all to the good – if the hypothesis is sound, it will stand up to any amount of critical examination. In fact, this sort of unremitting examination will fine-tune and strengthen it enormously. The formal way in which the review process is done before a scientific paper can be published is known as peer review, but the review of science is open to all and is never concluded.
There are two further key words in the conduct of science. The first word is scepticism. London’s Royal Society has the motto ‘Nullius in verba’, roughly meaning ‘take nobody’s word for it’. The physicist Richard Feynman put it equally starkly: ‘It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong’.
Scientific truth is based, therefore, not on the opinion of those in authority or ‘experts’ in the field, even if they all agree 100%. No, it is based solely on the evidence. Nothing else matters – at all.
A scientific theory is never ‘proved’. It is, at best, generally accepted because it has survived everything that has been thrown at it – and everything should be thrown at it, first and foremost by the original proponents of the hypothesis and then by anybody else who cares to pick holes in it. Criticism should not only be tolerated, it should be welcomed.
The second key word is reproducibility: this is the reason why all your notes, data, code, methodology and results should be clear, full, well-organised and open to scrutiny. If an experiment or observation of a natural phenomenon cannot be successfully reproduced, then it is worthless. It should be possible to replicate an experiment by merely following the method like a recipe. It follows that if you lose, delete or otherwise fail to keep the full documentation; if you change, edit or weed data without sound and clearly explained reasons; or if you don’t produce it promptly and fully on request, then your science – if it ever was science – has basically ceased to exist.
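To make the ‘recipe’ point concrete, here is a minimal, purely illustrative Python sketch. If the method, the parameters and even the random seed are all written down, anyone who follows the recipe gets the identical result; lose any part of the recipe and the result can no longer be checked.

```python
# Illustration of reproducibility: the same documented 'recipe' (method,
# parameters and random seed) yields the same result wherever it is run.
import random

def experiment(seed, n_trials=1000):
    """A stand-in 'experiment'; everything needed to re-run it is a parameter."""
    rng = random.Random(seed)  # the seed is documented, so the run is repeatable
    return sum(rng.random() for _ in range(n_trials)) / n_trials

original_run = experiment(seed=42)
replication = experiment(seed=42)   # an independent lab follows the same recipe
assert original_run == replication  # identical, down to the last digit
```

The experiment itself is a stand-in, but the principle scales: real journals increasingly ask for exactly this kind of complete, re-runnable record.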
This is powerful, rigorous stuff; the idea behind it is to elevate science to a scrupulously objective search for the truth. But the reason that these safeguards were found to be necessary in the first place highlights their obvious weakness; that weakness is commonly known as people.
* * *
Yes, the problem is that scientists tend to be people and, as we all know to our cost, people can’t be trusted. It isn’t always that people intend to be dishonest, often it just sort of happens. Very often, in fact, the person fooled most comprehensively by someone’s dishonesty is that person him or herself.
Why this should be so is an interesting question, but perhaps the best short answer is that we’re suckers for tidiness and wishful thinking. Once we get on one line of thinking, we find it harder and harder to jump off; we want a comprehensive answer that we can feel comfortable with – and once we’ve invested in that mentally, emotionally and possibly financially, we really, really don’t want to be told we’ve been wrong all along. The more the contradictions build up and more obviously flawed our beliefs are shown to be, the more any questioning of them will be ignored or waved angrily away.
So science is working against some very powerful and ingrained instincts. The scientific method is a tribute to our rational mind and the fact that we know we can’t trust ourselves. But let’s not for a moment think it’s a solid bulwark against the irrational. We should certainly trust science – it’s the best thing by far we have to hold on to. But we should never, even for a moment, unquestioningly trust scientists.
Perhaps you are thinking that scientists are a breed apart – rational, precise and highly trained to be objective. Surely only a passion for seeking the truth could propel them into this particular, not noticeably well-paid, career path. Well, perhaps to some extent that’s true, but the fact remains that in the modern world science is a career path, much like any other. Indeed, a career in science arguably carries with it more danger of deception and self-deception than most others.
Because, by its very nature, science can see years of work – perhaps an entire working career – nullified in the light of new findings. Just like that. The heartbreak of finding yourself up a long, involved and lovingly decorated blind-alley must be more or less intolerable. How much more psychologically comfortable to stand firm in defence of what you have come to believe, to find holes in the mounting evidence against you and to refuse to see the gaping holes in your own; to vilify and misrepresent your opponents; to use your position to undermine their standing and wreck their careers. After all, these people are not only wrong but clearly malicious – they could even be described as ‘anti-science’. Then it’s but a short step to start tweaking your results to reflect ‘reality’ more clearly, at first just here and there but then necessarily more and more, until it becomes an almost routine procedure to ‘correct’ the results.
The most common way of faking results falls perhaps a little short of outright fraud, since the perpetrator may have fooled himself that he’s doing nothing wrong; it’s known as confirmation bias. The way confirmation bias works is simply that the data to be used is selected – perhaps by excluding ‘atypical’ data or ‘outliers’, perhaps by carefully choosing the start and finish points on a graph, or by ‘cherry-picking’ data which is ‘particularly clear and unambiguous’. What is happening is that the entire scientific process is subtly turned on its head; the assumptions of the scientist have become the starting point and the data are merely used to illustrate what the scientist believes to be the truth. A vicious circle is set in motion in which the corrupted data lead to more and more certainty of ‘what the evidence shows’ and increasing temptation to disregard what disagrees with the ‘known facts’.
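A trivial Python illustration of the mechanism – the numbers are invented, but the principle is exactly as described: discard whatever disagrees with what you already ‘know’, and the data will obediently confirm it.

```python
# How 'cherry-picking' corrupts a result: discarding points that disagree with
# what you already believe drags the answer towards your assumption.
# The measurements are invented purely for illustration.

def mean(xs):
    return sum(xs) / len(xs)

measurements = [9.8, 10.1, 9.9, 14.2, 10.0, 13.8, 9.7, 14.5]

honest_result = mean(measurements)  # the data as a whole: mean of 11.5

# The biased scientist 'knows' the true value is about 10, so anything far
# from it is dismissed as an 'atypical outlier'...
expected = 10.0
kept = [x for x in measurements if abs(x - expected) < 1.0]
biased_result = mean(kept)  # the 'cleaned' data merely echo the assumption: 9.9

print(round(honest_result, 2))
print(round(biased_result, 2))
```

Notice that nothing here is an outright lie: every number reported was genuinely measured. The deception lies entirely in the selection.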
These dangers have always existed in science, but today they are stronger than ever. Enormous pressure is on to publish papers, to get results – the right results – and to get them quickly. The situation has been summed up as ‘publish or perish’; your funding, and so your career and mortgage payments, depend on getting those papers out. Partly because of this pressure, the peer review process can often be little more than a rubber stamp, in which the pool of reviewers is small, they all know each other and they all share very similar preconceptions. You scratch my back, I’ll scratch yours – I certainly won’t make your life harder by asking difficult questions. Mavericks and trouble-makers have long since been excluded from the peer review process – they’ve probably been excluded from any job in the field at all.
And inevitably, unsurprisingly, the elite are in on the act. Back in 1961 President Eisenhower gave a famous address in which he introduced the idea of the ‘military-industrial complex’, something that today might equate to the ‘deep state’. His speech contained the following passage:
“Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields… ,” Eisenhower warned. “Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity.”
In one way, the President was warning about scientists getting too much influence, but he was also saying that government was becoming the main paymaster of an increasingly expensive activity – and that government had certain priorities and expectations of what research was worthwhile and what results were desirable. It probably wasn’t a good career move to come up with results that were inconvenient to the powers that be. We can reasonably expect that this situation has become ever more pronounced over the intervening 56 years. It applies not only to governments, but also to multinational companies.
* * *
So perhaps we shouldn’t be too surprised to find some problems in the conduct of science nowadays. However, stories like the ones below (and they’re just a few examples) still come as something of a shock:
Science is in a reproducibility crisis: How do we resolve it? phys.org: 20/09/2013
1,500 scientists lift the lid on reproducibility Nature: 28/07/2016
Most scientists ‘can’t replicate studies by their peers’ BBC: 22/02/2017
If anything, these articles tend to downplay the problem. If something is not reproducible, it’s not science, so what we are being told is that something like 50% of what’s passed off as science is basically little better than guesswork. This is surely astonishing; the predictions of a medieval astrologer might be as accurate as half of what we accept today as science.
The problem was becoming more and more apparent in the years before Phys Org asked, in 2013, what had to be done. In the years since, it’s clear that nothing much has been done. The very worst offenders tend to be in disciplines like psychology and economics, which might be described, generously, as ‘soft’ science, but every area of science is affected. For example, a 2012 paper in Nature found that the researchers could reproduce only 6 of a total of 53 “landmark” cancer studies. That’s a reproducibility rate of just 11%.
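For anyone who wants to check, the arithmetic behind that figure is simple enough:

```python
# 6 'landmark' studies reproduced out of 53 attempted (the Nature figure above)
reproduced, total = 6, 53
rate = reproduced / total
print(f"{rate:.0%}")  # roughly 11% reproducible; nearly 89% could not be confirmed
```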
The percentage of scientific papers which are not really scientific at all is, of course, hard to pin down exactly. However, many scientists put the figure as high as 50%.
We could certainly turn this around and say that 50% of science is rigorous and dependable. But which 50%? How do we separate the sheep from the goats without conducting the sort of painstaking investigation described in the above BBC article on every single scientific paper published over many decades? As things stand, nearly all science must be regarded as suspect – and this is an absolutely disastrous situation. The pseudo-science (and it’s hard to call it anything else) has already escaped into the wild and can’t now easily be tracked down and identified as useless. It now goes unquestioned and is often used as a basis for further research.
This sort of pseudo-science is not only useless, but often dangerous. There have been a number of fairly high-profile cases of fraud in recent years. A good example is this one concerning Chronic Fatigue Syndrome. A sufferer asked for the data behind an influential 2011 study claiming the illness could be successfully treated by exercise and counselling. The researchers were extremely reluctant to hand over their data, but when they were finally forced to do so, it revealed that they were counting as ‘recovered’ many patients who clearly weren’t. On the basis of this piece of quackery, many thousands of people were told their symptoms were ‘all in their head’ and subjected to exhausting, pointless and possibly damaging treatment.
Such cases are clearly the tip of a very big iceberg and usually only come to light due to the determination and persistence of one or more concerned individuals. Most stay undetected – and how much more difficult to find the truth when the pseudo-science is backed and protected by really serious players or has some political importance? Combine science with political activism and a truly toxic brew emerges.
It should be said that it’s not at all clear that the situation today is much worse than it was in the past; as with everything else, the internet ensures that there’s far more chance of science’s skeletons coming rattling out of cupboards. But this, perhaps, is cold comfort; it merely means that the problem encompasses even more science going back over even more years.
To a large extent, Western civilisation is built on science; we have precious little else that we can rely on. But as people begin to wake up to the fact that everything in the scientific garden is not rosy, so that trust is in danger of crumbling. The pseudo-science practised by qualified scientists is far more corrosive than straight-out pseudo-science such as intelligent design or chakra balancing. The latter are easily exposed and debunked; the former is clothed in the garments and phraseology of real science and its practitioners remain respected, protected and unquestioned. But it’s these charlatans who threaten to pull down the whole edifice of science.
So what can be done? The idea that science has a self-correcting mechanism in the form of peer review, as some still claim, is clearly far too sanguine; peer review has itself been badly undermined. However, it is a hopeful sign that scientists themselves have recognised the problem and are attempting to tackle it. The journal Nature, for example, has introduced a ‘reproducibility checklist’ to try to improve standards. Websites like Retraction Watch highlight papers which have been published and then withdrawn, for whatever reason. Journals are now beginning to demand that complete data and methods be supplied with papers submitted to them.
This last move will perhaps be the most effective because, in the end, a return to the full rigour of the scientific method is the only solution. Sunlight, as they say, is a great disinfectant; total openness must be a prerequisite in science with no excuses about privacy or copyright. If something needs to be kept secret for security or commercial reasons, then fair enough, but we shouldn’t be asked to accept it as scientifically reliable. As the example above about CFS demonstrates, secrecy and obstructiveness should be a flashing red light alerting us that something very fishy is being hidden.
Even more difficult to achieve will be some insulation from funding pressures. Gone are the days of the wealthy gentleman amateur, but scientists must be given the room to achieve nothing of much value and to go off on fairly eccentric paths without risking their careers. Universities are the traditional route to this, but they too are more and more concerned with immediate results and profit.
We must fervently hope that science is still salvageable, because the alternative to science is magic and we’ve been in that place before. We really can’t afford to slip back into magical thinking, because at the end of that path lies poverty and ignorance.
* * *
But in the meantime, let’s come back to the question, ‘who on Earth should we believe?’ It’s tempting to answer that question by saying ‘nobody’. Perhaps, after all is said and done, we should only believe what we have personally investigated to our full satisfaction.
However, that proposition is obviously absurd. We can’t possibly become experts in every field and it would be impossible to live our lives doubting everything that we don’t fully understand. I get in my car and I expect the brakes to work. I arrive at the office and I don’t cower in dread of concrete beams crashing down on my head. I go to the dentist and allow someone to put a screeching drill in my mouth that could irreparably damage my teeth and rip my cheeks to shreds.
The reason I do these things is because I have no reason to doubt the designers, engineers, architects and medical men in society. I trust that they have been properly trained and that what they do is properly regulated; I’ve also experienced these things a thousand times before with no ill consequences. Stories of collapsing office buildings and maniac dentists are few and far between.
In short, I live in a state of rational ignorance. That means I’ve made judgements of trust that I rely on every day – and apart from very occasional unpleasant surprises, I am vindicated in doing so. Rational ignorance is certainly the most rational approach; in fact, I couldn’t possibly operate in a modern society in any other way. However, like anything else involving trust, rational ignorance is a habit we should be wary of. Give unconditional trust and you won’t have to wait very long before someone unscrupulous abuses it.
The answer, as I see it, is to keep our antennae attuned for anything that doesn’t seem quite right. Our instincts don’t always lead us in the right direction, but they can very often alert us that something’s amiss, even when we can’t quite put our finger on what it is. That’s step one; once our antennae twitch, we should take the next step, jettison our rational ignorance and focus our intellect on the issue.
Notice I say focus our intellect. There’s absolutely no use in following our ‘gut feelings’ off into the blue – the very next stop will be a conspiracy theory. What it does mean is finding out about the issue calmly and logically and looking for evidence to support any and all assertions that we’re not happy with. It also probably means asking a lot of questions. We can’t inform ourselves about everything, but it should be our right, even our duty, to inform ourselves about anything we suspect is incorrect. We should also be very pleased if we find that our suspicions were in fact wrong and that everything is as it’s claimed to be.
As I said above, when it comes to science, lack of clarity should be a glaring warning sign that something is wrong. But really that applies to everything we are asked to believe in. Obfuscation and angry hand-waving in the face of reasonable questions can only really mean that the Emperor is naked. As far as I’m concerned, if someone fails to give you a reasonable answer, you should rephrase your question in case you have been misunderstood; if you still receive no answer – or the answer to a question you never asked, you have every right to draw your own conclusions.
One common excuse for not providing information and answering questions is that nobody outside the field could possibly understand it. I disagree with this most strongly. I have this belief: any concept that is well-founded can be explained to a reasonably intelligent lay-person. The specialist probably won’t need to go into the nuts and bolts of the methodology and the statistical techniques employed, but if need be he should be able to do just that, clearly and plainly. I consider that explaining something step by step is a useful way of testing your own understanding of what you’ve done. I would suggest that if an expert finds he can’t explain something to an outsider, then he should go away and ask himself why not.
* * *
By way of conclusion, I would say that the crisis of belief is perhaps a positive rather than negative phenomenon. Being aware that the truth is a very elusive commodity is perhaps an uncomfortable position to be in, but it’s infinitely better than falling for every snake-oil salesman who comes along.
Our hike through the jungle of deception should be fascinating and fun rather than frustrating or frightening. Perhaps our best route map is the application of something like the scientific method to our own judgements. We should get into the habit of asking questions – we’re in a better position than ever before to check up on things. We should also constantly question our own assumptions, to be certain they aren’t based on wishful thinking. We should be highly intolerant of the arrogant shrug-off and have confidence that we are capable of analysing and judging the worth of what we are told.
Trust is something that has to be earned and it should always remain conditional. That’s a message that still needs to be understood in certain quarters and it’s something we shouldn’t be shy of insisting on.