I’ve just been reading some news articles about ‘useless’ degrees. They include a two-year foundation degree in heavy metal (the music, that is) at New College Nottingham, a BA in Comedy Studies (though technically it appears to be listed as a BA in ‘Performance’), Enigmatology (i.e. puzzle setting and solving – though only one person ever graduated from the one course offered, at Indiana University in the 1970s), and assorted qualifications in areas such as puppetry, parapsychology and Viking re-enactment.
The thing I’ve been asking myself is this: in the 1980s, Thatcherite policies demanded that degrees be ‘relevant’ to career choices and employers’ demands for skills. Various degree courses disappeared – language degrees in Farsi and some African languages, for example – ironically enough leading to later problems within the intelligence community when groups speaking some of those languages came to be considered security threats. So why has the pendulum swung back towards the apparently irrelevant?
However, the current spread of ‘weird’ degrees isn’t all that strange. We live in a knowledge-based economy, and in the UK at least much of our economic output comprises cultural rather than physical goods. So having a supply of graduates with specific expertise in science fiction, horror, comedy, different genres of music and all the rest is very likely a good and useful thing for the economy as a whole, quite apart from the ‘generic’ skills such courses build in academic research and practice, and the interest they hold for students. Of course, these things don’t need to be learned through degree-level study; there are many other forms of learning. But there’s also no reason why they shouldn’t be degree subjects if enough people want to learn in that format.
And if that isn’t a good enough argument, bear in mind that the University of Derby’s MA in Horror and Transgression, which covers diverse forms of film and literature alongside the works of writers such as Nietzsche, Bataille, Foucault, Kristeva and Deleuze, and transgressive writers such as Burroughs, Ballard and Burgess, lists a number of possible post-degree careers. Among these is public service administration. Which is as clear a sign as I can think of about what the experience of public service administration will actually be like in the future.
[Edited to add: of course there will always be a need for degrees in traditional subjects – medicine, engineering, maths, history, biology, languages, computing and the rest. But consider the needs of, say, a computer gaming company or a movie production company that needs to find a new and credible way to develop a fantasy, sci-fi or horror conceptual world. Consider the needs of, in fact, almost any company looking for its ‘next big thing’. The people with the design and production skills, etc., are clearly necessary to that process. Some of them may even need to make puppets, re-enact Viking dramas, tell good jokes or write and perform music to get to the point at which a product is made and marketed. But no product does well unless it links with human fantasies. Successful products also need their dreamers. Postmodernity (are we still in a ‘postmodern society’?) has sometimes been described as the society in which the old modernist order of narratives has been corrupted, and that’s a reasonable if overly general assessment. But that makes the ability to weave old narratives and create new ones all the more significant in contemporary society. Hence the need for studies that appear niche, marginal or just plain odd.]
Yes, I’ve been working on other stuff and neglecting this blog. But yes, I also have some stuff in hand I’ll be sharing in the next few weeks. Meanwhile, here’s a random memory that occurred to me while I was working in the garden (making a new garden gate, as it happens, but I don’t think that’s significant).
In 1980, I think it was, I went on a trip around Europe. I met a couple of Americans in France who were on a European tour. We exchanged addresses, and about a month later they pitched up at my house in the UK. We spent a fair bit of time talking, and in the course of this they explained something that hadn’t occurred to me. I’d assumed they were visiting Europe ‘just because’ – but it turned out they had a specific reason. They were convinced nuclear war was likely within the next year or two, and wanted to see the major centres and museums of Europe before they were destroyed. ‘We’re visiting,’ as one of them put it, ‘the theatre of war for World War Three. We want to see it before it’s a nuclear wasteland.’
I’d grown up during the early phase of the Cold War. I still remember the leaflet my parents had on how to build a nuclear fallout shelter (they never did), and my childhood was shaped by concepts such as the Four-Minute Warning. We’d watched the news during the Cuban Missile Crisis, understanding for the first time not only that war could be truly global but that the first we might know about it would be a mushroom cloud.
However, for these Americans 1980 was the year they’d come to grips with all this. Détente had broken down, the Soviet Union had invaded Afghanistan, and NATO, with Carter’s backing, had agreed to deploy Pershing II missiles in Europe (Carter had also pulled the US team out of the Moscow Olympics). We were still three years away from Reagan branding the Soviet Union the ‘Evil Empire’ and announcing the Strategic Defense Initiative. But this pair had read the political situation and concluded there was a more than small chance that 1980 would be the last year they’d be able to see the major European museums.
I’m glad they were wrong. But the ‘what if’ and the idea of a ‘last chance to see’ may, one day, become a story I’ll want to write.
I know, I’ve been quiet on here for a long time. That happens when I’m busy, and I’ve been busy in a bunch of different ways. Some of them have been to do with looking at theories of customer service (yes, they exist) and others have revolved around me expecting that every time the phone rings or I get a text, it will be a message that X (a family member) has been self-harming again, and we have to go round the cycle of getting them stable one more time.
But in between the professional and personal stuff, I’ve lately been bemused by the morality and values projected by adverts. I always have been, really, but there are a few recent ones that stick in my mind. For example, I’ve learned that:
- it’s considered acceptable in crowded urban environments to drive fast and perform stunts in your car while waving a coloured smoke flare out of your window. Fortunately I live near a chandler’s, so I can probably just get some marine flares for the purpose.
- if you don’t want to go home after a holiday, the travel company won’t mind if you sabotage the coach (I don’t know if their tolerance extends to sabotaging their aircraft, but you can always argue they’ve established the principle with the coach and you’re just being creative about applying it).
- if you buy a packet of crisps (potato chips, if you’re reading this in the US) the crisps will love you forever. Even if you eat them.
- otherwise, if you need to buy love it’s available in all good department stores at around £58 for a small bottle.
And apart from that, in relation to the ongoing horsemeat (and donkey meat) scandal it occurs to me that this is far from the first time companies have, let’s say, allowed their sources of meat products to be compromised. It happened with some regularity in Victorian times and was epitomised by Dickens in the character of a pieman who used cats “for beefsteak, veal and kidney, ‘cording to the demand” (in Pickwick Papers), and also of course in the story of Sweeney Todd, whose customers ended up in Mrs Lovett’s pies.
OK, I’m off to have an emotional relationship with a small packet of low-calorie potato snacks. If it’s as exciting as I expect, I may be some time…
You may, or may not, know that periodically I write about customer service issues – and on the whole I try to write properly researched stuff rather than fiction (though I have moments where some of the materials I have to read on the topic seem to come from a world I don’t recognise). The latest round of this relates to something known by the acronym CRM: ‘customer relationship management’.
The idea behind CRM is that if a company has a customer, it has an ongoing relationship with that customer which, over the course of time, may result in the customer purchasing more services or products. Lots of business gurus have been advancing this idea for a number of years now.
But I’ve been taking soundings from friends about this idea and, while the phraseology is mine rather than theirs, I wonder if customer relationship management has been undermined by the postmodern economy and dropped into a black hole.
Consider the following examples:
- someone who used to work in a bank recalling a speech by the CEO that was supposed to motivate staff, though to do what, exactly, wasn’t entirely clear. What he said, apparently, was that the important thing was that the customer should ‘have the perception’ that they’d been served well. That phrase could mean a number of things, but in context it meant that customer service is about perception management: a customer believing they’d been served well while the underlying reality might be somewhat different. After that, the bank started to get itself into a bunch of problems that, to be fair, paled into insignificance after the onset of the financial crisis in 2008.
- the comment from someone who knows about call centres that the basic problem is the drive to cut the cost per call. The net result, in their understanding, is that call centres have become increasingly overloaded and offer a poorer service than they did, say, five years ago, because offering a good service costs more. I don’t know if this is universally true. But the last time I actually called one, earlier this week, it involved a 25-minute wait with alternating adverts and a music track that appeared to have got stuck. The very fact that I was waiting because ‘all our agents are busy’ didn’t exactly lend credibility to the services being advertised between the Stockhausen-type sounds produced by what was probably supposed to be a pop song of some description. The guy who answered the phone was polite and efficient. But he then had to admit that while he’d proposed a solution to my problem (which I tried in real time, and it worked, fortunately), he wasn’t actually sure it would work, because it only works occasionally.
- someone who works in a public service across two sites, one urban and one rural. The customer setup in the urban area is highly automated and enables people to get in and out of the place quickly. The setup in the rural area is just as highly automated, which means almost everyone (it being a more elderly population, etc.) needs a staff member on hand to make the automated systems work for them. It would be faster to decommission the systems and let staff deal with customers directly – and it would result in fewer frustrated customers.
I can see the value of CRM if you’re selling, say, large and complex things that cost millions or billions of pounds, because these projects take a long time and the context changes as the project goes along. If you’re building a dam or making turbines for a power station, stuff happens along the way and you need to know that you have to alter your designs because tab A on the original plans won’t fit into slot B when you come to install the thing, since slot B has been replaced by gizmo C. Or whatever.
But for most ordinary everyday purposes, the idea of customers who stay with a particular company and want a ‘relationship’ with that company doesn’t seem to fly any more. People ideally want stuff that doesn’t go wrong and doesn’t need much customer service, and if they do have to make that phone call then a good proportion of the time it will be because something’s broken and needs fixing – in other words, because they need help now and something’s frustrated them.
Meanwhile, the financial realities of the postmodern economy are such that reducing costs and increasing productivity are paramount, and if that means cutting corners on customer service – often perceived as not being a ‘profit centre’ for the company – then so be it.
Are my friends right? Is my analysis of this credible? Have business gurus got this one wrong?
Just had this link forwarded to me: an article in Techdirt, relating to discussions taking place within the EU about internet censorship. Déjà vu, I think. Does anyone outside of a few law enforcement officials seriously think these proposals are (a) acceptable to the public, (b) workable and (c) likely to be effective?
Just a quick thing to throw up here, following my previous post on music: this, from the BBC. A rock and alternative music festival in Kabul – which, given the range of problems and the level of conflict to be found in Afghanistan, is about the least likely place you might expect to find one. But then, maybe such a festival is important precisely for those reasons? Most of the bands playing are from the Kabul area, and it’s equally impressive that in that context there are people who have the resources – instruments, rehearsal space, time – to pull bands together at all and play their music. Marvellous.
And it’s the second festival of alternative music that’s been held there.
If you don’t live in the UK, you probably won’t appreciate what a major step this is: the BBC have just reported that the red tape around the licensing of live music has been lifted from small venues (capacity up to 200). This means some 13,000 pubs and clubs no longer need to jump through difficult bureaucratic hoops and pay substantial licence fees in order to host live music.
It’s a big deal for small bands and people trying to get known in the music business, because when the current regime was introduced it resulted in many thousands of smaller venues closing their doors to live music. It also, of course, made it more difficult for more ‘experimental’ artists to get going – because who was ever going to take a chance on putting them on stage in a large venue that needed to recoup substantial licence costs? And by ‘experimental’ in this context I don’t just mean weird electronic music and suchlike, but a range of styles and performers trying to do something a little bit unusual and distinctive with their music, and trying to see if there’s an audience out there for them.
So now a lot of musicians, aspiring bands and so on can return to the traditional route of building up a loyal following in their home town, and then around the country, going on the road to build their ‘tribe’ of followers a little at a time. And there will be, I hope, a lot more live music in a lot more styles and genres somewhere near you.
So, for once, I can report some good news and a sensible government policy!
Here’s a slightly weird story.
Philip Roth – who’s pretty well-known as an author – wrote a novel, The Human Stain, published in 2000. No, I haven’t read it, but that’s not important.
Wikipedia has a page about the novel – not surprising, because it has pages on many novels. The page was created in 2002 by a contributor and has been revised and added to on many occasions since. It mentioned speculation by various critics that the principal character was based on the life of the literary critic Anatole Broyard. Roth approached Wikipedia to offer a correction: despite the critics’ speculations, he’d drawn the events surrounding the principal character, and elements of the character himself, from the experiences of his friend Melvin Tumin.
The Wikipedia administrators refused to amend the entry on the basis that there was no secondary source to support the claim and that Roth himself ‘was not a credible source’.
The entry has now been amended to reflect this exchange, but it raises interesting questions.
To what extent is any author a ‘credible source’ when discussing their own work? I’m not talking here about slips of memory or deliberately misleading statements – though those can happen – but about the extent to which any literary work draws on material from a writer’s unconscious and perhaps touches on matters of which the writer was not consciously aware. I can give an old example from a piece I wrote and performed as a student: I was pleased with it, but the feedback I got after the event was that it was an interesting retelling of a Biblical story – one that wasn’t in my mind when I wrote it, and that I’d never consciously paid attention to since religious education classes in primary school. (I might add that I never kept a copy of the piece and couldn’t now tell you which Biblical story it was.)
What (or who) constitutes a ‘credible source’ anyway for a work of imagination? And with the passage of time, is it really possible for anyone – even with access to an author’s personal manuscripts and notes – to tell what was really in their mind when they wrote something? Does it even matter? Meaning is context-bound, after all: if the book survives and is read years later, does the meaning it carried in the context of its own time even remain intelligible?
As you may have seen in previous blog posts, I recently self-published a short collection of horror stories. And I’d hate to think what kinds of stuff people would find in there that I wasn’t aware I was writing, because a lot of my stories start from a single mental image, a fragment of life, or as much of a dream as I can remember when I wake up, and I try to re-imagine their contexts and consequences.
If you want to read the whole BBC story, here’s a link.