A few days ago, I celebrated a personal triumph. On February 28, the registry office at University College London rolled out its latest monthly list of graduates and my PhD in Information Studies (Publishing) took full official effect. Meet Dr Greenberg: author, editor, scholar, teacher, and occasional blogger.
Although insignificant in the bigger scheme of things, the triumph counts as major news on this blog, started four years ago to help me lithify stray thoughts about a new scholarly identity.
The odd thing is that so far, I have posted hardly anything about the doctoral thesis. There has been some reflection about a previous life as a journalist, and news about other projects; but when it came to the thesis itself, I felt protective. Until it was written, I was not entirely sure what it all meant. The final title, ‘The Hidden Art of Editing’, only emerged in the last few months.
Hopefully, the work will not remain hidden any longer. One book is already contracted with Peter Lang, a set of fascinating interviews with practitioners. Another is now in the proposal stage: a countervailing view of editing as a way of opening up the text, rather than closing it down.
There is a lot of mystique about doing a PhD. Often, that is not helpful. It is stubborn persistence and hard-headed planning that allow a student to come up with the goods, along with steady, predictable support from the host institution. But there is something magical about the doctoral journey and its strange intensity.
Perhaps all major projects have this quality; the difference is that the doctoral thesis, at its best, must combine imagination and solidity: originality, and the stepping stones that allow the reader to retrace one’s steps.
In my case, the journey was taken later in life, alongside a full-time job. And by the nature of its subject matter, the research marked not just the start of a new professional life but the culmination of a previous one. On one level, there is a surprising constancy in the concerns pursued over the years. On another, my thinking has gone through a complete transformation.
Meanwhile, life happens. During the seven years since the start I began a new career, faced the end of a marriage, said goodbye to a parent and fought a life-threatening illness. It is not surprising that finishing became an act of defiance. It feels a big deal for the whole family too. Only a fraction of the extended family has a first degree, and I am only the second person with a doctorate; the first was won nearly 40 years ago.
Although personally I have met with only warm support and affirmation, our wider culture’s attitude to academic achievement is mixed, and sometimes contradictory. People demand qualifications from professionals as a mark of worthiness and trust, but such demands are also sometimes attacked as ‘credentialism’. When a policy advisor was found to have lied about having a PhD, participants in the public debate were obliged to offer a basic defence of why that mattered.
For me, the PhD was never just about the qualification. It was about making discoveries that could be recognised by others; joining a conversation that might survive the tests of time.
It also feels valuable for its own sake. The last year of doctoral work, during a wonderful period of leave, provided dreamy freedom, perhaps the first since those long exam-less summers of girlhood. Not dreaming as in ‘ivory tower’, but dreaming as discovery. After a lifetime of short-term demands I was able to think things through; things that really mattered to me. The kind of dreaming that puts your feet on firmer ground.
The kind of dreaming that changes everything.
I came across an interesting debate last week about whether the digital humanities (DH) could be deemed responsible for a number of unpleasant trends in higher education, because of its supposed pro-industry bias – which involved, in turn, a debate about definitions of the digital humanities.
It is a recurring debate, but the latest round started with a blog post by Daniel Allington, which was answered by Stephen Ramsay. This led to more interesting posts and links, such as this and this, plus a lively discussion in the comments thread of the original post.
I do not know all the ins and outs of the debate, nor the people involved – I take a strong interest in DH but my own work is currently at one remove. However, I recognise some patterns in the argument which it seems helpful to share, although doing so is possibly foolhardy, given the strength of feeling; so I apologise in advance for any errors or misunderstandings.
The main pattern is a tension between disciplines that can be described as practice-led and those whose members perceive themselves as the champions of ‘theory’. I had to cut my teeth on this one around eight years ago, when first changing careers.
I discovered that on top of the usual snobberies about tomorrow’s fish and chips, journalism had difficulty cutting it as a university subject if it took practice seriously because it was deemed to lack a theoretical framework, and because practitioners were perceived as ciphers representing industry interests. This perception framed discussion about ‘skills’, which were often belittled in importance and regarded suspiciously as the thin end of an industry wedge.
Trying to make sense of it all, I wrote a paper which attributed some of the tensions to the fact that Journalism Education’s institutional host – in the UK, most frequently Cultural Studies or related disciplines – had historically defined itself against journalism, setting itself the task of deconstructing practices and the tacit theories believed to lie behind them. In response…
practitioner-academics have often fought on two fronts, arguing with industry for more theoretical context and with academic colleagues for more practice-based content.
The paper also noted that while universities were understandably happy to profit from the popularity of practical media courses, the classroom experience would become a cynical exercise unless the interpretive framework could allow for a positive vision of the practice. On that occasion it was the ‘theory’ bods who were taking advantage of the subject’s popularity (because they had seniority – i.e. because they could) and not the ‘practitioners’.
I am myself in the happy circumstance of teaching in a creative writing programme. However, although literature attracts more respect than journalism, it turned out that creative writing as a discipline came in for more or less the same stick. Writers in the academy also find themselves fighting on two fronts, to make space for an approach that conceives practice ‘not as a branch of some other subject but as a thing in itself; not a corpus of knowledge, but a living experience.’*
The parallels with DH, a hands-on approach to scholarship that is also accused of being the thin end of the industry wedge, seem to be worth exploring.
As it happens, doing theory is a practice as well, with its own institutions and ethos suitable for analysis and interpretation, and its own fight for scarce resources. And as is proper for critical engagement and debate, people have different ideas about what theory is. What is maddening is when a party in the debate recognises only one definition of theory as theory.
I hope to post more on such matters on another occasion, and limit further comment now to a particular contribution in the latest DH debate, which had me sitting bolt upright:
Defenders of the digital humanities might ask themselves, with a bit more commitment than seems evident in public discourse, where such unwelcome associations keep coming from — apart, that is, from an utterly fantasized pure and/or personal malice — if they are really so thoroughly and consistently mistaken.
Lack of commitment and unwarranted suspicions – strong accusations indeed. But the fantasy of malice seems to work both ways, because the comment only makes sense if one assumes bad faith on the part of the opponent. There is also something a little creepy about holding the party that is the subject of an attack responsible for that attack.
The danger here is of taking the ‘hermeneutics of suspicion’ to such an extreme that one ends up telling one’s opponent what she or he really means. Because then, the debate really does get stuck in an Escher drawing.
* Myers, D. G. (1994) ‘The Lesson of Creative Writing’s History’, AWP Chronicle 26 (February): 1, 12-14
A nervous morning today, braving the public for the first time at the London Book Fair. Every year there are dozens of fascinating talks, panels and seminars. Last year, I noticed that although the place was swarming with lecturers and students, there didn’t seem to be any talks specifically about the university and its place in publishing. So I proposed one – it seemed like a good idea at the time.
The workshop panel will introduce some examples of innovation in this field and put them into context. Then participants will have a chance to talk together and respond to the question, ‘How could a university help you?’ – and, for those already based at a university, ‘What would you want to offer to outside organisations, either by way of research or practice?’ We are also curious what people think about the new generation of university imprints, and how they might distinguish themselves. We will collate the responses and disseminate them. The hashtags are #HEpublish and, of course, #lbf13.
The panel includes a colleague who is starting up a new imprint, Fincham Press, at the University of Roehampton Department of English and Creative Writing, a colleague from UCL’s MA Publishing programme, and a researcher at UCL who is running the open access Ubiquity Press. The workshop is supported by the National Association of Writers in Education (NAWE).
My own short presentation considers the invisible support that universities provide to publishing and peer networks; recaps the role of universities in book history; and takes some examples from the classroom that illustrate how a writing degree can help look at a manuscript from the inside out.
Updated March 7
The Times Higher Education magazine has quoted my colleague Dr Louise Tondeur in an article, and has mentioned the conference she is organising this April for people interested in the practice-led disciplines in higher education.
The conference is being held at the University of Roehampton on April 11–12, under the auspices of ReWrite, the Centre for Research in Creative and Professional Writing. The title is ‘Practice, Process and Paradox: Creativity and the Academy’.
To declare an interest, I will be among the contributors, talking about ‘The Poetics of Editing’.
A Twitter colleague put out a call for help recently:
I know how he feels – I had to clear my diary for at least a week to read The Rhetoric of Motives.** It took a long period of entry, and then of re-entry to normal life. But it turned me into a fan of this quietly influential philosopher, of the ‘why-didn’t-I-know-about-it-years-ago’ type.
In response to the call, I had offered to provide a potted summary of Burke’s ideas, so here is a hard-won distillation. It is taken from a work-in-progress of my own about editing (a PhD thesis, and later a book with the working title: The Hidden Art) so in the unlikely case that anyone wants to use it, the usual caveats apply about referencing (him and me, as appropriate). Let me know if there’s anything that needs more explanation.
* * *
The Rhetoric of Motives is distinctive in developing the subject far beyond its traditional boundaries, towards a philosophy of rhetoric. It does so by exploring the ‘intermediate area of expression that is not wholly deliberate, yet not wholly unconscious’ (Burke, 1950: xiii) and charting a spectrum of persuasion, from the ‘bluntest quest of advantage’ to a pure form ‘that delights in the process of appeal for itself alone, without ulterior purpose’ (ibid: xiv).
In doing so, Burke helps to show ‘how a rhetorical motive is often present where it is not usually recognised, or thought to belong’. (xiii) As he elaborates, rhetoric has power not only over action but also over attitudes, when freedom to act is constrained for any reason. This ‘permits the application of rhetorical terms to purely poetic structures; the study of lyrical devices might be classed under the head of rhetoric, when these devices are considered for their power to induce or communicate states of mind to readers, even though the kinds of assent evoked have no overt, practical outcome.’ (50)
The work makes the case for the study of language in itself, as an example of ‘the autonomy of fields’; it is valuable methodologically ‘because it gives clear insight into some particular set of principles’, and is ‘helpful as a reaction against the excesses of extreme historicism’ (28). In Burke’s day, the two main competing frameworks from which autonomy was being declared were a reductionist pro-market ‘scientism’ on one hand, and reductionist historical materialism or Marxism on the other. But a similar declaration of independence can be made now from other frames of reference, for example the more totalising versions of post-modern constructivism, or from biological determinism.
Burke is giving permission to focus on language as a thing in itself, rather than focusing on its contextual aspects. But reflecting on his ideas, one sees that ‘context’ does not disappear – it is relocated inside the rhetorical process. Persuasion depends on motive, and motive within language is in no way obvious – context is vital to its meaning. As in a joke, the same words can have very different meanings, with or without animus, depending on how the tone and context are understood. (6) Burke writes:
A motive introduced in one work, where the context greatly modifies it and keeps it from being drastically itself, may lack such important modifications in the context of another work. The proportions of these modifications themselves are essential in defining the total motivation, which cannot, without misinterpretation, be reduced merely to the one ‘gist’, with all the rest viewed as mere concealment or ‘rationalization’ of it. (6)
Context includes the order of the thing being communicated; for example, the motive of a narrative can be indicated by the storyteller’s choice of ending, since ‘a history’s end is a formal way of proclaiming its essence or nature’. (13) It also includes a relationship between transient and permanent factors of appeal – the dimension of time – because ‘topical shifts make certain images more persuasive in one situation than another’.
Just as Walter Ong later underlines that the marks made in writing are not a representation of a thing itself, but the representation of an utterance about a thing (Ong, 1997), Burke identifies the ‘reflexive pattern’ of language, which is ‘not merely speech about things […] but speech about speech.’ (178)
Writing is not the product of thought but its dramatisation; it is an act of thought in itself. Symbols are not just reflections of the things being symbolised, or signs for them: ‘They are to a degree a transcending of the things symbolized. So, to say that man is a symbol-using animal is by the same token to say that he is a “transcending animal.” Thus, there is in language itself a motive force calling man to transcend the “state of nature” (that is, the order of motives that would prevail in a world without language).’ (192).
Another way of putting this is that although the world contains nonverbal actions, these actions also persuade by reason of their symbolic character:
Paper need not know the meaning of fire in order to burn. But in the ‘idea’ of fire there is a persuasive ingredient. By this route something of the rhetorical motive comes to lurk in every ‘meaning’, however purely scientific its pretensions. Wherever there is persuasion, there is rhetoric. And wherever there is ‘meaning,’ there is ‘persuasion’. (172)
The link between persuasion and rhetoric is clear. But what of the connection between persuasion and meaning? According to Burke, the link comes via identification. Persuasion is a kind of communication, and communication is by definition between distinct, different beings: ‘But difference is not felt merely as between this entity and that entity. Rather, it is felt realistically, as between this kind of entity and that kind of entity’ (177). [SG: my emphasis]
The motives for linguistic persuasion emerge out of this generic divisiveness, a formal sense of classification within humans that exists prior to any specific social, economic or gender divisions.
Such divisiveness allows for a process of identification, in both a positive and negative sense: ‘Partition provides terms; thereby it allows the parts to comment upon one another. But this “loving” relation allows also for the “fall” into terms antagonistic in their partiality, until dialectically resolved by reduction to “higher” terms.’ (140) It is not merely the differences between individuals and groups that drive them apart, it is also the elements they share, ‘since the same motives are capable of both eulogistic and dyslogistic naming’ (141).
Communication between kinds amounts to an abstract form of ‘courtship’, a communion of estranged entities that depends on the mystery of strangeness. Even if the communion is snapped by hatred, ‘it can be socially organized only by the building of a counter-continuity; hence the mystery of persuasion is not categorically abolished, it is transformed.’ (177)
For the speaker to ‘court’ the spoken-to continually, distance is necessary: ‘For if union is complete, what incentive can there be for appeal? Theoretically, there can be courtship only insofar as there is division.’ (271) Distance is created through the rhetorical technique of ‘interference’ or standoffishness, which has the sacrificial quality of denying or postponing union.
At one level identification depends on knowing the audience in a literal fashion. But there are also purely formal patterns in a text that ‘readily awaken an attitude of collaborative expectancy in us’, and therefore a more participatory role for the reader in the interpretation of meaning. Identification takes place when the listener has been persuaded to participate by formal means, based on a universal appeal. Commenting on classical texts about rhetorical techniques, Burke notes a reference…
…to that kind of elation wherein the audience feels as though it were not merely receiving, but were itself creatively participating in the poet’s or speaker’s assertion. Could we not say that, in such cases, the audience is exalted by the assertion because it has the feel of collaborating in the assertion? (58)
Burke goes further to draw parallels between the persuasion of an external audience (the preoccupation of traditional rhetoric) and a more internal, psychological process of identification: ‘You become your own audience when you become involved in subterfuges for presenting your own case to yourself in sympathetic terms.’ (39).
He acknowledges the multiplicity of potential meanings, especially those arising from historical traces. Since only the ‘ideas’ survive in relics of the past, there must be uncertainty about how a text can be interpreted and ‘the people who used it may have been quite aware of many other meanings subsumed in it, but not explicitly proclaimed […] because it was so obvious to them that it did not need mention.’ (110-11)
This raises problems for interpretive frameworks that depend on a concept of ‘unmasking’ the latent meanings lying behind images and symbols. If the human mind depends on the use of symbols, then ‘every aspect of his “reality” is likely to be seen through a fog of symbols. And not even the hard reality of basic economic facts is sufficient to pierce this symbolic veil’ (136).
This emphasis on symbols is suggestive of post-structuralism, and the literary critic Wayne Booth, in a 2001 collection of essays* about Burke, says that in some sense he can be understood as ‘the first full-fledged deconstructionist’.
But there are important differences: ‘Burke was distressed by any thinker who reduced all reality to language’ and expressed annoyance about deconstructionists ‘who, in Burke’s reading deny the plain fact, the hard substantive reality, that a child learns to distinguish real tastes before he or she learns any words for distinguishing tastes.’ (Booth, 2001: 198)
** Burke, Kenneth (1950) A Rhetoric of Motives, Berkeley: University of California Press (republished 1969)
* Booth, Wayne C. (2001) ‘The Many Voices of Kenneth Burke, Theologian and Prophet, as Revealed in His Letters to Me’, in Henderson, Greig and David Cratis Williams (eds), Unending Conversations: New Writings by and about Kenneth Burke, Carbondale, IL: Southern Illinois University Press, pp. 179–201
A new book on literary journalism is now on sale; a collection of essays by scholars from around the world. It contains two chapters by me.
One chapter, ‘Slow journalism in the digital fast lane’, examines narrative journalism in the age of the internet. It picks up where I left off in a 2007 Prospect article (see Item 3) and includes references to this blog. If you are new to Oddfish and interested in knowing more about the updates on the meme provided on the blog, please look here, here and here.
The new work’s advance on those earlier contributions is twofold. It attempts to map the emerging publishing platforms and relationships that will determine, in future, whether and how high-quality nonfiction storytelling reaches an audience. And it puts forward an argument in the long-standing debate about what makes a piece of writing ‘authentic’.
A set of conventions has developed for digital genres, around the normative ideals of raw vs cooked; artisan vs industrial; provisional vs complete. These qualities are invoked as a guarantee of authenticity but the assumptions behind the ideal are often tacit and therefore unexamined.
Once scrutinised, the ideal of a pure, raw text raises many questions. For writers, it is the ability to achieve some measure of distance from raw feeling that can leave readers free to find their own emotional response. And one only has to press ‘send’ on an email to know that, because of some trick of the brain, a text must be ‘finished’ before one can know, to the fullest extent, what needs changing. We need both change and constraint. These questions matter for narrative nonfiction:
Literary journalism represents an attempt to offer considered, original and documented writing that recognises that subjective experience needs verification to stay real. However the move into a digital environment puts it in potential conflict with a form of nonfiction that makes a virtue of its raw and instantaneous qualities. The challenge in this environment is to find important new ways of delivering the luxury of slow journalism’s reflection and documented discovery, and make creative use of the tensions at play, to allow for a further evolution of writing forms.
The other chapter, on Poland, reflects a long-standing preoccupation with central Europe. But at one level, the main discovery made in the writing related to problems of long vintage, rather than anything specific about a country or region. In both essays, reporting is understood as a form of expanded consciousness – a personal experience that is deliberately turned outward and tested by verification. And in both, there is an exploration of the ways it can fall foul of the ideals of ‘committed speech’, in one form or another. About Poland, I write:
The experience of East-Central Europe seems to indicate that when committed reportage is on the outside it can function as literature, albeit one that is not to everyone’s taste. But when it is on the inside, this becomes impossible – it cannot sustain itself because it is simply unbelievable.
I went to pay the paper bill the other day. The shopkeeper consulted rows of perforated pink delivery slips; both she and I were surprised to discover it was nearly six months since my last visit. I muttered something about ‘a busy period’ and paid the bill.
Later I remembered that the same period had passed since this blog was updated.
Death has a way of knocking time out of its daily orbit. This time, the shift began on Valentine’s Day, when my father went into hospital for a check-up. By the evening he had become an in-patient; he passed away a few weeks later. In the intervening period, every day contained a multitude of dramas. He was elderly and had been struggling with chronic disease, but in life the precise details of a story’s end cannot be known, and so death always comes as a surprise.
This blog is dedicated to work-in-progress, not personal life, but sometimes progress cannot be resumed until we do something to note, in public, the shift in life’s orbit.
Father was a sharp-witted, well-read man who – much to his later regret – dropped out of a PhD and teaching job at Rutgers University to support a growing family. Even in unpromising conditions the learning instinct remained. Sorting out the personal effects left behind, we found the letter of appointment from the university, a treasured document. A box of photographs included one of a man in his prime, standing at the blackboard, delivering a lecture to his staff.
At the memorial gathering I recalled standing in the kitchen – a young woman trying to hold her own – while father challenged me to defend my views and come up with clear arguments, clearly put. It reminded me of Lewis Carroll’s verse: ‘In my youth [...] I took to the law and argued each case with my wife. And the muscular strength that it gave to my jaw has lasted the rest of my life.’