Stewart Wills: Scholarly Communicator, Writer, Editor, Web Professional (http://www.stewartwills.com)

The Zeitgeist Just Ain’t What It Used to Be
http://www.stewartwills.com/2014/08/02/the-zeitgeist-just-aint-what-it-used-to-be/ (Sat, 02 Aug 2014)

Zeitgeist coffeehouse, corner of Jackson Street and Second Ave. S., Pioneer Square, Seattle, Washington. Photo by Joe Mabel.

Early last month I ran across a post called “Beware the zeitgeister” by Seth Godin, the senior member of the Godin/Shirky/Joel axis of totally bald-headed media/marketing gurus. Godin’s characteristically delphic utterance (commendably shorter than the bloated post you’re reading now) warned against personalities who care only “about what’s trending now,” and who will “interrupt a long-term strategy discussion to talk urgently about today’s micro-trend instead.” He closed with this:

The challenge, of course, is that the momentary zeitgeist always changes. That’s why it’s so appealing to those that surf it, because by the time it’s clear that you were wrong, it’s changed and now you can talk about the new thing instead.

However much I sympathize with the post’s core idea, I must confess that Godin hung me up a bit with the term “momentary zeitgeist.” Is such a thing even possible? I wondered. Then, almost in real time, I ran across another instance of the z-word in a post by Catherine Taylor, the witty social-media gadfly of MediaPost, who, in a rumination on a tweet dreamed up by Procter & Gamble to take advantage of the recent, temporary mass insanity surrounding the domicile decisions of LeBron James, noted that all P&G had done was “to appropriately piggyback on something in the zeitgeist that’s been brewing over the last few days.”

Disturbed, I ran to fetch my dog-eared copy of Scribner’s five-volume 1973 classic Dictionary of the History of Ideas, a decidedly analog relic of my nerdy young adulthood (and now, somewhat to my amazement, available complete online at a moderately difficult-to-navigate site at the University of Virginia). Not surprisingly, the entry for “Zeitgeist” — written by Nathan Rotenstreich, a professor of philosophy at Hebrew University — is the last entry in the dictionary, and I here take the liberty of quoting its first paragraph in full:

Along with the concept of Volkgeist we can trace in literature the development of the cognate notion of Zeitgeist (Geist der Zeit, Geist der Zeiten). Just as the term Volkgeist was conceived as a definition of the spirit of a nation taken in its totality across generations, so Zeitgeist came to define the characteristic spirit of a historical era taken in its totality and bearing the mark of a preponderant feature which dominated its intellectual, political, and social trends. Zeit is taken in the sense of “era,” of the French siècle. Philosophically, the concept is based on the presupposition that the time has a material meaning and is imbued with content. It is in this sense that the Latin tempus appears in such phrases as tempora mutantur. The expression “it is in the air” is latently related to the idea of Zeitgeist.

The good Dr. Rotenstreich — who also wrote the Ideas dictionary’s entry for “Volksgeist” and appears to have intended the two articles as a pair — walks readers at length through the philosophical pedigree of both terms: the connotative ties of the German Geist to the Hebrew ruah, the Greek pneuma, and the Latin spiritus; the early emergence of the concepts of a “spirit” of nations and of eras in the work of Enlightenment thinkers such as Montesquieu, Voltaire, and Kant; the solidification of the two concepts in the thought of Johann Gottfried von Herder; and their flowering in the world-historical philosophy of Hegel. The main point for now is that, as the quote above suggests, “Zeitgeist” traditionally denotes the spirit of an era or an age, and that the time itself “has a material meaning and is imbued with content.” That seems very far from the usage of the two commentators quoted at the outset, who apparently find “zeitgeist” a convenient catchword for what’s hot at the moment — a river of ever-changing “trending” topics that one can mine for a quick reading in the way one reads the car’s oil level from the dipstick.

So when did “Zeitgeist” lose its original historical usefulness — when, if you will, did it lose its initial capital and its italics — and become a buzzword for whatever is trending on the Web at the moment? As with so many of the mixed blessings of the modern age, the temptation is to lay this one at the doorstep of Google, which, in 2001, first released Google Zeitgeist, an online tool that allowed visitors to see what users were predominantly searching for at any given time. The real-time version of Zeitgeist was replaced in 2007 by Google Trends, but Google still releases a yearly review of hot search terms under the Zeitgeist brand. It seems reasonable (at least for a toffee-nosed snob like me) to suspect that this feature may have marked the first encounter of many in the population with any form of the term “Zeitgeist.”

Fortunately, we can test the hypothesis with the new and sublimely time-wasting NYT Chronicle toy dreamed up by the Web wizards at The New York Times. NYT Chronicle lets you enter a word or words in a search blank, and spits out a simple line graph showing the percentage or number of Times articles in which the word has appeared, across the Paper of Record’s entire printed history since 1851. (Nota bene: With NYT Chronicle, we can finally bring some statistical rigor to the unsupported proposition of Keats that “Beauty is Truth, Truth Beauty.” The answer from NYT Chronicle is a qualified yes — Beauty is Truth, but only for a limited period from 1927 to 1965.)
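For the curious, a Chronicle-style tally is conceptually simple: for each year, count how many articles contain the term. Here is a minimal sketch in Python; the tiny corpus is invented purely for illustration, standing in for an archive like the Times’.

    from collections import Counter

    # A tiny, invented stand-in for an article archive: (year, text) pairs.
    articles = [
        (1985, "A new exhibition captures the zeitgeist of the decade."),
        (1985, "City council debates transit funding."),
        (2010, "Purple hair re-enters the zeitgeist."),
        (2010, "The zeitgeist of social media, explained."),
        (2010, "Local team wins the championship."),
    ]

    def term_counts_by_year(records, term):
        """Tally, per year, how many records mention `term`, plus yearly totals."""
        term = term.lower()
        hits, totals = Counter(), Counter()
        for year, text in records:
            totals[year] += 1
            if term in text.lower():
                hits[year] += 1
        return hits, totals

    hits, totals = term_counts_by_year(articles, "zeitgeist")
    for year in sorted(totals):
        share = 100.0 * hits[year] / totals[year]
        print(f"{year}: {hits[year]} of {totals[year]} articles ({share:.0f}%)")

The real tool plots exactly this kind of per-year count (or percentage) as a line graph.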

I dropped “zeitgeist” into NYT Chronicle, and here is what I ended up with:

The number of occurrences of “zeitgeist” in the NY Times (chart from NYT Chronicle)

Conclusion: The use of the term did see a big jump after ~2003 that could plausibly relate to the Google product — but it’s also clear that “zeitgeist” was trending up in the pages of the Times long before then. (The rise after 1988 may also be partly due to the formation of the motion picture distributor Zeitgeist Films, the name of which appears frequently in the credits section of Times film reviews.) The apparent, precipitous decline at the end of the series simply reflects the fact that we’re only a bit more than half done with 2014.

Of course, the number of articles on the Times chart is rather small; apparently there’s only so much zeitgeist the Gray Lady can handle, even in the bumper-crop year of 2010 (117 articles). The broader sample raked through by Google News, meanwhile, finds, in the past month or so, more than 850 articles containing “zeitgeist” (including, it must be said, many duplicates). And a quick, pseudo-random trawl through a few of those comes up with some real humdingers:

Does our fashion photography look good enough? I trust Tomas Maier’s opinion on this more than almost anyone’s. And there are many a modern swan who I’m zinging emails at to make sure the mag touches that zeitgeist: Alexandra Richards, Liz Goldwyn, Kick Kennedy.—From a New York Observer interview with Jay Fielden, editor of Town & Country

[M]any of these records captured, as most truly great albums somehow do, the very essence of the time, its aura, its being, its, for lack of a better word, zeitgeist.—“Ed Kowalczyk to Celebrate 20 Years of Throwing Copper at Paramount“, Long Island Press

It is all about pastels, possums! In what is chalked up as a throwback to the 1950s, purple hair has re-entered the zeitgeist with more gusto than the twist at a sock hop.—“Hair styling hits a purple patch as kooky becomes cool“, Sydney Morning Herald

There is even — yes, you saw this coming — a recent article on Sports Illustrated‘s SI.com titled “The Zeitgeist of LeBron James.”

The popularity of “zeitgeist” is easy enough to understand; it’s one of those words that make you seem just a little smarter than you actually are, and thus are catnip for insecure writers like me. And, of course, given the ceaseless deluge of screen media and the nanosecond attention spans it’s engendering, some sort of term was needed to cover the Spirit of the Instant. What a shame that the arbiters of media taste didn’t just create a new one — “InstaGeist,” perhaps, or, even better, “Augenblickgeist,” the “Spirit of the Eyeblink.”

Because what’s most troubling about the cheapening of “Zeitgeist” is not so much its extension to ever more trivial twists and turns on social media, as it is the nagging feeling that we don’t really have any use for the original sense anymore. When we think, if at all, about how future historians will catalog the current age, we have a vague sense that the rise and prevalence of technology will constitute the commanding theme. But how will the era stack up culturally — indeed, will there be a definable cultural “era” at all — and what will be the unit of time that ultimately “has a material meaning and is imbued with content”? Are we approaching asymptotically, with each passing, media-saturated day, a situation where that characteristic time equals zero — where the content is so overwhelming, and the time to which it’s attached so short, that the Augenblickgeist is the only Geist that has any relevance?

“How shall a generation know its story,” wrote Edgar Bowers in his poem “For Louis Pasteur,” “if it will know no other?” It would be ironic indeed if the rise of the “momentary zeitgeist,” so rooted in the immediate present, ultimately marked the main defining theme of our real … Zeitgeist.

Post-Work, New Work, No Work: Some Thoughts on Automation and Employment
http://www.stewartwills.com/2014/06/14/post-work-new-work-no-work-some-thoughts-on-automation-and-employment/ (Sat, 14 Jun 2014)

Robots only eat old people (Photo credit: Mark Strozier)

Many years ago, I first read Ray Bradbury’s story “There Will Come Soft Rains,” anthologized in The Martian Chronicles. The story, if you haven’t read it, offers a day in the life of what we now might call the ultimate “smart home” — the last house standing in a California suburb after nuclear devastation — as its robotic enhancements blandly and automatically go through their daily tasks in the absence of their living human overlords. The house even selects the poetry reading for the evening, the six couplets by Sara Teasdale that give the story its name.

When the story first appeared, in 1950, its most arresting message no doubt lay in its post-apocalyptic imagery: the silhouette of a man mowing his lawn, captured by the nuclear flash; the “radioactive glow” given off by the town at night. Far more resonant today, however, seem to me the pictures of automation replacing common human activity: the armies of robotic mice and “scrap rats” that scurry around to clean the house, alert to any detritus that blows in; the bathtub, automatically drawing a bath for the children; the house as “an altar with ten thousand attendants,” blithely serving, unconcerned that “the gods had gone away.”

While the spectre of automation, computers, and robots taking over human activity, and human jobs, has long been a staple of science fiction, mainstream discussion of the possibility — arguably the most pressing economic concern of our times — seems only to have really picked up steam in the past couple of years. A watershed moment was a breathless Bloomberg report earlier this year on a 2013 study from Oxford’s Martin School that suggested that 45% or more of the U.S. labor market might be vulnerable to replacement by automation within the next 20 years. The Bloomberg report, aided by the usual spiffy infographic, immediately went viral, and sent people stampeding to the appendix of the Oxford paper to see where their own occupations stood on its spectrum of doom.

The results — while no doubt the source of considerable schadenfreude among audiologists, choreographers, personal trainers, and others whose jobs scored low on the “computerizable” index of the Oxford study — were surprising and sobering for the many others for whom the handwriting apparently is on the wall. Most striking, perhaps, is the wide range of positions thought to be at risk: not just factory jobs, so many of which have already fallen prey to robotic replacements, but other positions such as insurance underwriter (99% probability of replacement by automation), restaurant host/hostess (97%), restaurant cook (96%), compensation and benefits manager (96%), budget analyst (94%), and accountant (94%). Persons holding these latter jobs must find the conclusion especially vexing, as they no doubt have tended to view themselves as among the “knowledge workers” to whom the future is supposed to belong.

And, of course, there are professional drivers of various types, all of whom score high on the vulnerability index. Indeed, the incredible and rapid progress of self-driving cars serves as a metaphor for the larger societal dislocations that seem so suddenly to be coming into view. Only four years ago, when the project was first announced, the Google self-driving car struck one as a fanciful, utopian experiment. But as that experiment has proceeded, the supposedly innumerable and higher-order human capabilities involved in piloting a vehicle have been falling so rapidly to automated solutions that a time when self-driving vehicles will not only be permitted, but required, appears to be in sight. And while there seems little doubt that self-driving vehicles will carry many benefits to society, it seems equally clear that they are going to throw a lot of people out of work.

The driverless car is only the most visible example of the general trend: The rapid advances in processing power, storage, software and sensor development, and, yes, human talent have reached the point of mutual self-reinforcement, putting the previously distant dreams of artificial intelligence on the table for all to see and grapple with. And the question is, what does it all mean for the quotidian reality that most of us inhabit, so much of which is defined by our work?

The usual answer from Silicon Valley (which, perhaps not coincidentally, is the place that benefits most from the current trends) is that we will enter some sort of post-work utopia, or that the encroachment of automation on ever-increasing swaths of human value creation will “free us up” to do “higher-order tasks,” whatever those might be. (At least, presumably, until machines develop to the point where they take over the higher-order work as well.) The most enthusiastic of the techno-Pollyannas (in this, as in so much else) is Kevin Kelly, the “senior maverick” at Wired magazine, who writes, at the end of a long essay on why we should embrace our robotic overlords:

We need to let robots take over. They will do jobs we have been doing, and do them much better than we can. They will do jobs we can’t do at all. They will do jobs we never imagined even needed to be done. And they will help us discover new jobs for ourselves, new tasks that expand who we are. They will let us focus on becoming more human than we were.

Let the robots take the jobs, and let them help us dream up new work that matters.

For me, there are several problems with this vision of a future of robotically enabled navel-gazing about “work that matters.” For one, there seems little evidence that post-industrial capitalism (and particularly the Silicon Valley elites that appear to hold the reins right now) has much interest in it. There’s at least some reason to believe that the march to automation has underlain the sluggish return to pre-recession employment levels and the troubling increase in income inequality that has characterized the past 15 years. The vested interests who have benefited have little reason to change things for the betterment of society at large, irrespective of the “change the world” rhetoric that gushes from the communications offices of Silicon Valley firms. The proverbial “one percent” might have the luxury to “dream up new work that matters,” but it’s hard to be as sanguine for the rest of us.

(An interesting variant of Kelly’s vision, by the way, comes from Jaron Lanier, the author of Who Owns the Future? To his credit, Lanier is willing to envision a potential future of “hyper-unemployment, and . . . attendant political and social chaos” that might ensue if no steps are taken to align the current drive toward “software-mediated productivity” with human needs. His apparent “solution,” however, is that in the future, all of us will receive a stream of thousands of nanopayments from the Facebooks of this world, who will, for some reason, start compensating us for the reams of information we all now happily provide for free. This answer, in addition to seeming highly unlikely to come to pass, also strikes me as a rather barren view of human value-creation.)

Another problem with the proposals of insouciant futurists like Kelly, and many others who envision a radical reshaping of human work for the better, is that, while they invariably invoke the likelihood of a “painful transition” between the current, flawed now and the utopian later, they provide little information on how we might negotiate it. For most of us, the “work that matters” is the work we have now, which supports our families and our futures, and we won’t have the option of waiting for a better future when that work dries up. Even if Kelly’s vision were a plausible one, we face an economic version of the “uncanny valley” of robotics: We stand at the edge of an economic precipice, looking at the distant other side, with little indication of how to cross to it.

The economist’s answer to the quandary attempts to take comfort from history, pointing to the Industrial Revolution’s impact on existing jobs and the new jobs that it ultimately created. Similarly, according to these voices, the raft of new technologies will destroy many jobs, but also create new ones. In an interview, Andrew McAfee, the coauthor of The Second Machine Age, made the following observations about the impact of the industrial robot Baxter:

Baxter is taking away some routine manual work in factories. At the same time, he is going to need people to repair him, to add functionality, to train him. So Baxter and his kin are absolutely going to create labor.

This statement, of course, ignores some basic mathematics: the number of people required to support Baxter will be far smaller than the number of people “he” will displace; otherwise there’d be no need for a Baxter at all.
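To put rough numbers on that point, here is a back-of-the-envelope sketch; every figure in it is a hypothetical assumption, chosen only to show why a robot worth deploying at all must, by definition, displace more labor than it creates.

    # All figures below are hypothetical, for illustration only.
    workers_displaced_per_robot = 4.0   # assumed
    support_jobs_per_robot = 0.5        # repair, training, programming (assumed)
    robots_deployed = 1_000             # assumed fleet size

    # If support_jobs_per_robot were >= workers_displaced_per_robot,
    # the robot would never pay for itself in labor terms.
    net_change = robots_deployed * (support_jobs_per_robot - workers_displaced_per_robot)
    print(f"Net change in jobs: {net_change:+,.0f}")   # -3,500 under these assumptions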

In the end, the optimism that McAfee expresses in the interview that we will, as with the Industrial Revolution, “wind up in another happy equilibrium” rings a bit hollow. The current disruption appears to be operating across a swath of human endeavor unprecedented in its breadth, potentially leaving large numbers of “ordinary” workers without many options. And each new advance in labor-displacing technology seems only to create new ideas on how to take things further and extend automation to new and previously off-limits areas of human endeavor.

We are left, then, with a somewhat darker view: a largely unmanaged replacement of human capital with automation, proceeding at once-unimagined speed, and without a political context equipped to deal with it. A recent posting by Nicholas Carr notes work from a number of economists suggesting that we are actually starting to witness a decline in the demand for workers with high cognitive skills — the “higher-order tasks” that automation is supposed to free us for — and that this decline “has indirectly affected lower-skill workers by pushing them out of jobs that have been taken up by higher-skilled workers displaced from cognitive occupations.” As computers increasingly take on more analytical tasks and tasks formerly requiring human judgment, Carr suggests, it’s possible that workers across the board “are being pushed down the skills ramp,” not up.

While hardly an uncontroversial figure, the economist Lawrence Summers seems to have captured the crux of the baleful situation rather succinctly just a few weeks ago, at a “Conference on Inclusive Capitalism” organized by the Financial Times. Unencumbered by future visions dreamed up by an entitled Silicon Valley elite, Summers candidly stated that “We are seeing less and less opportunity for what average people — people lacking in certain skills — are going to be able to do.” And, he added, part of the answer lies in “channeling technological change so it reinforces the abilities of all, not just levers the abilities of those who are most able.”

“That is going to be the largest challenge for capitalism going forward,” Summers concluded. “And the first step is to recognize it.” Let’s hope that someone is listening.

 

Millennial Expectations
http://www.stewartwills.com/2014/03/27/millennial-expectations/ (Thu, 27 Mar 2014)

Cropped screenshot of Loretta Young from the trailer for the film Employees’ Entrance (Photo credit: Wikipedia)

Periodically, articles cross my desk providing “expert guidance” on how to attract, and keep, young employees — whose needs, owing to the hyperkinetic, information-saturated environment in which they came of age, presumably differ radically from those of members of the less dynamic ancien régime.

One of the most fatuous such service articles in recent memory (which came to me via LinkedIn, always a reliable source of shallow “thought leader” material) is entitled “How to Work with Young Employees,” and appears on the blog of New Brand Analytics. The author of this profundity identifies five “simple tips to keep Millennials engaged”:

  1. Listen to their ideas for how to improve the organization.
  2. Make work fun.
  3. Give feedback often.
  4. Provide frequent, public praise.
  5. Don’t bore them with repetitive tasks.

Mirabile dictu! However, shortly after the scales had finished falling from my eyes (indeed, while they were still clattering on the floor at my feet), I found myself wondering whether there was any employee, irrespective of relative burden of years, who wouldn’t want these things. “Millennial employees absolutely love receiving praise and want public recognition,” the post’s author gushes, as if sharing the inside-est of inside dope. I answer: Well! Who doesn’t?

Granted, I may succeed here only in further burnishing my already shimmering credentials as a Cranky Old Fart. But it does seem reasonable to raise, as an alternative working hypothesis, the possibility that the current crop really isn’t that different from the generations that preceded them, and thus shouldn’t be encouraged to entertain radically different expectations about the long-term trajectory of their careers. At some point, the dull, repetitive tasks still have to get done; indeed, doing them has traditionally formed part of the hodgepodge of corporate realities roped together under the tired but still-apt cliché of “paying one’s dues.”

Also, we might ask ourselves whether we do young employees any real favors by pandering to them. A January entry on the Washington Post’s “On Leadership” blog cited a survey commissioned by Bentley University that suggested a profound disconnect between the expectations of young workers entering the job market and those of the older workers who actually will be hiring them. Among the perceptions of the survey’s older respondents, said the article, “were that recent college graduates are harder to retain, lack a strong work ethic and aren’t as willing to pay their dues as previous generations were.” And those perceptions, correct or not, have real-world consequences for the members of this demographic:

Perhaps as a result of this, half of the business professionals who responded to the study said that their companies tend not to invest in young workers’ career development, out of a sense that they will change jobs quickly and aren’t worth the investment.

Based on my own very positive interactions with Millennials on the job, I have strong doubts that any of the perceptions about their supposed lack of work ethic are actually true. But feeding the idea that this generation is somehow a breed apart, and needs to be constantly praised and protected from the occasional boredom endemic to any job or they will jump ship, can only reinforce the notion that they aren’t worth investing in. That really would be a tragedy.

 

Does x know about y?, and Other Social-Media Math Puzzles
http://www.stewartwills.com/2013/11/09/does-x-know-about-y-and-other-social-media-math-puzzles/ (Sat, 09 Nov 2013)

English: jacket cover of Dominate your market with Twitter – UK edition (Photo credit: Wikipedia)

How much is a human life worth? That fraught question arises in the chaos of war and revolution, and in the delicate and painful arena of what are euphemistically called “end-of-life decisions.” Now, thanks to the recently completed initial public offering of Twitter, we have an answer: One hundred thirty-seven dollars.

That, in any event, is the value that one gets by dividing the $31.8 billion first-trade market capitalization of Twitter’s shiny new New York Stock Exchange issue by the company’s most recent count of “active monthly users,” 232 million. (I can do the math, but TechCrunch has thoughtfully done it for me.) We can all take comfort in the fact that the value of an individual human by that standard has ticked up ten bucks, or 7.9%, since the ill-starred IPO of Facebook a year and a half earlier.
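The arithmetic is easy to check; in the sketch below, the Facebook-era figure is simply back-calculated from the “ten bucks” and “7.9%” comparison above, not taken from any filing.

    # Twitter's first-trade valuation per "active monthly user"
    twitter_market_cap = 31.8e9        # dollars, first-trade market capitalization
    active_monthly_users = 232e6       # Twitter's reported count

    value_per_user = twitter_market_cap / active_monthly_users
    print(f"Twitter: ${value_per_user:,.0f} per user")          # ~$137

    # Implied Facebook-IPO figure: ten dollars less, per the comparison above
    facebook_value_per_user = value_per_user - 10
    print(f"Change since Facebook's IPO: {10 / facebook_value_per_user:.1%}")  # ~7.9%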

“Market value per user” is a convenient metric for companies like Twitter — more convenient, anyway, than troubling, fusty old valuation measures like price/earnings ratio, since as far as I can tell Twitter has never earned a dime in profit. (Indeed, Twitter seems to be managing to rack up ever-bigger deficits; its third-quarter loss expanded 200% year over year, on a doubling of revenues.) But although TechCrunch, at least, seems to view value-per-user as a metric that might actually have some kind of meaning, clearly it raises as many questions as it answers.

First, of course, is: Just what is an “active monthly user”? A very small amount of digging reveals that Twitter — “like almost every online platform” — defines an active monthly user as someone who accesses the platform once or more per month. Well! I would respond: Active, no; monthly, yes; user, barely. Presumably, this definition would include the German avant-garde “Merz” artist Kurt Schwitters, who tweets daily, and a sample of whose postings I’ve reproduced below:

[Image: a sample of tweets from the Kurt Schwitters account]

Personally I don’t find this particularly illuminating, but I give Schwitters the benefit of the doubt, and definite props for trying — after all, he has been dead for 65 years.

But I am, perhaps, picking on Twitter a bit too much, as other social-media platforms provide their own wealth of pseudo-data with which to grapple. For example, we have those “Likes” on Facebook that we all pore over, struggle to interpret, paste onto PowerPoint slides for senior-management presentations, and generally grope toward as evidence of “brand engagement.” But as engagement currency, Facebook “Likes” strike me as a very base kind of coinage indeed. I’ve seen clicking the “Like” button described as “the closest thing on the Internet to a grunt” — a brief, inarticulate expression of vague approval, with very little activation energy and no long-term consequence whatever. In a world where an ad for “the most stolen underwear on the planet” can garner nearly 27,000 such passive nods in very short order, one can’t help but wonder about the value of any one of them.

But of all of the new metrics with which social-media platforms now confront us, none seem quite as problematic as those “endorsements” that appear to have all but superseded more thoughtful written “recommendations” on LinkedIn. I’m sure you’re familiar with them; occasionally, you receive an e-mail that one of your connections on the network has “endorsed” you for a particular skill. You feel a brief flush of pleasure, and follow the link, to be confronted with a matrix of questions about the expertise of others in your network, and invited to return the favor by endorsing them:

Does Jane Doe know about content strategy?

Does John Roe know about Web design?

Does James Williams know about HTML 5?

Does Mary Smith know about REST APIs?

By the mere act of clicking the blue “endorse” button, you can provide that same brief pleasure of recognition to a friend or colleague, without any real negative consequences. Why not just click? Even in the bleakest future imaginable, it is hard to envision the proverbial 2:00 a.m. rapping on the door by jack-booted thugs for the crime of an ill-informed endorsement of someone’s HTML 5 skills.

Yet, for me, it is more complicated. I spend a surprising amount of time suffering over these decisions, tormented by doubt. Does Jane know about content strategy? When I knew her, years ago, she was a proofreader, but surely she could have grown and changed since then . . . In the end, I tend to be rather stingy with my endorsements, reserving them only for cases where I have first-hand evidence. Jane, after all, will never know that I didn’t endorse her.

But not everyone views these things the same way — and that is what makes the LinkedIn endorsement so insidious. This was driven home to me recently when I received an endorsement on a relatively narrow skill by a LinkedIn contact at another organization. When I received the e-mail notifying me of this endorsement, I spent a few minutes with furrowed brows, wondering not only how this person could know about my skills in this particular area, but, in fact, just who this person actually was. I never did resolve either quandary.

So LinkedIn endorsements, while gratifying in themselves, have the worst characteristics possible as an actual practical measurement: They carry a patina of insider authority, but there are no standardized criteria for selection, and no indication of what an endorsement really means — and it appears that some people actually take them seriously. So you can either ignore them, or accept yet another unfunded mandate on your ever-dwindling leisure time. Case in point: I recently received an e-mail inviting me, for the mere cost of some personal data, to read a new whitepaper on “How to Manage Your LinkedIn Endorsements.” (Yes, there’s a “whitepaper” on that subject.)

We face an uncertain future, but we can count on one thing, at least: we all will confront an increasing burden of attending to these and other new, questionable measurement axes in our online personae — which social-media platforms, desperate to make some money and justify their high IPO prices, will continually cook up and foist upon us. Maybe it really is time to unplug.

 

The Real “STEM Crisis”: A Sloppy Label
http://www.stewartwills.com/2013/09/29/the-real-stem-crisis-a-sloppy-label/ (Sun, 29 Sep 2013)

Rear Adm. Nevin Carr, Chief of Naval Research, delivers opening remarks at the Office of Naval Research-hosted Naval Science, Technology, Engineering and Mathematics (STEM) Forum. (U.S. Navy photo by John F. Williams/Released) (Photo credit: Wikipedia)

Is there really a “STEM crisis”? To the discouragement of art history students everywhere, we frequently hear politicians, business leaders, and others warn of dark social and national consequences unless we significantly ramp up the percentage of students majoring in science, technology, engineering, and mathematics, the four pillars of the STEM acronym. This has always rung a bit hollow to many of us who actually spend time dealing with scientists, who hear tales of hapless souls bouncing from postdoc to postdoc with no steady job prospect in sight, who observe the popularity of the “alternative careers” sections of the Nature and Science careers sites, and who read ruminations like this one, in which the author sees the current landscape for biology “knowledge workers” as something approaching indentured servitude:

I have seen a comparison of a postdoc with a piece of equipment that is replaced whenever there is a new model on the market—and nobody buys a second-hand Polymerase Chain Reaction machine. Despite what I feel are excellent skills, I feel like a used piece of equipment.

If there is indeed a crisis in the pipeline of STEM workers to drive future innovation, how do we explain outpourings like this — which, believe me, are not rare?

One obvious possible answer is that, political cant to the contrary, there is no “STEM crisis” at all. The most exhilarating recent smackdown of the hypothesized STEM worker shortage appeared in an August 2013 IEEE Spectrum article by Robert Charette, candidly titled “The STEM Crisis Is a Myth.” Bringing to bear equal parts data and withering sarcasm, Charette shows that, notwithstanding the billions of dollars that governments are pouring into boosting STEM education efforts, there are already far fewer STEM jobs than there are qualified candidates to fill them; that false warnings of a STEM crisis have been a recurring theme from political and business leaders since the end of World War II, not just in the U.S. but in other countries; and that STEM wages are not showing anything like the growth one would expect given a labor shortage. As is de rigueur on the Web, the article boils the matter down to a single, devastating infographic that compares the huge pool of STEM degree holders already working outside of their chosen field with the comparatively puny number of new jobs.

Given these data, why all the talk of a “STEM crisis”? In a peroration that stirred the few remaining embers of my youthful socialism, Charette finds the answer in capitalist vested interests seeking to ensure an oversupply of knowledge workers, the better to keep wages down and the bottom line up:

No less an authority than Alan Greenspan, former chairman of the Federal Reserve, said as much when in 2007 he advocated boosting the number of skilled immigrants entering the United States so as to “suppress” the wages of their U.S. counterparts, which he considered too high.

(This particular quote prompted the amusing rejoinder from a former colleague of mine that “Maybe we need more H-1B visas to bring down economists’ pay.”)

Charette’s article did leave me wondering, though, whether part of the problem lies in the acronym “STEM” itself. Does the supply of jobs in enterprises as different as basic science, technology, engineering, and mathematics really respond monotonically to the same economic forces? The obvious answer is no, and is implicit even in some of the evidence that Charette adduces to eviscerate the supposed STEM crisis, sensu lato. John Miano, writing at the end of 2011, notes that in the previous year the U.S. “lost 19,740 computer jobs, 107,200 engineering jobs, and 243,870 science jobs, according to the Bureau of Labor Statistics” — losses across the board, admittedly, but of very different magnitude, with the broad category of “science jobs” clearly suffering the most. Do we have a “STEM crisis,” or only a “TEM crisis”?

A report from the American Institutes for Research weighs in firmly on the side of the latter. Titled “Higher Education Pays: But a Lot More for Some Graduates Than for Others,” this study — which should be read by, among others, any college-bound senior or parent thereof — looked at data from five states on the earnings of college graduates when they entered the labor force. The report concluded that, by this one metric at least, what students study is a lot more important than where they study, targeted short-term Associate’s degrees can have a lot more immediate earnings power than Bachelor’s degrees, and, most provocatively, “the ‘S’ in STEM . . . is oversold”: “Evidence does not suggest that graduates with degrees in Biology earn a wage premium—in fact, they often earn less than English majors.”

Less than English majors? Mon Dieu! [Full disclosure: the author of this post earned his first Bachelor’s in English, and has no regrets.]

Even within the separate divisions that make up the STEM rubric, things are far from homogeneous. An interesting and somewhat bewildering Congressional Research Service report from this past May on the science and engineering workforce projects that the largest number of new jobs through 2020 will come in areas like software development, civil engineering, and medical science, whereas some other engineering sectors such as agricultural engineering, mining engineering, and marine engineering, scientific sectors such as materials science and space science, and mathematics generally will see the smallest number of new jobs created. A recent episode of NPR’s “Planet Money” series suggested that for students going to school today, petroleum engineering is the true road to riches. Clearly, where the “STEM crisis” is concerned, we need to be splitters, not lumpers.

All of this matters because, as Charette correctly notes, our rush to throw funding, energy, and rhetoric at a STEM crisis that is, at best, highly variable and, at worst, an illusion not only condemns today’s students to disappointment, but also risks crowding out support for other worthwhile areas of human experience — “the arts, literature, and history” — that have traditionally carried intangible but real benefits in both the soul and the workplace. In perhaps the most revealing extract from the entire article, Charette quotes the former chairman of Lockheed Martin, who wrote, in an op-ed for the Wall Street Journal:

In my position as CEO of a firm employing over 80,000 engineers, I can testify that most were excellent engineers . . . But the factor that most distinguished those who advanced in the organization was the ability to think broadly and read and write clearly.

Well! Perhaps that English degree was worth something after all. Somewhere, my late father, who paid the tuition, must be smiling.

Google’s “20% Time”: Goodbye to All That . . .
http://www.stewartwills.com/2013/09/07/googles-20-time-goodbye-to-all-that/ (Sat, 07 Sep 2013)

Google Refrigerator (Photo credit: Aray Chen)

A couple of weeks back, Atlantic Media’s business news outlet, Quartz, broke the story that Google’s famed “20% time,” under which employees were supposedly encouraged to take up to 20% of their time to work on things not tied to their main position, was “as good as dead.” This perk has, of course, become a leitmotif whenever the table talk gets around to the subject of innovation — “We should all be like Google; they let their employees use 20% of their time to work on whatever they want.” Now, it would appear — at least if the initial Quartz piece is to be believed (and it spawned a great deal of contemplative thumb-sucking and follow-ups in the tech press) — that Google is getting more interested in having people just buckle down and do their jobs.

For its part, Google quickly followed up with a more or less information-free announcement declaring that 20% time is “alive and well,” showing that, whatever the policy’s actual status, the company understands its PR value. But just what is — or was — 20% time? The most reliable, and least exciting, statement of it comes in Larry Page’s “Founders’ IPO Letter,” included with the company’s 2004 S-1 when it went public: “We encourage our employees, in addition to their regular projects, to spend 20% of their time working on what they think will most benefit Google.” Google software engineer Bharat Mediratta, in a much-linked “as-told-to” puff piece in the New York Times on “The Google Way,” said that the 20% time is “to work on something company-related that interests them personally.” If you parse it out, that’s not exactly what Page said (but then, Page was talking to shareholders . . .).

So far, so good; these statements, at least, root the policy’s rationale in the company’s ultimate goals. But the more broad-brush treatment in the tech press — which has variously described the policy as giving employees space to work on “creative side projects,” “their own independent projects,” “experimenting with their own ideas,” and even “anything they want” — summons up visions of an unstructured weekly play date divided among desultory coding, navel-gazing, skateboarding, and other pursuits (“Boss, it’s my 20% time; Tetris helps to unlock my creativity”). A lengthy, interesting, and ultimately amusing thread on Hacker News featured predictably libertarian and anti-corporate themes, rooting the fall from this creative programmers’ Eden in the infiltration of “MBA-think” and the profit motive into the “Don’t Be Evil” halls of the Googleplex.

The reality, as always, is more nuanced, and, perhaps a bit uncomfortably for Google, makes it look like just another company after all (albeit a hugely successful one). A follow-up piece in Quartz, drawn from the aforementioned Hacker News discussion, noted that, according to some Google engineers, although the 20% policy is still in place, it has become a losing proposition actually to use it, owing to increasing pressure on management to measure and rank employees based on their actual output, and the lack of an incentive structure built specifically around the “20% projects.” The result, of course, is that “20% time” is sometimes referred to among Google staffers as “120% time” — that is, the small percentage of motivated and creative (and, perhaps, obsessive) staff willing to eat into their personal time “are free, as at any other job, to use their nights and weekends to do even more work.” Wally (of Dilbert fame) was apparently more right than he knew.

All of this seems perfectly natural. A thoughtful post on Ars Technica suggested that 20% time may be dead because Google “doesn’t need it any more” — that the company is no longer in start-up mode and now needs to focus on consolidating what it has. And, hard as it is to avoid a bit of schadenfreude at any tarnishing of the company’s cooler-than-thou image, the reality is that Google will continue to innovate and we will all continue to chase Google. But what drives that innovation won’t be “20% time” as such. It will be, as always, the efforts of a handful of driven, creative individuals willing to sacrifice a piece of their “personal lives” for a cool idea.

Do We Really Want Books to Be “Social”?
http://www.stewartwills.com/2013/09/01/do-we-really-want-books-to-be-social/ (Sun, 01 Sep 2013)

Readmill (Photo credit: Gustavo da Cunha Pimenta)

Well! We read this morning in a post tweeted by Joe Esposito that “E-Books Could Be The Future Of Social Media.” In that post, technology journalist Michael Grothaus, after a paragraph or two declaring his ostensibly neo-Luddite preference for print books over electronic, spends his remaining on-screen inches in fulsome praise of Readmill, a “small but growing app . . . that seems to have its pulse on the future of reading.” Readmill, explains Henrik Berggren, the app’s CEO (and I was not aware until now that apps had CEOs), aims to turn e-books into “niche social networks” brimming with real-time interactions among readers — and, of course, piping all of the data on those interactions back to authors and publishers to help them “make more informed decisions.”

I’ll let you read the post for yourself; suffice to say, a number of things in it struck me as unintentionally funny. Start with the big banner image at the top: It shows a line of five people, sitting on a couch in an old-fashioned bricks-and-mortar bookstore, each immersed in a print book. There is not an e-book in sight. All of them look thoroughly absorbed in what they’re doing, and while none of them is smiling, a sense of vast contentment radiates from the photo.

Note that although this is, at least in the broadest sense, a group of people, they are engaged in a fundamentally solitary activity. And nothing is “broken” here — the print format’s support of sustained concentration, its momentary banishment of distraction, is a feature, not a bug.

It seems to me that this is likely to be true irrespective of the platform. In a moment of pure hubris, Berggren, in his interview with Grothaus, asserts that e-reader platforms like Amazon’s Kindle are “doing it in the wrong way,” kind of a remarkable statement given that Amazon owns 45% of the e-book market and the Kindle is what got them there. While there are many reasons for that success, I think you could make an argument that part of it lies simply in the characteristics of the device itself, which (except perhaps in the case of the Kindle Fire) seems designed to be as book-like and distraction-free as possible. That, at least, is a big reason it’s worked for me. With my Kindle Paperwhite, I don’t just skim, I read.

There are larger social issues here than Amazon’s bottom line or the promise of apps like Readmill, when we think about what the rise of screen media has already done to our ability to engage with books as intellectual objects. Cory Doctorow memorably referred to the Internet as “an ecosystem of distraction technologies”; what happens when we start to embed them into activities, like reading, for which a lack of distraction is an essential part of the experience? We already have some idea, given the differing ways we interact with HTML pages versus PDFs versus print material. And an interesting survey earlier this year, focusing on the reading habits of children in the U.K., found that, yes, more children are now reading on electronic devices than in print, and they prefer it that way — but that “those who read only on screen are . . . three times less likely to enjoy reading (12% compared to 51%)” than those who read in print.

Certainly the world is changing, but it does seem worth thinking about these data and their implications for what the thing that we call “reading” actually is, and is becoming.

Also unintentionally funny in Grothaus’s article is his bout of hand-wringing over the implications of Readmill’s proposed business model. That model, like so many we are seeing these days, seems to revolve around selling data on user interactions back to interested parties — in this case, book publishers. Grothaus is worried: Won’t this mean that publishers will start to use these data to lean on authors and dictate their writing styles to boost sales? Berggren offers this glib response: “You can paint a very dystopian future where publishers say, ‘Oh, people are just skipping this chapter. You can’t write like this anymore.’ However, I think that’s unlikely to happen.”

Well, that’s a relief.

(Nota bene: Berggren’s response, in addition to being a dodge, shows how successfully we’ve managed to cheapen the term “dystopian.” You want dystopias, Henrik? I’ll show you some dystopias.)
