
The Real “STEM Crisis”: A Sloppy Label

Rear Adm. Nevin Carr, Chief of Naval Research, delivers opening remarks at the Office of Naval Research-hosted Naval Science, Technology, Engineering and Mathematics (STEM) Forum. (U.S. Navy photo by John F. Williams/Released) (Photo credit: Wikipedia)

Is there really a “STEM crisis”? To the discouragement of art history students everywhere, we frequently hear politicians, business leaders, and others warn of dark social and national consequences unless we significantly ramp up the percentage of students majoring in science, technology, engineering, and mathematics, the four pillars of the STEM acronym. This has always rung a bit hollow to many of us who actually spend time dealing with scientists, who hear tales of hapless souls bouncing from postdoc to postdoc with no steady job prospect in sight, who observe the popularity of the “alternative careers” sections of the Nature and Science careers sites, and who read ruminations like this one, in which the author sees the current landscape for biology “knowledge workers” as something approaching indentured servitude:

I have seen a comparison of a postdoc with a piece of equipment that is replaced whenever there is a new model on the market—and nobody buys a second-hand Polymerase Chain Reaction machine. Despite what I feel are excellent skills, I feel like a used piece of equipment.

If there is indeed a crisis in the pipeline of STEM workers to drive future innovation, how do we explain outpourings like this — which, believe me, are not rare?

One obvious possible answer is that, political cant to the contrary, there is no “STEM crisis” at all. The most exhilarating recent smackdown of the hypothesized STEM worker shortage appeared in an August 2013 IEEE Spectrum article by Robert Charette, candidly titled “The STEM Crisis Is a Myth.” Bringing to bear equal parts data and withering sarcasm, Charette shows that, notwithstanding the billions of dollars that governments are pouring into boosting STEM education efforts, there are already far fewer STEM jobs than there are qualified candidates to fill them; that false warnings of a STEM crisis have been a recurring theme from political and business leaders since the end of World War II, not just in the U.S. but in other countries; and that STEM wages are not showing anything like the growth one would expect given a labor shortage. As is de rigueur on the Web, the article boils the matter down to a single, devastating infographic that compares the huge pool of STEM degree holders already working outside of their chosen field with the comparatively puny number of new jobs.

Given these data, why all the talk of a “STEM crisis”? In a peroration that stirred the few remaining embers of my youthful socialism, Charette finds the answer in a powerful combination of capitalist vested interests, seeking to ensure an oversupply of knowledge workers to keep wages down and the bottom line up:

No less an authority than Alan Greenspan, former chairman of the Federal Reserve, said as much when in 2007 he advocated boosting the number of skilled immigrants entering the United States so as to “suppress” the wages of their U.S. counterparts, which he considered too high.

(This particular quote prompted the amusing rejoinder from a former colleague of mine that “Maybe we need more H-1B visas to bring down economists’ pay.”)

Charette’s article did leave me wondering, though, whether part of the problem lies in the acronym “STEM” itself. Does the supply of jobs in enterprises as different as basic science, technology, engineering, and mathematics really respond uniformly to the same economic forces? The obvious answer is no, and it is implicit even in some of the evidence that Charette adduces to eviscerate the supposed STEM crisis, sensu lato. John Miano, writing at the end of 2011, notes that in the previous year the U.S. “lost 19,740 computer jobs, 107,200 engineering jobs, and 243,870 science jobs, according to the Bureau of Labor Statistics” — losses across the board, admittedly, but of very different magnitudes, with the broad category of “science jobs” clearly suffering the most. Do we have a “STEM crisis,” or only a “TEM crisis”?

A report from the American Institutes for Research weighs in firmly on the side of the latter. Titled “Higher Education Pays: But a Lot More for Some Graduates Than for Others,” this study — which should be read by, among others, any college-bound senior or parent thereof — looked at data from five states on the earnings of college graduates when they entered the labor force. The report concluded that, by this one metric at least, what students study is a lot more important than where they study; that targeted short-term Associate’s degrees can have a lot more immediate earnings power than Bachelor’s degrees; and, most provocatively, that “the ‘S’ in STEM . . . is oversold”: “Evidence does not suggest that graduates with degrees in Biology earn a wage premium—in fact, they often earn less than English majors.”

Less than English majors? Mon Dieu! [Full disclosure: the author of this post earned his first Bachelor’s in English, and has no regrets.]

Even within the separate divisions that make up the STEM rubric, things are far from homogeneous. An interesting and somewhat bewildering Congressional Research Service report from this past May on the science and engineering workforce projects that the largest number of new jobs through 2020 will come in areas like software development, civil engineering, and medical science, whereas other engineering sectors such as agricultural, mining, and marine engineering; scientific sectors such as materials science and space science; and mathematics generally will see the smallest number of new jobs created. A recent episode of NPR’s “Planet Money” series suggested that for students going to school today, petroleum engineering is the true road to riches. Clearly, where the “STEM crisis” is concerned, we need to be splitters, not lumpers.

All of this matters because, as Charette correctly notes, our rush to throw funding, energy, and rhetoric at a STEM crisis that is, at best, highly variable and, at worst, an illusion not only condemns today’s students to disappointment, but also risks crowding out support for other worthwhile areas of human experience — “the arts, literature, and history” — that have traditionally carried intangible but real benefits for both the soul and the workplace. In perhaps the most revealing extract from the entire article, Charette quotes the former chairman of Lockheed Martin, who wrote, in an op-ed for the Wall Street Journal:

In my position as CEO of a firm employing over 80,000 engineers, I can testify that most were excellent engineers . . . But the factor that most distinguished those who advanced in the organization was the ability to think broadly and read and write clearly.

Well! Perhaps that English degree was worth something after all. Somewhere, my late father, who paid the tuition, must be smiling.

Google’s “20% Time”: Goodbye to All That . . .

Google Refrigerator (Photo credit: Aray Chen)

A couple of weeks back, Atlantic Media’s business news outlet, Quartz, broke the story that Google’s famed “20% time,” under which employees were supposedly encouraged to take up to 20% of their time to work on things not tied to their main position, was “as good as dead.” This perk has, of course, become a leitmotif whenever the table talk gets around to the subject of innovation — “We should all be like Google; they let their employees use 20% of their time to work on whatever they want.” Now, it would appear — at least if the initial Quartz piece is to be believed (and it spawned a great deal of contemplative thumb-sucking and follow-ups in the tech press) — that Google is getting more interested in having people just buckle down and do their jobs.

For its part, Google quickly followed up with a more or less information-free announcement declaring that 20% time is “alive and well,” showing that, whatever the policy’s actual status, the company understands its PR value. But just what is — or was — 20% time? The most reliable, and least exciting, statement of it comes in Larry Page’s “Founders’ IPO Letter,” included with the company’s 2004 S-1 when it went public: “We encourage our employees, in addition to their regular projects, to spend 20% of their time working on what they think will most benefit Google.” Google software engineer Bharat Mediratta, in a much-linked “as-told-to” puff piece in the New York Times on “The Google Way,” said that 20% time is for employees “to work on something company-related that interests them personally.” If you parse it out, that’s not exactly what Page said (but then, Page was talking to shareholders . . .).

So far, so good; these statements, at least, root the policy’s rationale in the company’s ultimate goals. But the more broad-brush treatment in the tech press — which has variously described the policy as giving employees space to work on “creative side projects,” “their own independent projects,” “experimenting with their own ideas,” and even “anything they want” — summons up visions of an unstructured weekly play date divided among desultory coding, navel-gazing, skateboarding, and other pursuits (“Boss, it’s my 20% time; Tetris helps to unlock my creativity”). A lengthy, interesting, and ultimately amusing thread on Hacker News featured predictably libertarian and anti-corporate themes, rooting the fall from this creative programmers’ Eden in the infiltration of “MBA-think” and the profit motive into the “Don’t Be Evil” halls of the Googleplex.

The reality, as always, is more nuanced, and, perhaps a bit uncomfortably for Google, makes it look like just another company after all (albeit a hugely successful one). A follow-up piece in Quartz, drawn from the aforementioned Hacker News discussion, noted that, according to some Google engineers, although the 20% policy is still in place, it has become a losing proposition actually to use it, owing to increasing pressure on management to measure and rank employees based on their actual output, and to the lack of an incentive structure built specifically around the “20% projects.” The result, of course, is that “20% time” is sometimes referred to among Google staffers as “120% time” — that is, the small percentage of motivated and creative (and, perhaps, obsessive) staff willing to eat into their personal time “are free, as at any other job, to use their nights and weekends to do even more work.” Wally (of Dilbert fame) was apparently more right than he knew.

All of this seems perfectly natural. A thoughtful post on Ars Technica suggested that 20% time may be dead because Google “doesn’t need it any more” — that the company is no longer in start-up mode and now needs to focus on consolidating what it has. And, hard as it is to avoid a bit of schadenfreude at any tarnishing of the company’s cooler-than-thou image, the reality is that Google will continue to innovate and we will all continue to chase Google. But what drives that innovation won’t be “20% time” as such. It will be, as always, the efforts of a handful of driven, creative individuals willing to sacrifice a piece of their “personal lives” for a cool idea.

Do We Really Want Books to Be “Social”?

Readmill (Photo credit: Gustavo da Cunha Pimenta)

Well! We read this morning in a post tweeted by Joe Esposito that “E-Books Could Be The Future Of Social Media.” In that post, technology journalist Michael Grothaus, after a paragraph or two declaring his ostensibly neo-Luddite preference for print books over electronic, spends his remaining on-screen inches in fulsome praise of Readmill, a “small but growing app . . . that seems to have its pulse on the future of reading.” Readmill, explains Henrik Berggren, the app’s CEO (and I was not aware until now that apps had CEOs), aims to turn e-books into “niche social networks” brimming with real-time interactions among readers — and, of course, to pipe all of the data on those interactions back to authors and publishers to help them “make more informed decisions.”

I’ll let you read the post for yourself; suffice to say, a number of things in it struck me as unintentionally funny. Start with the big banner image at the top: It shows a line of five people, sitting on a couch in an old-fashioned bricks-and-mortar bookstore, each immersed in a print book. There is not an e-book in sight. All of them look thoroughly absorbed in what they’re doing, and while none of them is smiling, a sense of vast contentment radiates from the photo.

Note that although this is, at least in the broadest sense, a group of people, they are engaged in a fundamentally solitary activity. And nothing is “broken” here — the print format’s support of sustained concentration, its momentary banishment of distraction, is a feature, not a bug.

It seems to me that this is likely to be true irrespective of the platform. In a moment of pure hubris, Berggren, in his interview with Grothaus, asserts that e-reader platforms like Amazon’s Kindle are “doing it in the wrong way,” kind of a remarkable statement given that Amazon owns 45% of the e-book market and the Kindle is what got them there. While there are many reasons for that success, I think you could make an argument that part of it lies simply in the characteristics of the device itself, which (except perhaps in the case of the Kindle Fire) seems designed to be as book-like and distraction-free as possible. That, at least, is a big reason it’s worked for me. With my Kindle Paperwhite, I don’t just skim, I read.

There are larger social issues here than Amazon’s bottom line or the promise of apps like Readmill when we think about what the rise of screen media has already done to our ability to engage with books as intellectual objects. Cory Doctorow memorably referred to the Internet as “an ecosystem of distraction technologies”; what happens when we start to embed those technologies into activities, like reading, for which a lack of distraction is an essential part of the experience? We already have some idea, given the differing ways we interact with HTML pages versus PDFs versus print material. And an interesting survey earlier this year, focusing on the reading habits of children in the U.K., found that, yes, more children are now reading on electronic devices than in print, and they prefer it that way — but that “those who read only on screen are . . . three times less likely to enjoy reading (12% compared to 51%)” than those who read in print.

Certainly the world is changing, but it does seem worth thinking about these data and their implications for what the thing that we call “reading” actually is, and is becoming.

Also unintentionally funny in Grothaus’s article is his bout of hand-wringing over the implications of Readmill’s proposed business model. That model, like so many we are seeing these days, seems to revolve around selling data on user interactions back to interested parties — in this case, book publishers. Grothaus is worried: Won’t this mean that publishers will start to use these data to lean on authors and dictate their writing styles to boost sales? Berggren offers this glib response: “You can paint a very dystopian future where publishers say, ‘Oh, people are just skipping this chapter. You can’t write like this anymore.’ However, I think that’s unlikely to happen.”

Well, that’s a relief.

(Nota bene: Berggren’s response, in addition to being a dodge, shows how successfully we’ve managed to cheapen the term “dystopian.” You want dystopias, Henrik? I’ll show you some dystopias.)