The Deflationist: How Paul Krugman found politics by Larissa MacFarquhar. In addition to explaining the evolution of the political Krugman, it has a great section on how economics works. Here's a teaser:
The most successful paper Krugman ever wrote was about target zones, and it was completely wrong. In the years before Europe adopted the euro, it was thought that establishing something between floating exchange rates and fixed ones—a “target zone” within which a currency would be allowed to float—might reap some of the advantages of each. He estimates that by the time the paper was officially published, in 1991, some hundred and fifty derivative papers had already appeared. “Empirically, it doesn’t work at all,” Krugman says. “People loved it as an academic thing, but it had some very strong predictions about interest rates inside target zones. Those predictions all turned out to be wrong. But nobody attacked me for that. I was showing that if target zones worked the way that people say they’re supposed to work, then this is how it would play out.”
See also this commentary by Thomas Levenson, picking up from MacFarquhar's article on the nature of academic economics:
The point being that economists, for good reasons, often need to rebuild a structure of known facts and ideas — not because they could not know these things by other means (as a good cartographic historian would) but because for economists to talk to each other, they need to express the objects of their curiosity in a form that their colleagues can understand. So far, so good — but such mutual comprehensibility can come, as MacFarquhar documents Krugman discovering, at the expense of insights available for the taking. This is what I mean when I say, as I have on occasion, that economics is an aspiring, or simply a young, discipline.
That is: economics as practiced in the academy is in possession, its practitioners believe (and I mostly do too, not that my opinion matters), of a body of methods and a growing number of results that suggest it is a powerful way of analyzing certain kinds of human behavior and of making useful predictions about some things. But it is far from as comprehensive in its explanatory power as some of its practitioners — and many more in the economic pundit class — would have one believe.
Steven Levy in Wired: How Google's Algorithm Rules The Web. The algorithm gets tweaked hundreds of times a year, using "contextual signals" culled from search queries:
Take, for instance, the way Google’s engine learns which words are synonyms. “We discovered a nifty thing very early on,” Singhal says. “People change words in their queries. So someone would say, ‘pictures of dogs,’ and then they’d say, ‘pictures of puppies.’ So that told us that maybe ‘dogs’ and ‘puppies’ were interchangeable. We also learned that when you boil water, it’s hot water. We were relearning semantics from humans, and that was a great advance.”
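The reformulation signal Singhal describes can be sketched in a few lines: when two consecutive queries in a session differ by exactly one word, count that word pair as a candidate synonym substitution. This is only a toy illustration of the idea — the function name, the session format, and the query log are all hypothetical, and Google's actual pipeline is not public:

```python
from collections import Counter

def synonym_candidates(query_log):
    """Scan consecutive queries within each session; when two queries
    differ by exactly one word, count that word pair as a candidate
    synonym substitution."""
    counts = Counter()
    for session in query_log:
        for prev, curr in zip(session, session[1:]):
            a, b = prev.split(), curr.split()
            if len(a) == len(b):
                diffs = [(x, y) for x, y in zip(a, b) if x != y]
                if len(diffs) == 1:
                    counts[tuple(sorted(diffs[0]))] += 1
    return counts

# Toy sessions: users refining the same query, as in the article.
log = [["pictures of dogs", "pictures of puppies"],
       ["boil water", "hot water"],
       ["pictures of dogs", "pictures of puppies"]]
print(synonym_candidates(log))
# → Counter({('dogs', 'puppies'): 2, ('boil', 'hot'): 1})
```

At web scale, high-count pairs like ("dogs", "puppies") would surface as probable synonyms — which is also exactly how the "hot dog" confusion in the next paragraph arises.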
But there were obstacles. Google’s synonym system understood that a dog was similar to a puppy and that boiling water was hot. But it also concluded that a hot dog was the same as a boiling puppy. The problem was fixed in late 2002 by a breakthrough based on philosopher Ludwig Wittgenstein’s theories about how words are defined by context. As Google crawled and archived billions of documents and Web pages, it analyzed what words were close to each other. “Hot dog” would be found in searches that also contained “bread” and “mustard” and “baseball games” — not poached pooches. That helped the algorithm understand what “hot dog” — and millions of other terms — meant. “Today, if you type ‘Gandhi bio,’ we know that bio means biography,” Singhal says. “And if you type ‘bio warfare,’ it means biological.”
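The context fix described above — meaning recovered from the company a phrase keeps — can be illustrated with a crude co-occurrence profile. This is a minimal sketch of the general idea, not Google's method; the corpus, function, and names are invented for the example:

```python
from collections import Counter

def context_profile(corpus, phrase):
    """Count the words appearing in documents that contain the phrase —
    a toy stand-in for the co-occurrence statistics the article
    describes Google gathering over billions of pages."""
    profile = Counter()
    phrase_words = set(phrase.split())
    for doc in corpus:
        if phrase in doc:
            profile.update(w for w in doc.split() if w not in phrase_words)
    return profile

# Hypothetical toy corpus standing in for crawled pages.
corpus = [
    "hot dog with mustard on bread at baseball games",
    "hot dog stand sells mustard and bread",
    "boiling water makes good tea",
    "a puppy is a young dog",
]
food_ctx = context_profile(corpus, "hot dog")
# "hot dog" keeps company with "mustard" and "bread", not "puppy":
print(food_ctx.most_common(2))
# → [('mustard', 2), ('bread', 2)]
```

Because "hot dog" co-occurs with mustard and bread rather than with puppy, the phrase profile separates the food sense from the word-by-word synonym substitution that produced "boiling puppy."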
Sunday, February 28, 2010
Links ...