


I'm not sure I see your point here.

For example, the example of drawing a chicken doesn't really apply here, since society still has the ability to do it. The fact that neither you nor I can do it just reflects a higher degree of specialization, and that is definitely progress.

As for the extra study to learn what's known, it simply takes longer now because more is known, and it absolutely is necessary. You need to learn what has already been done to avoid wasting resources rediscovering it and to avoid repeating mistakes (which is what Krugman was complaining about).

Dark ages refer to forgetting on a social level: an entire civilization no longer being able to do things it used to be able to do and, most importantly, would still want to do. Most of what we, at the social level, have forgotten are things we stopped doing because we found better techniques to accomplish the same goal. But the fact that fewer people may have a particular skill, like being able to farm or change the spark plugs in their car, has absolutely nothing whatsoever to do with whether or not we are in a new Dark Age.

And I fail to see how a real dark age is ever good.

Not one of your better efforts, Nick.

Except for the part about Say's Law. Now just do the same for Ricardian Equivalence, the Quantity Theory of Money, the Efficient Markets Hypothesis and Rational Expectations. Of course then you'll be closer to an MMTer... and almost out of the dark ages!! ;-)

"For example, the example of drawing a Chicken doesn't really apply here since society still has the ability to do it."

Seconded. Plus I have the ability to Google it and get the instructions.

Nick is right though - my Grandfather could do it. Having watched it, I'm not sure I have the stomach for it...

Nick, I'd highly recommend Pierre Bayard's "How to talk about books you haven't read" which actually relates to both this post and your previous one. Picking up a page more or less at random, he writes:

"Faced with a quantity of books so vast that nearly all of them must remain unknown, how can we escape the conclusion that even a life-time of reading is utterly in vain?... Reading is first and foremost non-reading. Even in the case of the most passionate lifelong readers, the act of picking up and opening a book masks the countergesture that occurs at the same time: the involuntary act of *not* picking up and *not* opening all the other books in the universe."

So "the option value of forgetting"?

Where I disagree with you is in academic paradigm shifts, which I think involve a non-trivial amount of rent-seeking behaviour, because what should be a cost - thinking up and writing up new ideas - becomes a personal benefit - highly ranked publications.


The critical point is why *collective* forgetting is better than *individual* forgetting. If I've forgotten, say, everything I ever knew about theory X, why is it a good thing that you've forgotten too?

I never used to understand how Kuhn could have thought what he did about scientific revolutions.  That's because my understanding of science was grounded in my background in physics.  We didn't throw out Newtonian mechanics in favour of relativity and quantum mechanics because of some random fad or cultural shift.  We did it because careful, replicated experiments revealed that a highly successful and accurate theory was wrong in the limit of very extreme (for humans) conditions.  So I thought Kuhn was ridiculous. 

But if I were to generalize to all knowledge from the field of economics alone, I can see reaching his conclusions.  That doesn't mean that that is a good thing to do.  It just means that economists don't know very much of anything about anything important.  For all of the dismay of Krugman et al about the Dark Ages, they can't point to anything that proves they are right.  All of their theories (and those of their opponents) are backed up by a handful of noisy independent observations (for how many business cycles do we have sort-of-accurate economic data?) and there are no experiments against which to test falsifiable propositions.  That ain't science.  And it doesn't seem to me to be a great model of "knowledge" either.  So, yup, there may be little harm in throwing away random bits of economic memory.  But it's no argument for throwing away knowledge.

It's kind of funny that you bring up Say's Law, since people forgot what it actually was. It can't possibly be a good thing when "forgetting" occurs because a vandal like Keynes simply produced a strawman that appealed to the economists of the time and nearly eliminated the group of economists who were responsible for creating the discipline in the first place.

I don't think you would see this phenomenon in any hard science. What physicist would think that forgetting Maxwell's equations would be a good thing? Things are proven true or false based on evidence, and the things that are proven true should never be forgotten. All your post shows is that macroeconomics is a pseudoscience that is prone to political fads.

Hi Nick, in contrast to virtually all the commentators here, I think you're correct.

In Gerd Gigerenzer's "Gut Feelings" (a primary source for Malcolm Gladwell's "Blink") he advances the hypothesis that the ability to forget unimportant data confers an evolutionary advantage. That's probably pretty shocking, so I'll repeat it: there is an evolutionary advantage in being able to forget stuff we deem unimportant.

Gigerenzer provides data showing that people make better decisions not when they have a minimum or maximum amount of information, but when they have a medium amount. In other words, it's a Laffer curve.

As an example, he cites a study in which a higher percentage of German kids than American kids correctly guessed that Detroit is more populous than Milwaukee. The German kids probably chose Detroit because they'd never heard of Milwaukee, so using the "recognition heuristic" (if I've heard of it, it must be big / important / a trustable brand name) they guessed Detroit. American kids have an overabundance of information about both cities (Milwaukee has a baseball team, I hear Detroit is shrinking, isn't Eminem from Detroit?) and are less able to throw out the unimportant details when making their decision.

He provides a similar example with predicting the results of basketball games. Given a few pieces of information (win-loss record; score at the half), people are more likely to predict the winner correctly than if they're also provided with a flood of other details (who's on a streak; injured players; the teams' records against each other; which is the home team; etc.).

We covered the book in our business book club at work; I've got a book summary here if anyone wants to check it out.

- - - - - - -

Since unimportant details get in the way of making the correct choices, the ability to forget details we deem unimportant should therefore confer an evolutionary advantage. An animal that rigidly remembers a few rules for avoiding danger (having forgotten a lot of other details from its past experience) will probably be better off than one that's wading through an ocean of data when making life-or-death decisions.

As such, for individuals, there is probably an optimal amount of forgetting, and we probably forget roughly the appropriate amount.

On the topic of chickens, I think there's a non sequitur in that the necessary skill sets are probably different between modern urbanites and past ruralites. I need to be able to read, but animal husbandry is irrelevant for my lifestyle (it's an unimportant, forgettable detail). In contrast, if we go back several centuries, my grandparents needed to remember rules about animal husbandry, but literacy was probably a forgettable skill (if they were ever taught) as it didn't influence their livelihood.

With respect to Dark Ages, I think it would be a stretch to say that Dark Ages are good (though the fact that a society didn't maintain knowledge from prior generations might suggest their situation was such that they had to focus almost all their resources on immediate survival). Fortunately the Dark Ages were only a Western European phenomenon -- scholarship was comparatively vibrant in the Byzantine Empire during that period, and I imagine other civilizations kept puttering along also.

"the ability to forget unimportant data confers an evolutionary advantage. That's probably pretty shocking, so I'll repeat it: there is an evolutionary advantage in being able to forget stuff we deem unimportant."

This does not, in any way, imply Nick is in any way correct. Frances hit the nail on the head:

"The critical point is why *collective* forgetting is better than *individual* forgetting. If I've forgotten, say, everything I ever knew about theory X, why is it a good thing that you've forgotten too?"

and furthermore, specialization is a good thing. An individual's ability to forget something unimportant has absolutely no relation to a society concentrating knowledge of how to do something in a specialized few so that the rest of us can "forget" it (as in never expend time and resources learning to do it in the first place).

As the stock of knowledge increases over time, we have two strategies if we want to learn it all collectively. We can each spend longer learning; or we can each specialise more. The first strategy eventually hits the limit of finite human lives. We spend our most productive years learning, then eventually spend all our years learning, and so can do nothing. The second strategy, where each individual specialises more and more narrowly, eventually hits the limit of individual access to our collective information. In the limit, none of us has any knowledge in common with anybody else. Each of us becomes like one book in Borges' Library of Babel.

We have to decide what not to learn, both individually, and collectively.

Nick: Isn't your argument depending on the fact that the stock of knowledge grows at a rate faster than population growth?

"The second strategy, where each individual specialises more and more narrowly, eventually hits the limit of individual access to our collective information."

No, the point here is that I can consume the output of the activity of chicken drawing (whatever that is) without ever knowing how to do it, who does it or even what it is and that it happens.

I need no access to any collective information to eat the resulting chicken production. There is no limit to this.

Or put it differently:

K - Total amount of knowledge
a - # of things a person knows
N - number of people.

Then the average number of people who know something should be: (N*a)/K.

What happens over time is going to depend on the growth rates of N, a and K.
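Mike's measure can be sketched in a few lines of code. The growth rates and initial values below are made up purely for illustration; the point is just that redundancy (N*a)/K rises or falls with the sign of gN + ga - gK:

```python
# A minimal sketch of Mike's redundancy measure: the average number of
# people who know any given fact is N*a/K. Under exponential growth at
# (hypothetical) rates gN, ga, gK, redundancy rises or falls depending
# on whether gN + ga exceeds gK.
import math

def redundancy(t, N0=1e6, a0=100.0, K0=1e7, gN=0.01, ga=0.005, gK=0.02):
    """Average number of people knowing any one piece of knowledge at time t."""
    N = N0 * math.exp(gN * t)   # population
    a = a0 * math.exp(ga * t)   # things each person knows
    K = K0 * math.exp(gK * t)   # total stock of knowledge
    return N * a / K

# With gK > gN + ga, each fact is known by fewer and fewer people:
print(redundancy(0))    # 10.0
print(redundancy(100))  # ~6.07
```

If knowledge outgrows the product of population and individual learning, the average fact eventually has fewer than one knower: it is forgotten collectively.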

Not learning is not equivalent to forgetting. Learning can be costly. Think, e.g., of word processing programs. WordPerfect 5.1 was the best - I could do everything I needed to do super-quickly. Since then I've spent countless hours learning variants on Word, with minimal gains in productivity (and sometimes productivity losses; I can't figure out how to program superscript and subscript macros in this new version of Word). And some things we learn make us worse off; I would give examples but other readers would scream TMI (too much information).

There's much to be said for not learning.

Your argument for forgetting (ignoring the collective/individual issue) is like my argument for throwing away one item of clothing whenever I buy a new one - unless I do that, eventually there won't be enough room in my closet. (Yes, this is coming from a woman who yesterday wore a dress she's had since she was an undergrad.) It works for clothes because clothes eventually wear out, get stained, don't fit, or go out of fashion.

But what's the argument for throwing away ideas? Is it that new ones are better - e.g., you'd throw out an old heavy oilskin coat because Gore-Tex is lighter, more waterproof and generally superior?

Or is it that new ones are just more fashionable?

Someone needs to continue what Mike started at 10.36.

Let K be the stock of knowledge, N be the population, S be an index of the degree of specialisation. So S=K/N.

Let I be investment in new knowledge. So I=dK/dt.

Assume we spend a fraction L of our lives learning existing knowledge, and the remaining fraction (1-L) adding to the stock of knowledge.

For a given N and S, L must be increasing in K. So if I is positive, L must be increasing over time, which means (1-L) is falling over time, which means I is falling over time. I think I asymptotes towards zero. We approach a stationary state.
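The asymptotic story can be illustrated with a toy simulation. This is only a sketch with one made-up functional form (L = min(1, K/Kbar), where Kbar is the stock a lifetime of learning can just cover) and arbitrary parameter values, not anything stated in the comment itself:

```python
# A toy numerical sketch of the argument above (all parameters invented):
# the fraction of life spent learning is L = min(1, K/Kbar), investment
# in new knowledge is I = c*(1-L), and dK/dt = I. As K grows, L rises
# toward 1, so I falls toward zero and K approaches the stationary
# state Kbar.
def simulate(K0=10.0, Kbar=100.0, c=5.0, dt=0.1, steps=2000):
    K = K0
    path = []
    for _ in range(steps):
        L = min(1.0, K / Kbar)   # time needed to learn the existing stock
        I = c * (1.0 - L)        # what's left over for creating new knowledge
        K += I * dt              # Euler step: dK/dt = I
        path.append((K, L, I))
    return path

path = simulate()
print(path[0][2], path[-1][2])  # I starts high and asymptotes toward zero
```

Whatever the exact functional form, as long as L is increasing in K the same qualitative result holds: net additions to knowledge choke off as the maintenance burden grows.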

What happens when we add population growth?

There's a second question about the production function of new knowledge. I = F((1-L), N, and what else?).

I know there's some sort of literature on this. IIRC, there are two important questions:

1. If we double N, holding (1-L), S and K constant, do we double I? Or are there diminishing returns due to researchers overworking the common property resource of new frontiers (the race to invent first)?

2. If we increase K, holding N, (1-L) and S constant, does I fall or increase? In favour of I falling, we can argue that all the easy ideas have already been discovered. In favour of I rising, we can argue that existing knowledge makes it easier to discover new ideas, because it increases your productivity in everything.

A third question would be: what are the costs of specialisation?

Frances: "But what's the argument for throwing away ideas?"

I think your above emphasis on the costs of learning is the answer to your question. If learning were free, we would all learn all existing ideas. But with finite lives, and costly learning, keeping ideas in the existing stock of knowledge means every new generation has to learn them. It is as if the stock of ideas depreciates, just as capital depreciates. The greater the stock of ideas/capital, the more we have to spend to offset depreciation. If K gets too big, the whole of GDP is needed to cover depreciation of K.
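The depreciation analogy can be made concrete with a throwaway calculation (numbers invented for illustration): if re-teaching the stock costs a fraction delta of K per generation, net investment in new knowledge is gross investment minus delta*K, and it hits zero once K reaches gross/delta:

```python
# A sketch of the depreciation analogy, with made-up numbers: if teaching
# the next generation costs a fraction delta of the stock K each period,
# the net addition to knowledge is gross investment minus delta*K. Once
# K reaches gross/delta, the whole learning effort goes to maintenance.
def net_investment(gross, K, delta=0.05):
    """Gross investment in ideas minus the cost of re-teaching the stock."""
    return gross - delta * K

print(net_investment(10.0, 100.0))  # 5.0: half the effort maintains the stock
print(net_investment(10.0, 200.0))  # 0.0: the stationary state, K = gross/delta
```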

We don't throw away ideas. We just stop teaching and learning them. But that has the same effect.

(And God yes, I hate having to learn new versions of Word. Especially when each new version seems to get worse. WordPerfect 20 years ago did everything I wanted. Bring on the Luddites, at least in that area!)


Is that the best way of modelling the relationship? Wouldn't there be overlap/redundancy?

Adam: "I need no access to any collective information to eat the resulting chicken production. There is not limit to this."

I'm not sure about that. Are there any costs to specialisation of knowledge? This is the same as the Hayekian question of the Use of Knowledge in Society. If we had a perfect price system, a perfect market system, a perfectly costless way of aggregating at the societal level each individual's local knowledge, then there is no cost of specialisation. But I don't think that's true. Otherwise, as Coase asked, why do firms exist?

Also, why don't we all specialise completely? Why do generalists exist? Why do some books in the library of knowledge get read by many people, and others get read by none? Isn't that duplication of individual knowledge a waste?

There are benefits to specialisation, sure. But there must also be costs.

edeast: yes, there is overlap/redundancy. (I was writing my response to Adam when you were posting). S=K/N seems like a good simplification to start with, if we hold S constant. But as soon as we start asking "What is the optimal S?" we have to address your question. Why is there overlap?

Why is there overlap? Because we don't want to end up like Archimedes. more later

All: your comments, as always on this blog, are good ones. They help me correct and clarify my ideas.

Let me try to re-state:

Learning is individually costly. Specialisation of learning is also costly (because social aggregation of relevant individual knowledge to tackle a problem is costly). Together that implies that learning is costly at the societal level.

Lives are finite. The greater society's stock of knowledge, the greater the cost of maintaining that stock of knowledge by teaching it to the next generation. The more we spend on preventing depreciation of existing knowledge, the less of gross investment is available for net investment in new knowledge.

Because maintaining knowledge (preventing depreciation) is costly, it will sometimes be better to let some knowledge be forgotten, even when it is potentially useful knowledge. (Sometimes it is cheaper to build new machines than to maintain old machines, so net investment is higher for a given gross investment, if we let some old machines rust.)

Is the equilibrium allocation of resources between maintaining old knowledge and building new knowledge socially optimal?

Don’t mind me, I shouldn’t have said: more later, I’m at work and can’t do it justice.
But cephalization, overlapping probability models, and the catch all phrase reduction of entropy.
As to new knowledge vs old, consider ‘cargo cult’, and whether your framework has traction on the real world. Now this doesn’t get at your argument, of potentially useful information being foregone.

Not all knowledge is equally important. Metaknowledge increases in importance as knowledge increases. Forgetting (or not learning) details suitable for specialists is the reasonable response to their proliferation. But even if no one has the details, it is still valuable to have them indexed in the library, recorded for posterity, built into complex computer models augmenting our minds, or easily reproducible if that is the most compact form. One of the costs of specialization is that the information one specialist has may not be applicable, or may conflict with the information of other, possibly relevant, specialists. Forgetting or not learning on a large scale is probably an indication that we never really learned it, or learned it correctly, in the first place, for if we had a correct model we would not have to revert to the past. It is how we make progress, but progress is an indication we weren't where we believed we were in the first place.

Nick, I'm interested in the contrast between this conversation and my recent one (where WCIers demonstrated their truly impressive knowledge of classical music) on golden classics. There seemed to be a lot of consensus on the "golden classics" post that survivorship bias and changes in the relative costs of different types of music meant that the great composers *were* in fact the dead composers, so maintaining old knowledge was a good idea, and building new knowledge relatively less important.

Whether new ideas are better than old ideas - so whether forgetting is better than not learning in the first place - is an empirical question.

But here is a thought for you: I'm messing around trying to finish up my paper looking at teaching etc. of Ontario economics professors - more on that on the blog in a week or two. So I thought, just for fun, I'd run a regression looking at students' evaluations of teaching quality (from www.ratemyprofessors.com) as a function of the professor's age - or, more precisely, the number of years since the professor received his or her PhD. With controls for a whole bunch of other things (which university the prof is teaching at, whether he/she's full, assistant or associate, # publications, SSHRC grants, etc.) it turns out that each extra year since PhD decreases the "clarity" rating on a scale of 1 to 5 by 0.02 (p=0.000, which is as good as it gets). Not very much, but it means that 10 years on, your scores are down by 0.2; twenty years on, they're down by 0.4 (if you add in a control for whether or not the professor is hot, however, the age/teaching quality relationship is a bit weaker).
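For readers curious what such a regression looks like mechanically, here is a sketch on synthetic data. Everything below is simulated (seeded with the reported -0.02 slope), not the actual ratemyprofessors.com sample, and the control-free OLS is a simplification of the regression described:

```python
# A hypothetical re-creation of the regression described above, on
# synthetic data. Clarity ratings are simulated with the reported
# slope of -0.02 plus noise, then recovered by ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
years_since_phd = rng.uniform(0, 40, size=500)
clarity = 4.0 - 0.02 * years_since_phd + rng.normal(0, 0.3, size=500)

# Regress clarity on an intercept and years since PhD:
X = np.column_stack([np.ones_like(years_since_phd), years_since_phd])
beta, *_ = np.linalg.lstsq(X, clarity, rcond=None)
print(beta[1])  # slope close to -0.02 on this simulated sample
```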

I'm not going to pile on, Nick - you've collected enough flak for one post!

But I was amused to divine the link between this post and your previous one; it was Mathew Klippenstein's mention of Gigerenzer that twigged me, for by coincidence the Psy-Fi blog recently had a post on satisficing that mentioned him (http://www.psyfitec.com/2010/07/satisficing-stockpicking.html)

Why should I make a decision on partial information when more is available? Well obviously, if the cost of collecting more information exceeds the benefit. And if either the cost or the benefit is stochastic, as is nearly always the case? Then I have a real option. Likewise with forgetting.

Nietzsche made this exact point in "The Uses and Disadvantages of History for Life."

Sometimes we forget that there's an ethical value to not knowing something and having to figure it out for ourselves. We forget sometimes that knowledge isn't transferable; it must be created/discovered anew in the mind of the student in each instance.

Archimedes comment: what good is book learning if you can't defend yourself? Or, in the context of the post: overspecializing within a society, without including the tail risk of societal collapse, causes the individual to fail. So the overlapping probability distributions are the individual's mental model of the world, which should include a probability of societal collapse, or market failure (requiring redundancy).

Great post.

I couldn't forget this if I tried:


We need more cowbell

And more hyperlinks

And Greg has the answer insofar as Economic darkness is concerned

Tangentially related on the value and cost of forgetting - the movie "Eternal sunshine of the spotless mind."

"We are always in a Dark Age."

And every colour is red.

And every note is C

And every tree is a maple

And so on.

Pretending words and phrases don't have the meanings they have doesn't strike me as a useful endeavour.

Also, I don't get the bit about the Library of Babel. Take pop music on the internet. At this point, the internet is practically a complete library of every pop song ever written. As the number of songs on the internet has expanded from 0 to effectively infinity, it has become easier, not harder, for me to find what I'm looking for. So how would reducing the number of songs available in this library help anyone or make it more library-like?

The bit about Funes doesn't make any sense either. Computers can have perfect memory of all the details of things they encounter, yet this is irrelevant (orthogonal, you might say) to their ability to make classifications. In fact, to the extent that there's a correlation, it works the other way - the more details available, the more axes that objects can be classified along and the more elaborate the hierarchies of classification can become. A person who can remember whether an animal they encountered has 6 or 8 legs is in a better position to classify it, not a worse one.

It sounds like Borges could benefit from taking some Computer/Information Science Courses.

Ian Lippert: "I don't think you would see this phenomenon in any hard science. What physicist would think that forgetting Maxwell's equations would be a good thing? Things are proven true or false based on evidence and the things that are proven true should never be forgotten."

Who remembers phlogiston? People think of it as bad science, but it wasn't. It got superseded by a more detailed and accurate theory, that's all. Science does not prove things true. Scientists work hard to disprove things.

Dark Ages are indeed marked by lost learning. But the lost learning is a symptom, not a cause. The learning is lost because of a breakdown in society.

"there is an evolutionary advantage in being able to forget stuff we deem unimportant."

The point here, as others have mentioned, is what counts as unimportant. There is no evolutionary advantage to forgetting how to identify non-poisonous foods, or how to start a fire. In that vein, Krugman wasn't referring to some unimportant and irrelevant knowledge, but in fact to an important lesson in macro. The point was, someone was reinventing the square wheel, after someone else had already determined that a round wheel worked better. These were people at the centre of the profession with some policy influence, not your crazy uncle who read a Ron Paul pamphlet.

And, the forgetting wasn't helpful when it led down a well worn path that led nowhere.

Sure, I can't know everything. And sure, the world changes, and knowledge changes, and the old ways are forgotten. BUT, the key challenge is determining what information to forget and what information to keep. And asking the right person/doing the right research when going down an unfamiliar road.

We don't have to individually remember everything, but because of the increase in the amount of information in this world, it is important to be able to learn about an area that you are treading in, or you will lose the benefit of prior generations of thought, and may not advance any further.

One other thing - the analogy to capital equipment is a little bit irrelevant. If you want to analogize it, maybe to land. Sure, like land, maintaining knowledge can have a cost. However, unlike a machine, land can't be replaced so easily (like knowledge), but can be relandscaped, refurbished, and altered. With knowledge, you may restate it, you may revise it, but it is not so much a wholesale replacement. I would maybe take the whole attempt to analogize knowledge to property out to the woodshed and kill and draw it. It just leads to so many problems.

Finally, I recently learned how to kill, defeather, and draw a chicken. I'll tell you where people still need to have that knowledge - the developing world where people do not have supermarkets and refrigeration. Much easier to buy a live chicken and then do the job before eating it. I think that everyone that eats meat should do something like that at least once - you learn something about what it means to eat another animal. But, that is another discussion.

I'm not sure how much I want to argue against those of you who disagree with the thesis of my post. I knew I was quite possibly overstating the case.

Just some minor points.

I happen to strongly agree with Paul Krugman's post. I think I said as much at the time. I think that understanding Say's Law (or what has come to be thought of as Say's Law, though whether Say believed it is another question) is very important. And it's bad it's been forgotten. But it is costly to remember everything. And ex post we will sometimes make mistakes and forget the wrong things, rather than less important things which aren't worth remembering. Forgetting Say's Law (as some did) was one of those mistakes.

Yes, I was certainly stretching language to say that we're always in a Dark Age. Obviously we aren't. It was just a way of emphasising that we are always forgetting, and need to forget. What's different about a true Dark Age is that more is forgotten (and less new stuff is learned). But Min's point was that the forgetting of the Dark Ages was maybe more a symptom than a cause. When society collapses, it becomes more costly to remember, so we forget more, and knowledge regresses. So we should bewail the cause, not the symptom. But at the same time, we should wonder whether the loss of past knowledge opened the way for new and in some ways better ways of thinking. Just as when students stopped spending so much time learning Latin and Greek?

It's interesting to argue whether knowledge is more like capital or land in terms of cost of maintenance or replacement. (It's obviously very different from both capital and land in being a non-rival good.) I say it's more like capital, in that you need to maintain it, and you can replace it. We rediscovered cement after the Romans' knowledge was lost. In some cases we can rediscover it more easily than the original discovery; historical knowledge is an exception. Now, land needs to be cleared, drained, fertilised, etc., and this can be lost if we don't maintain it. But then it becomes land plus capital. It's no longer "land" in the Ricardian sense.

Remember that the Library of Babel must also contain a book that is a guide to the Library, telling you where to find other books in the library. And it also contains millions of false guides to the Library. Others have drawn the analogy between the Library of Babel and the internet! To my mind, it shows that what matters is not just the existence of information (because the Library contains everything), but our ability to find the particular information that's important.

What an inane post.

Who chooses what we forget? The Texas school board, apparently, in the US.

This is so unilluminating and non-defensible on so many levels I'd have rather read another boring paper on trade.

Great post. Apparently a taste for Borges is one more thing we have in common. Prior to 2008 I assumed that all the old "pushing on a string" thinking had gone into the dustbin of history. Unfortunately, it doesn't seem that we destroyed quite enough knowledge.

Just seemed so ... relevant.


(linked out in the signature as well)

One possibly helpful comment - what do you mean when you talk about "knowing" something or "learning" something or "forgetting" something? Scanning through the comments, the sense seems to drift from "Knowledge X is an artifact that was created and has been destroyed" to "Knowledge X was a mystery learned by a few initiates after long study; however all the old initiates died without training any new ones." What would happen if you used other models of the social processes involved?

Interesting article on tech progress: http://www.theatlantic.com/science/archive/2010/08/whats-wrong-with-x-is-dead/61663/
