Caplan posted a link to this paper on the "Power of Personality" by Roberts et al. It includes some valuable passages on the history of psychometrics:
... Walter Mischel (1968) argued that personality traits had limited utility in predicting behavior because their correlational upper limit appeared to be about .30. Subsequently, this .30 value became derided as the "personality coefficient." Two conclusions were inferred from this argument. First, personality traits have little predictive validity. Second, if personality traits do not predict much, then other factors, such as the situation, must be responsible for the vast amounts of variance that are left unaccounted for. The idea that personality traits are the validity weaklings of the predictive panoply has been reiterated in unmitigated form to this day (e.g., Bandura, 1999; Lewis, 2001; Paul, 2004; Ross & Nisbett, 1991). In fact, this position is so widely accepted that personality psychologists often apologize for correlations in the range of .20 to .30 (e.g., Bornstein, 1999).

Should personality psychologists be apologetic for their modest validity coefficients? Apparently not, according to Meyer and his colleagues (Meyer et al., 2001), who did psychological science a service by tabling the effect sizes for a wide variety of psychological investigations and placing them side-by-side with comparable effect sizes from medicine and everyday life. These investigators made several important points. First, the modal effect size on a correlational scale for psychology as a whole is between .10 and .40, including that seen in experimental investigations (see also Hemphill, 2003). It appears that the .30 barrier applies to most phenomena in psychology and not just to those in the realm of personality psychology. Second, the very largest effects for any variables in psychology are in the .50 to .60 range, and these are quite rare (e.g., the effect of increasing age on declining speed of information processing in adults). Third, effect sizes for assessment measures and therapeutic interventions in psychology are similar to those found in medicine. It is sobering to see that the effect sizes for many medical interventions—like consuming aspirin to treat heart disease or using chemotherapy to treat breast cancer—translate into correlations of .02 or .03. Taken together, the data presented by Meyer and colleagues make clear that our standards for effect sizes need to be established in light of what is typical for psychology and for other fields concerned with human functioning.
The paper goes on to make an important point about the cumulative power of small effects:
... Moreover, when attempting to predict these critical life outcomes, even relatively small effects can be important because of their pragmatic effects and because of their cumulative effects across a person's life (Abelson, 1985; Funder, 2004; Rosenthal, 1990). In terms of practicality, the .03 association between taking aspirin and reducing heart attacks provides an excellent example. In one study, this surprisingly small association resulted in 85 fewer heart attacks among the patients of 10,845 physicians (Rosenthal, 2000).
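To make the arithmetic concrete, here is a minimal sketch of how a correlation (phi coefficient) around .03 can coexist with a large absolute number of events avoided. The counts are illustrative, chosen only to match the rough magnitudes in the quote (two arms of about 11,000 with roughly 85 fewer heart attacks in the treated arm), not the study's actual figures:

```python
from math import sqrt

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 table laid out as:
                 event   no event
        group 1    a        b
        group 2    c        d
    """
    return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative counts (not the study's exact figures): ~11,000 per arm,
# with about 85 fewer events in the treated arm.
treated_events, treated_n = 105, 11000
control_events, control_n = 190, 11000

r = phi_coefficient(treated_events, treated_n - treated_events,
                    control_events, control_n - control_events)

print(f"phi = {r:.3f}")  # about -0.03; the sign just reflects fewer events in group 1
print(f"heart attacks avoided = {control_events - treated_events}")  # 85
```

A correlation of .03 "explains" less than a tenth of a percent of the variance, yet at this scale it corresponds to dozens of heart attacks prevented.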
I have heard effect sizes around the 0.2 level from research in psychology and sociology dismissed by engineers as "useless". Medical practitioners are not so cavalier. An effect size at the 0.05 level in the field of finance would be a powerful money-making machine.
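A rough way to see why, sketched with Grinold and Kahn's fundamental law of active management (information ratio ≈ skill correlation × √breadth); the breadth figure below is purely an assumption for illustration:

```python
from math import sqrt

# Grinold & Kahn's fundamental law of active management:
#   IR ≈ IC * sqrt(breadth)
# where IC is the correlation between forecasts and realized returns
# and breadth is the number of independent bets per year.
ic = 0.05        # a "tiny" effect size by the standards above
breadth = 1000   # assumed number of independent forecasts per year

information_ratio = ic * sqrt(breadth)
print(f"IR ≈ {information_ratio:.2f}")  # ≈ 1.58
```

An information ratio anywhere near 1.5 would be considered exceptional for an active manager, which is the sense in which a 0.05 correlation, applied often enough, becomes a money-making machine.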