The Ron Paul Upside

January 22, 2012

The Chronicle of Higher Education:

This evening, my family will sit down on the couch together to enjoy the opening episode of America’s favorite spectacle of poor metacognition. Along with millions of others, including some of you, we will marvel at the sight of so many human beings eager to put their deficient cognitive skills on display for the world.

I’m talking, of course, about the season premiere of American Idol, where lousy metacognition will join lousy singing for two cringeworthy hours tonight and another hour tomorrow night, as amateur musicians audition for the opportunity to win fame, fortune, and a recording contract. The opening two episodes of each season have become notorious for featuring the worst singers who auditioned for the show, encouraging viewers to engage in some gentle schadenfreude as Idol participants make fools of themselves on national television.

What makes so many of those atrocious singers laughable to us—excepting the ones who put on deliberately bad performances in order to get on camera—turns out to be a problem that plagues many undergraduates, especially the weakest among them: an inability to judge accurately their own level of skill or knowledge in a specific area.

Poor metacognition means that some terrible yet hopeful singers on American Idol are unable to assess their own weak vocal talents. And it means that some students have a mistaken sense of confidence in the depth of their learning.

Cognitive psychologists use the term metacognition to describe our ability to assess our own skills, knowledge, or learning. That ability affects how well and how long students study—which, of course, affects how much and how deeply they learn. Students with poor metacognition skills will often shorten their study time prematurely, thinking that they have mastered course material that they barely know.

I was introduced to this concept by Stephen Chew, professor and chair of the psychology department at Samford University, who wrote to me in response to my column last month on teaching and human memory. Chew’s credentials in this area immediately caught my attention: In 2011 he was named one of four outstanding professors of the year by the Council for Advancement and Support of Education and the Carnegie Foundation for the Advancement of Teaching. His work focuses specifically on the implications of cognitive research for learning and instruction.

As a part of his work in this area, Chew has produced a series of videos for students on how to study effectively. After watching all five of the videos in his series—which I highly recommend to all administrators and faculty members who work with first-year students—I thought it worthwhile to devote one more column to drawing out a principle from cognitive psychology that could help many of us do our jobs better.

I asked Chew to give readers a basic definition of metacognition, with some illustrations of the concept both from education and everyday life. “Metacognition,” he explained in an e-mail, “is a person’s awareness of his or her own level of knowledge and thought processes. In education, it has to do with students’ awareness of their actual level of understanding of a topic. Weaker students typically have poor metacognition; they are grossly overconfident in their level of understanding. They think they have a good understanding when they really have a shallow, fragmented understanding that is composed of both accurate information and misconceptions.”

That leads weak students, he said, to make poor study decisions: “Once students feel they have mastered material, they will stop studying, usually before they have the depth and breadth of understanding they need to do well. On exams, they will often believe their answers are absolutely correct, only to be shocked when they make a bad grade.”

As for examples outside of education, Chew had no trouble pointing them out in a variety of areas, including reality television shows.

“Poor metacognition is a big part of incompetence,” he explained. “People who are incompetent typically do not realize how incompetent they are. People who aren’t funny at all think they are hilarious. People who are bad drivers think they are especially good. You don’t want to fly on a plane with a pilot who has poor metacognition. A lot of reality shows like American Idol highlight people with poor metacognition for entertainment. Everyone knows people who are seldom in doubt but often wrong.”…

Read it all.

Foreign Policy:

Since the 2008 financial crisis, Wall Street has been the perpetual whipping boy for the ensuing recession that has rocked the global economy. In the United States, Manhattan bankers relied too heavily on subprime mortgages, the story goes, sparking the crisis — in bureaucratic jargon, what is dubbed a “regulatory oversight failure.” In Europe, the debt crisis — which struck again last week when the credit-rating agency Standard & Poor’s stripped France of its AAA rating — is often blamed on the fact that eurozone governments maintained outsized debt-to-GDP ratios, thereby breaking the rules laid down in the Stability and Growth Pact they signed when they joined the currency union.

U.S. President Barack Obama has laid the blame at the feet of Wall Street “fat-cat bankers,” and he finds himself in the company of Federal Reserve Chairman Ben Bernanke. Even Republican presidential hopeful Mitt Romney criticized Wall Street for “leverag[ing] itself far beyond historic and prudent levels” in his 2010 book, blaming its “greed” for contributing to the crisis. The concept of runaway European profligacy, epitomized by 35-hour work weeks and gold-plated pension programs, is also firmly lodged in the popular imagination.

But these explanations for the twin crises in the United States and Europe simply ignore the facts. Subprime mortgages with exotic features accounted for less than 5 percent of new mortgages in the United States from 2000 to 2006. It is therefore highly unlikely that they were solely responsible for setting off the housing boom that ultimately went bust. The explanation offered for the crisis in the eurozone overlooks the fact that Spain and Ireland — two of the weak links in Europe today — were actually paragons of virtue in terms of the Stability Pact. Both countries boasted budget surpluses in the years leading up to the crisis, and both had debt-to-GDP ratios of roughly 30 percent, or only about half the level that was permitted under the Stability Pact.

The immediate cause of the housing bubbles in the United States and the eurozone periphery was not regulatory oversight failure, but the precipitous drop in interest rates in the early 2000s. And the country that bears partial responsibility for depressing interest rates is a traditional punching bag in the American political arena, one that has somehow avoided most of the blame in this round: China. The ascendance of the world’s most populous country in the global economy not only changed the terms of trade, but it also had a considerable impact on the world’s capital markets.

The chain of events that led to the current economic breakdown began in 2000, when the Federal Reserve began to lower the Fed funds rate, its main policy lever, to stave off a recession following the bursting of the dot-com bubble. The Fed slashed the rate from 6.5 percent in late 2000 to 1.75 percent in December 2001 and then down to 1 percent in June 2003. It then kept the rate at 1 percent for more than a year, even though inflation expectations were well above the Fed’s implicit inflation target and the unemployment rate was down to nearly 5 percent, which is considered the natural rate of unemployment. All the while, the Federal Reserve dismissed warnings about a nationwide housing bubble, with then Federal Reserve Chairman Alan Greenspan even denying that it was possible to have such a thing.

The low interest rates initially sparked the refinancing boom — or as commentators liked to say, Americans used their houses as ATMs. Between the first quarter of 2003 and the second quarter of 2004, the time when the Federal Reserve held its main policy rate steady at 1 percent, two-thirds of mortgage originations were for home refinance. Americans got themselves indebted up to their eyeballs and went on a prolific spending binge with their newly acquired cash. Spending out of home equity extraction amounted to $750 billion, or more than 4 percent of GDP, in 2005 alone.

Fed policymakers generally looked favorably upon remortgaging as a source of personal consumption expenditure. In his now infamous 2005 Sandridge lecture, Bernanke, then a Fed governor, boasted of the “depth and sophistication of the country’s financial markets, which … allowed households easy access to housing wealth.”…

Read it all.

Defining Ideas:

In “Obamacare vs. The Commerce Clause,” Richard Epstein provides a devastating critique of Supreme Court commerce clause case law since the New Deal. Because it is “an indefensible line of cases,” Epstein argues “[t]he United States Supreme Court should confess error and acknowledge that its past decisions are bad both as a matter of constitutional history and constitutional theory.”

Professor Epstein is right. Despite recurrent claims by the Supreme Court that there are, in fact, limits on Congress’ commerce clause authority, the case law described by Epstein demonstrates the opposite. If the federal system conceived by the framers of the United States Constitution is to survive in anything more than name, the Supreme Court must push the restart button.

But Epstein does not think the Court will admit to three-quarters of a century of intellectual confusion, so he urges a far more modest result in the Supreme Court’s review of the constitutionality of the Patient Protection and Affordable Care Act. The Court should accept, says Epstein, “the sensible claim that commerce does not apply to transactions that people never entered into.” The Court would thus draw a line in the commerce clause sands by acknowledging the “indefensible pedigree” of Wickard v. Filburn, but would not undertake to correct for past errors.

Wickard is the New Deal-era case in which the Court upheld federal restriction of the acreage of wheat a farmer could grow, even when all the forbidden wheat was consumed on the farm. It marked the demise of the commerce clause as the enumeration of a limited federal power, and the emergence of that clause as a source of what is effectively a Congressional police power. Absent Epstein’s suggested line in the sand, Wickard requires only that Congress find that the aggregate economic impact of individual decisions to not purchase health insurance has a substantial effect on the interstate health insurance market.

But Epstein is not optimistic, even with these limited ambitions. In a prediction based on judicial politics, as opposed to the substantive issue, Epstein reluctantly concludes that “the bet has to be that the mandate will be affirmed.” His pessimism about the prospects for a major rethinking of commerce clause doctrine is rooted in the fact that even the conservatives on the Court have been cautious in suggesting limits to Congress’ power. Restoration of the federal system contemplated by the Framers will be accomplished only by overruling Wickard, and Epstein believes that Justice Thomas is the sole member of the Court willing to do that.

Though the smart money will be with Epstein’s forecast of more doctrinal incrementalism, if not with his wager on the precise direction it will take, we should demand and expect better from our nation’s highest court. As recently as its last term in Bond v. United States, the Court suggested a foundation for what might be called, in today’s parlance, a “reset” of commerce clause doctrine in particular and federalism doctrine in general.

The structure of federalism protects individual liberty from government excess.

The issue in Bond was whether an individual has standing to challenge the validity of a federal law on the ground that Congress did not have authority to enact the law and was therefore in violation of the 10th Amendment, which states that “[t]he powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” In response to the government’s argument that only a state government can challenge the constitutionality of Congressional acts alleged to infringe on the powers of the states, Justice Kennedy, writing for a unanimous court, held that the individual claimant does have standing because “[w]hen government acts in excess of its lawful powers, . . . liberty is at stake.”

This was not an off-the-cuff throwaway line that might have been overlooked by the eight concurring justices. Kennedy’s argument that the structure of federalism, like the separation of powers, protects individual liberty from the excesses of government authority constitutes roughly one third of the opinion and is central to the holding that the plaintiff has standing.

Quoting from the Court’s 1999 opinion in Alden v. Maine, Kennedy observed that the “federal system rests on what might at first seem a counterintuitive insight, that ‘freedom is enhanced by the creation of two governments, not one.’”

“The Framers,” wrote Kennedy, “concluded that allocation of powers between the National Government and the States enhances freedom, first by protecting the integrity of the governments themselves, and second by protecting the people, from whom all governmental powers are derived.” No less a Framer than James Madison made precisely this argument in Federalist No. 51: “In the compound republic of America, the power surrendered by the people is first divided between two distinct governments, and then the portion allotted to each subdivided among distinct and separate departments. Hence a double security arises to the rights of the people.”

Although most federalism cases focus exclusively on the distribution of power between the federal and state governments, Kennedy underscores that “[b]y denying any one government complete jurisdiction over all the concerns of public life, federalism protects the liberty of the individual from arbitrary power.” Quoting from the Court’s 1992 opinion in New York v. United States, he adds that “[s]tate sovereignty is not just an end in itself: ‘Rather, federalism secures to citizens the liberties that derive from the diffusion of sovereign power.’” In a concurring opinion, quoting from the 1928 decision in Nigro v. United States, Justice Ginsburg is even more adamant about the individual right to be free of the force of unconstitutional laws: “In short, a law ‘beyond the power of Congress,’ for any reason, is ‘no law at all.’”

Every justice on the current Court approved Kennedy’s opinion in Bond, so they are on the record agreeing with the argument that federalism protects liberty. And they will be reminded of that point as they read the 11th Circuit’s decision in Susan Seven-Sky v. Holder where Judges Dubina and Hull, also quoting from New York v. United States, write: “The Constitution does not protect the sovereignty of States for the benefit of the States or state governments as abstract political entities . . . . To the contrary, the Constitution divides authority between federal and state governments for the protection of individuals.”…

Read it all.

Great White Newt

January 22, 2012

This image has been posted with express written permission. This cartoon was originally published at Town Hall.

Romney’s Money Troubles

January 22, 2012

Via About
