'Faking good' is the practice of showing yourself in a better light without much substance to back it up. In academic terms, faking good is looking smart, intelligent and talented with apparently effortless ease. It is the comfy chino to the hardwearing tight jeans; the flip-flop to the heavy mountain boot; the microwave dinner to the slow cooking of real food. So why do people fake good? It seems to be rooted in self-theories. In my last blog I mentioned Carol Dweck, Stanford Professor of Social Psychology. Her research on personality, motivation and achievement shows that society divides into two kinds of people: those who believe in a fixed theory of intelligence, a given entity, like a bunch of bananas or a can of baked beans; and those who hold that intelligence is malleable and grows incrementally through learning. She links these two predispositions to the idea of mastery learning, the kind of bulldog tenacity you see in successful individuals. The hallmarks of those with a mastery orientation are that a) they love learning; b) they seek challenges; c) they value effort; and d) they persist in the face of obstacles.
Building a "hardy can-do attitude"
How does an assessment system foster a hardy can-do attitude? Dweck argues that the entity theory of fixed intelligence leads to a preoccupation with looking smart, risk aversion, fear of failure, and anxiety about revealing the fragile foundations of one's cleverness. Entity theory is a performance orientation: play it safe, avoid mistakes, tune in to the cues. In the event of failure, those who hold a fixed view of intelligence have no bulwark, so they fall into helplessness and blame. Watch those NSS scores plummet... In contrast, believers in the incremental theory of intelligence seek out challenges, relish obstacles, and remain focused on achieving mastery. In the words of Dweck, "they do not see failure as an indictment of themselves". Our assessment systems need to teach students to value learning over looking good, and to see rewards in effort, perseverance and tenacity.
From looking good to learning
I suspect giving undergraduate students an average of 36 summative assessment events with very few formative assessment tasks (as TESTA has shown) is not the way forward. Summative assessment is about measuring ability, and it plays to labels and categories of ability: "I'm a 2:2. I'm a 2:1". Dweck observes that, in the real world, learning and performance goals are often in conflict, and neither is intrinsically wrong. The question the conflict poses is: which is more important? For students, the answer is a no-brainer. Why would you waste time on a task that doesn't count when it conflicts with one that does? How many times have we heard students in TESTA focus groups say that they ignore the reading, seminar preparation or trial-run tasks because they need to focus on the assessment to get a good grade? The paradox is that often (but not always) doing the learning tasks leads to better performance. If we want to build hardy can-doers who are intent on learning and growing, then our curriculum design and assessment need to be smart enough to weave the two goals together. Linking readings, seminars and formative tasks to the summative is the sine qua non.