How Do We Decide Whether or Not to Help Others?
Given that this time of the year is a festive season for many people all over the world, today’s post will be about the decision to help others. Prosocial acts—which social scientists define as acts that are costly to ourselves but benefit others—are fascinating for many reasons.
For instance, most, if not all, other species on Earth cooperate almost exclusively in small groups of kin—that is, individuals who share many genes with one another. Humans, however, often act altruistically towards unrelated strangers (e.g., by donating to charity). Thus, the most commonly invoked evolutionary explanation for fitness-reducing acts in animals—kin selection, the idea that helping relatives is still a net-positive way of spreading one’s own genes—cannot explain this form of prosocial behavior. Additionally, altruism is a complex social phenomenon, one that is arguably crucial for people, groups, and societies to co-exist, bond, and thrive.
As a consequence, this behavior has received a lot of attention from psychologists and economists. So let’s review some of this research and see whether we can draw some (as always, cautious) conclusions about how to get ourselves to help others more often, and how to make it more likely that those around us behave prosocially.
Actually, before we talk about helping others, let’s talk about selfishness. You may ask yourself, “Are people really that prosocial?” while thinking about the jerk who stole your parking spot last weekend, or about certain extremely rich individuals who put their own interests ahead of basically anyone else’s (I’m sure plenty of examples will come to mind, so I won’t provide any). It is true that, generally, people value their own interests over the interests of others.
This has been observed in the context of financial decisions, where one player, the “dictator,” can freely distribute a certain amount of money between himself and a second player (Engel, 2010). The second player cannot influence this in any way. In this context, people usually split the money unfairly, to their own advantage. Similarly, in a learning task involving uncertainty, people figure out how to maximize monetary rewards faster when the money goes to themselves than when it goes to somebody else (Lockwood et al., 2016).
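To make the setup concrete, here is a minimal sketch of the dictator game’s payoff structure in Python; the endowment and split are illustrative values of my own choosing, not figures from Engel (2010):

```python
# A minimal sketch of the dictator game's payoff structure.
# The endowment and split are illustrative, not data from Engel (2010).

def dictator_game(endowment: float, amount_given: float) -> tuple[float, float]:
    """Return (dictator_payoff, recipient_payoff) for a chosen split."""
    assert 0 <= amount_given <= endowment, "cannot give more than the endowment"
    # The recipient has no move: whatever the dictator decides, stands.
    return endowment - amount_given, amount_given

# The typical empirical pattern: the split favors the dictator.
print(dictator_game(endowment=10.0, amount_given=2.0))  # -> (8.0, 2.0)
```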
Additionally, it has been shown that people are less willing to invest the effort to benefit others than to benefit themselves (Lockwood et al., 2017). And even when the participants chose to help others, they ended up investing less effort than when working for themselves.
An interesting exception to this general preference for one’s own interests comes from a study in which participants could earn money by accepting tolerable levels of pain via electric shocks (Crockett et al., 2014). They would always earn the money, but sometimes they themselves received the shocks, while at other times another person did. In this study, people acted altruistically in the sense that they would rather receive the shocks themselves than have another person get shocked.
Ignoring this last study (to which I will return), these results suggest that people are rather selfish. However, these studies share an important design feature: They did not involve any kind of interaction between people! As we will see next, this can make all the difference.
An experiment similar to the dictator game is the “ultimatum game” (Fehr & Fischbacher, 2003). Here, one player again receives a certain amount of money and can freely distribute it between himself and a second player. This time, however, the second player can accept or reject the offer; if the offer is rejected, neither player gets anything.
If the second player were selfish, he or she would accept any amount offered (because even a tiny amount is better than the nothing that rejection yields). However, this is not what happens! People regularly reject low offers, and offers are much higher than in the dictator game, too (because the first player understands that low offers will get rejected).
If a third player is added to the ultimatum game as an observer, people in this role are willing to incur financial costs to punish unfair offers made to someone else. So why are these actions considered altruistic? Because people give up some of their own resources to enforce a social norm, and this enforcement is indeed effective: if the game is repeated with the same or other players, a first player who has just been punished is likely to offer a fairer split the next time around.
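To see why the possibility of rejection changes the proposer’s incentives, here is a rough simulation sketch; the rejection threshold and offer sizes are hypothetical parameters of my own, not estimates from the cited studies:

```python
# A sketch of the ultimatum game: unlike the dictator game, the responder
# can reject the offer, leaving BOTH players with nothing. The rejection
# threshold and offer sizes below are made-up illustrative parameters.

def ultimatum_game(endowment: float, offer: float,
                   rejection_threshold: float) -> tuple[float, float]:
    """Return (proposer_payoff, responder_payoff)."""
    if offer < rejection_threshold:  # responder rejects an "unfair" offer
        return 0.0, 0.0              # both players walk away empty-handed
    return endowment - offer, offer

# A purely selfish responder (threshold 0) accepts any positive offer...
print(ultimatum_game(10.0, 1.0, rejection_threshold=0.0))  # -> (9.0, 1.0)

# ...but real responders often reject low offers, so a proposer who
# anticipates this does better by offering a fairer split.
print(ultimatum_game(10.0, 1.0, rejection_threshold=3.0))  # -> (0.0, 0.0)
print(ultimatum_game(10.0, 4.0, rejection_threshold=3.0))  # -> (6.0, 4.0)
```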
There are too many such experiments to cover them all, so after giving you a taste of what they look like, I will skip to some of the insights we have gained from studying them. Research indicates that the decision of whether or not to help has both a cognitive and an emotional component (Fehr & Rockenbach, 2004). This means that people evaluate the action both in terms of its anticipated costs (e.g., the effort necessary to help) and anticipated benefits (e.g., the extent to which the other person needs the help and/or what the likely future payoff is for oneself), as well as its anticipated emotional consequences (e.g., how good it will make me feel to help, or how bad I will feel if I do not). Thus, we can use either of these pathways to make helping more likely—by making it more worthwhile (e.g., reducing the amount of effort needed and/or increasing the reward) and by making the positive emotion that comes with helping more salient.
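One admittedly oversimplified way to picture this dual evaluation is as a weighted sum of cognitive and emotional value; all of the weights and numbers in this toy sketch are hypothetical illustrations, not parameters from Fehr & Rockenbach (2004):

```python
# A toy model of the help/don't-help decision described above: the choice
# weighs anticipated costs against material and emotional benefits.
# All weights and values are hypothetical illustrations.

def decide_to_help(cost, benefit_to_other, future_payoff,
                   warm_glow, guilt_if_not, emotion_weight=1.0):
    """Return True if the anticipated gains outweigh the anticipated cost."""
    cognitive_value = benefit_to_other + future_payoff - cost
    emotional_value = emotion_weight * (warm_glow + guilt_if_not)
    return cognitive_value + emotional_value > 0

# Lowering the effort (cost) or making the good feeling more salient
# (raising emotion_weight) both tip the decision toward helping.
print(decide_to_help(cost=5, benefit_to_other=2, future_payoff=1,
                     warm_glow=1, guilt_if_not=0))                      # False
print(decide_to_help(cost=5, benefit_to_other=2, future_payoff=1,
                     warm_glow=1, guilt_if_not=0, emotion_weight=3.0))  # True
```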
Furthermore, clever variations of these experiments have uncovered a few conditions that make helping others a lot more likely. Unsurprisingly, people in a group will behave more altruistically if prosocial behavior is rewarded, but also if selfish behavior is punished. Hence, both rewarding those who help (e.g., by praising them) and punishing those who do not (e.g., by scolding them) work to increase prosocial behavior in a group.
In a similar vein, it is important that helping is reciprocated—it has been shown that even in a large group, a very small number of free riders can cause people who initially behaved prosocially to become selfish (Fehr & Rockenbach, 2004). Finally, research shows that people care about their reputation (Fehr & Fischbacher, 2003): people are much more willing to help when they know that others will take notice of their good deeds than when they are anonymous.
If you are now slightly depressed because this paints a very calculated picture of people’s decision of whether or not to help others, let me try to make that feeling go away by making two final points. First of all, even if prosocial behavior were 100 percent cold-hearted calculation (which it likely is not), humans are still the species that engages in it by far the most (towards unrelated strangers). Second, there seem to be people who are much more willing to help others than is common—maybe because for them helping is more rewarding and less costly, or because they have found a way to override the tendency to be selfish. We still have a lot to learn about altruism, and maybe we can all become such people.
To sum up, if we want to promote prosocial behavior (in ourselves and others), we can do several things. We can make sure to reward helping and punish selfishness. We can reciprocate whenever we receive help, and we can let people know that we notice when they are behaving altruistically. We can try to reduce the amount of effort needed to help, or be aware of our effort aversion and try to fight through it once in a while.
And just sometimes, we can accept moral responsibility for those who are less fortunate than we are, which is probably what happened in the study involving the electric shocks. Helping others feels good, and we should do it as often as we can.