Behavioural Finance: A guide for listed equities teams
Ebook · 332 pages · 4 hours

About this ebook

Many books about behavioural finance relate to relatively naive, individual investors. But professional investors are different. They have usually built their expertise on years of study and experience. And their decisions often result from following thorough and detailed investment processes. This book is about the aspe…

Language: English
Release date: Jul 27, 2022
ISBN: 9780994610263
Author

Simon Russell

Simon is at the cutting edge of topics that have the potential to transform the roles of investment professionals. In the future they will need to rethink how they collaborate, and they will need to become adept at integrating their expert professional judgments with mechanical decision-making tools. In the face of an increasingly 'noisy' and demonstrably uncertain world, they will also need to become better at determining what really matters, and at predicting the future.

Simon is the founder of Behavioural Finance Australia (BFA). At BFA he provides specialist behavioural finance training and consulting to investment managers, major super funds, financial advisers, and other investment professionals. Simon brings a unique combination of finance and psychology to his clients. He is able to engage teams on the psychological considerations related to sometimes technical financial or investment issues, including making a financial forecast, building a business case, creating a valuation model, negotiating a transaction, or assessing a new project's IRR.

Simon is the author of three other books on behavioural finance. His first book, 'Applying Behavioural Finance in Australia', details strategies to identify and overcome various investment decision-making biases. The book broaches some of the topics that are discussed in the current book, but its focus is broader than listed equities teams; it provides strategies that are also relevant for major super funds, asset consultants and family offices.


    Book preview

    Behavioural Finance - Simon Russell

    SELF-MASTERY

    FOCUS ON WHAT’S MOST IMPORTANT

    One of the problems investment professionals face is how to cope with the seemingly insurmountable quantity of information that is now available. Sifting through an unmanageably large quantum of information for valuable nuggets is not merely a matter of inconvenience and frustration. It does not merely create a time-management problem for investment professionals; it creates a significant decision-making problem too. Several of them, in fact.

    This chapter discusses the decision-making research that is relevant for investment professionals as they process large amounts of information. This research demonstrates how, and in which circumstances, information overload can make it difficult for professional investors to focus on what is most important. As discussed in this chapter, the problems associated with assimilating large amounts of information can be exacerbated by the way professional investors think about and respond to complexity and uncertainty. For example, there is a common misconception that having more information is, at worst, neutral to the quality of investment decisions. Of course, having more information can sometimes be beneficial and can significantly improve decisions; that is not in dispute. But can it sometimes be harmful?

    Some information is worse than useless

    The assumption that having more information is at worst neutral is often implicit in how professional investors think about the value of information. Underpinning this assumption is the idea that if they receive information that is not useful then an astute investment professional can simply ignore it. In this scenario, they have merely wasted a little of their time. However, decision-making research paints a different, richer picture of how information can impact a professional investor's decision-making.

    You can try a simple exercise for yourself that demonstrates some of the problems associated with receiving useless information. Search on-line for ‘the Stroop Test’ or ‘the Stroop Effect’. You might have seen it before. When you do the test you’re presented with a series of words written in different colours; your job is to say the colour of the text the words are written in. This sounds simple enough, but the trick is that the words spell colours that often differ from the colour of the text that you’re trying to say. So, for example, the word ‘green’ might be written in red ink (or using red pixels). To successfully complete the task, you need to ignore ‘green’ and say ‘red’. It’s easier said than done. When I give this exercise to participants in my workshops they tend to either progress slowly, or quickly but with errors. Theoretically, smart investment professionals shouldn’t have a problem with such a simple task.
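
    If you would rather try a version without leaving your desk, the sketch below is my own illustration of the exercise, not anything from the book: a minimal terminal Stroop test, assuming a terminal that renders ANSI colour codes. The word list, trial count and scoring are arbitrary.

        # A minimal Stroop-style exercise for an ANSI-capable terminal.
        # Illustrative only: word list, trial count and scoring are arbitrary.
        import random
        import time

        ANSI = {'red': '\033[31m', 'green': '\033[32m', 'blue': '\033[34m'}
        RESET = '\033[0m'
        colours = list(ANSI)

        start, errors = time.time(), 0
        for _ in range(10):
            word = random.choice(colours)   # what the text spells
            ink = random.choice(colours)    # the colour it is printed in
            answer = input(f"{ANSI[ink]}{word.upper()}{RESET}  ink colour? ")
            if answer.strip().lower() != ink:
                errors += 1
        print(f"{errors} errors in {time.time() - start:.1f} seconds")

    Most people are slower, or less accurate, on the trials where the word and the ink disagree, which is exactly the distraction the exercise is designed to expose.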

    The reason I give investment professionals this exercise is to open the door to a conversation around the costs and benefits of different types of information, and to the psychological mechanisms involved in how that information is incorporated into decisions. The exercise demonstrates that having additional information (in this case in the form of words) can cause problems. The useless information contained in those words is not easily ignored. It does not have a neutral effect on decision-making; that information is worse than useless. If the words written on a page were replaced by coloured splodges of ink then the task would be much easier. Those coloured splodges would contain less information; they would remove the distraction caused by the text. In this case at least, having less information would be better.

    The exercise also demonstrates a number of other points that will be important when we consider the solutions to these problems. Firstly, the impact of additional information is personal; it will impact different people differently. For example, if you didn’t speak English then the word ‘green’ would not create the same distraction as it would for a fluent English-speaker. Or if the text was written in Mandarin then those who spoke only English would also be fine. We should not anticipate that everyone will be impacted by the same information in the same way.

    Secondly, the impact of information can vary over time. In this example, participants often speed up and improve their accuracy throughout the task as they discover strategies (such as blurring their eyes) that help them to ignore the distractions. There are differences between those who have practised and those who have not, but also differences for each individual over time.

    And finally, this exercise demonstrates that subconscious mental processes can be important in understanding the impacts of information on decision-making. In this case, participants were not asked to read the text. In fact, I explicitly tell them to ignore the text and to focus on the colour of the pixels. Despite this, they can’t help but read the words on the screen; they do so automatically and without conscious control. Therefore, when thinking about the solutions to the problems caused by information overload we need to factor in the role of subconscious and automated mental processes.

    It’s important to work out what’s most important

    Consider this scenario: you’ve been told by your (entirely unreasonable) boss that there’s a company they would like you to decide whether to invest in. At this stage you know absolutely nothing about it; not its name, nothing. The bad news is that your boss requires you to make your investment decision based on only three discrete pieces of information about that company. The good news is that you get to choose those three pieces of information. What would you ask for?

    Before you decide, your boss quickly clarifies that you can't simply ask for the company's name (in order to google it, visit its website, or read its financial accounts). But you could ask for some information from those accounts, such as the last 3 years of reported profits. Or you could ask for different ratios calculated on the basis of those accounts, such as its debt-to-equity ratio, or its return on capital. You could also ask for valuation information, such as its price-to-earnings ratio, or its EBITDA-to-enterprise-value ratio. You could also ask how much the share price has risen or fallen over the past 12 months, or for its dividend yield. When I ask this question of teams of professional investors, these are the types of information they normally request.

    There are several reasons for undertaking this (admittedly unrealistic) thought experiment. Firstly, it is to highlight how difficult it can be to identify what is most important. Many of the investment professionals who attempt this exercise find it challenging to identify their preferred three pieces of information. Explicitly identifying what is most important is not something that they are often asked to do; they are not practised at it. In part, the scenario is deliberately contrived to make information scarce in order to force participants to recognise this difficulty. As a result, it is intended to help move some of them from the learning stage that is unflatteringly described as 'unconscious incompetence' (in which the learner doesn't know how to do something and doesn't necessarily recognise their skill deficit) towards the only marginally more complimentary stage of 'conscious incompetence' (at which stage they at least recognise the problem). This is then a step towards 'conscious competence' (ie being able to address the issue with conscious effort). As we shall see, being good at working out what is most important is critical to dealing with information overload.

    Secondly, one of the things that is often revealed in the process of investment teams completing this exercise is that there is not a consensus among team members about the information that they would choose. For example, after someone in the team selects three plausible-sounding numbers or ratios (like those described above), often someone else makes a comment along the lines of ‘but we don’t even know what industry the company is in! Surely that should be one of our three things?’ The fact that there is significant variability between different investment managers is to be expected (given their different processes, styles and expertise). But you would hope for a greater alignment within the same team.

    Assuming a team can settle on the information that they consider most important, what does decision-making research suggest is the likely impact of giving them that information? Research from across a number of different decision-making domains demonstrates that, perhaps unsurprisingly, when you give experts the pieces of information that they tell you are most critical, their decisions tend to improve a lot. And as you give them the information that they say is less important, their decisions also tend to improve, but to a lesser extent. Put differently, there are diminishing marginal returns from receiving each piece of less important information.

    While these broad findings might be unremarkable, a couple of conclusions from this body of research tend to be more surprising. One is how rapidly the marginal returns to additional information often diminish. Beyond the first few things that are most important, the curve that plots decision-making accuracy often flattens dramatically. This means that the tenth piece of information is much less important than the first. The Pareto principle (also known as 'the 80-20 rule') often applies. As much as thinking about everything might seem to help, when it comes to complex multivariate decision-making, identifying the big things and getting them right is often the main game.
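
    To see how quickly the curve can flatten, here is a rough simulation of diminishing marginal returns. It is my own sketch, not one of the studies referred to above: cue importances are assumed to decay geometrically, and 'accuracy' is measured as the correlation between a prediction built from the top few cues and the actual outcome.

        # Diminishing marginal returns to information, under assumed weights.
        # Illustrative only; the decay rate and noise level are arbitrary.
        import numpy as np

        rng = np.random.default_rng(0)
        n, k = 10_000, 10
        weights = 0.8 ** np.arange(k)   # assumption: importance decays geometrically
        cues = rng.normal(size=(n, k))
        outcome = cues @ weights + rng.normal(scale=1.0, size=n)

        for used in range(1, k + 1):
            pred = cues[:, :used] @ weights[:used]   # decide using only the top cues
            r = np.corrcoef(pred, outcome)[0, 1]
            print(f"top {used:2d} cues: correlation with outcome = {r:.3f}")

    Run it and the correlation climbs steeply over the first few cues, then barely moves: the tenth piece of information adds almost nothing.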

    Another finding from this research that can defy conventional wisdom is that those marginal returns can sometimes turn negative. Receiving some information actually makes experts’ decisions less accurate. Importantly, this isn’t necessarily because the information is factually incorrect, or is ‘fake news’; it can happen even when the information provided is accurate. And it isn’t because the experts in these studies are necessarily wrong about what is important and what isn’t. Something else is going wrong, but what?

    Less important things can weigh more heavily than they should

    Decision-making research provides a few explanations for how less important information can make the decisions made by experts, in a range of fields, less accurate. One is what’s referred to as ‘the dilution effect’. This happens when a piece of information that should have a small weight in our decisions is assigned a larger weight. This is not necessarily a conscious mental process; people don’t typically consciously assign a specific weight to each piece of information they receive and then combine those pieces of information together (as they would if they were the personification of a multiple regression analysis). However, it can happen implicitly as they assimilate different sources of information into a decision. For example, if a professional investor changes their decision a lot after they receive some new information then they have implicitly assigned that new information a large weight. And if that information actually only warrants a small weight then, because decision-weights can only add up to 100%, the weights implicitly assigned to more important information are reduced (or ‘diluted’). Decision-making accuracy can decline as a result.
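
    A toy version of that arithmetic makes the point concrete. In the sketch below (my own illustration, with assumed cue validities), the outcome is driven almost entirely by one diagnostic cue; shifting weight onto a near-useless cue dilutes the diagnostic one and measurably degrades accuracy.

        # The dilution effect in miniature, with assumed cue validities.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        strong = rng.normal(size=n)               # genuinely diagnostic cue
        weak = rng.normal(size=n)                 # essentially uninformative cue
        outcome = strong + rng.normal(scale=0.5, size=n)

        def accuracy(w_strong, w_weak):
            pred = w_strong * strong + w_weak * weak
            return np.corrcoef(pred, outcome)[0, 1]

        print("appropriate weights (0.95/0.05):", round(accuracy(0.95, 0.05), 3))
        print("diluted weights     (0.60/0.40):", round(accuracy(0.60, 0.40), 3))

    Because the weights sum to one, every point of weight implicitly handed to the uninformative cue is taken from the diagnostic one, and accuracy falls accordingly.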

    My favourite example of this type of effect relates back to the moustached CEO discussed in the introduction of this book. A participant in one of my workshops commented (somewhat jokingly I think) that she would never invest in a company in which the CEO had a moustache. I’m not aware of a relationship between CEO facial hair and any financial metrics of interest to professional investors, but on the assumption that there is a relationship my guess is that it is small. In this case, the presence of the CEO’s moustache, by telling us little about the prospects of the company, risks diluting the impact of the things that have greater diagnostic value.

    You might be thinking at this point, 'are you suggesting that I should simply ignore most things in order to focus on only a few? Because that doesn't seem right.' This is a common reaction, so it's worth clarifying. When reflecting on the discussion above, keep in mind that it was premised on having already established what was most important (for, say, the selection of a company whose shares you wish to buy). Making that assessment is likely to require a thorough understanding of what drives the value of that company. That, in turn, probably requires having understood the industry and its competitive dynamics. What this means is that identifying the handful of most important things might require turning over a lot of rocks (or reading a lot of Bloomberg alerts). Those rocks need to be turned over to see if anything important lurks beneath. The point is not that the rocks should remain unturned and the Bloomberg alerts remain unread. Rather, it's that when it comes to making a decision one needs to be clear about what is most important and to place an appropriately large weight on those (possibly small number of) things.

    The solution to complexity is not more complexity

    Another exercise that I often give participants at my workshops is to present them with a series of squares, some of which are blue and some green. Their job is to accurately predict the colours of as many of the ten squares that would follow this initial sequence as they can. To more clearly define success, I ask them to imagine that they will receive $10 for each square they guess correctly (ie by guessing the right colour in the right spot). To make the task more realistic, I tell them that the sequence of coloured squares that I have given them is not entirely random, but neither is it entirely systematic. In doing this I try to establish the task as being akin to many investment decisions; somewhat predictable, but far from certain.

    What I don’t reveal to participants until later is that the coloured squares were generated by my daughter rolling a die. Whenever she rolled a one, two, three or four I translated it into a blue square. Fives and sixes became green squares. In effect, the unpredictability of the sequence was determined by the chaotic bounce of the die, whereas the predictable element was determined by the fact that blue squares were more likely to appear than were green ones. This was reflected in the sequence of ten squares participants saw, of which six were blue and only four were green.

    The purpose of this exercise is, in part, to demonstrate another way that having more information can reduce decision-making accuracy, in this case due to 'over-fitting'. Overfitting happens when a relationship between two or more variables is described in a way that creates an apparently close 'fit' (or match) with the evidence, but where the close match is largely a mirage. This can happen where there is something else going on in the relationship that hasn't been accounted for, some unseen causation, or some inherent randomness. A company's good sales result might appear to be due to the CEO's inspirational speech, for example, but the company might have quite unrelatedly landed a large client it had been working on for months. Why this matters is that these unaccounted-for elements are likely to be different going forward. The CEO's inspiration might remain, but unless another large client is found, the sales result falls away nonetheless. When there is overfitting the historical relationship fails when it really counts: in the future periods that the investment professional seeks to predict.

    Overfitting is often referred to in the context of undertaking a statistical analysis, such as a multiple regression. In that context, if we only have a few data points and a few variables then we can only identify the simplest of relationships between them. These general trends are likely to be most robust against the threat of overfitting. In contrast, if we have a lot of data and many variables then we can identify complex relationships, whether they are real or imagined. In this case, overfitting becomes more of a threat. The same concept applies to qualitative judgments. However, rather than creating a multivariate equation, with qualitative judgments a causal narrative is created, such as the one about the relationship between the CEO's speech and the uplift in sales. This is not to suggest that all causal narratives are necessarily wrong, of course. But it does serve as a warning to investment professionals that while adding more elements to the causal story might make it appear more convincing, as a result of overfitting, it might also make it less accurate. As Kahneman and his co-authors write in their book, 'Noise', 'complex rules will often give you only the illusion of validity and in fact harm the quality of your judgments. Some subtleties are valid, but many are not.' ¹
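
    The statistical version of the point can be shown in a few lines. The sketch below is a standard textbook demonstration rather than anything from the book: the true relationship is simple and linear, the sample is small and noisy, and a more complex model fits the sample better while predicting a fresh sample worse.

        # Overfitting in miniature: a complex model fits the training sample
        # closely but generalises worse. Setup and noise level are arbitrary.
        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(0, 1, 12)
        true = 2 * x                              # simple underlying relationship
        train = true + rng.normal(scale=0.4, size=x.size)
        test = true + rng.normal(scale=0.4, size=x.size)

        for degree in (1, 9):
            coeffs = np.polyfit(x, train, degree)
            fit = np.polyval(coeffs, x)
            print(f"degree {degree}: train MSE {np.mean((fit - train) ** 2):.3f}, "
                  f"test MSE {np.mean((fit - test) ** 2):.3f}")

    The degree-9 polynomial wins on the sample it was fitted to and tends to lose on the sample it wasn't: the added complexity buys the illusion of validity, not accuracy.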

    Let’s return to those coloured squares and see how overfitting applies. The sequence of coloured squares that I show participants is as follows (with B and G representing blue and green squares, respectively):

    B G G B B B B G G B

    To be fair, participants have little to go on in predicting the colour of the squares that follow this initial sequence. These are a few of their most common predictions:

    B G G B B B B G G B

    B B B G G B B B B G

    G G B B B B G G B B

    Each of these predictions perfectly fits with participants' beliefs about the causal logic that underpins the original sequence of coloured squares. You can probably see how each prediction was derived; the first prediction is a copy-and-paste of the original sequence, for example. As logical as these predictions appear, unfortunately each fails to account for the fact that there is a random element to the original sequence. As a consequence, each of these predictions is overfit. Because of the randomness, the different patterns that participants discerned from the original sequence were unlikely to persist.
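
    Although the preview ends there, the arithmetic behind the point is easy to check. Under the generating process described above, each square is blue with probability 2/3 regardless of what came before, so a prediction's expected payout depends only on how many blue and green squares it contains. The sketch below is my own illustration:

        # Expected payouts under the stated process: P(blue) = 2/3,
        # P(green) = 1/3, and $10 per correctly predicted square.
        P = {'B': 2 / 3, 'G': 1 / 3}

        def expected_payout(prediction):
            return 10 * sum(P[square] for square in prediction)

        print(expected_payout('BGGBBBBGGB'))   # a pattern-matched guess: ~$53.33
        print(expected_payout('BBBBBBBBBB'))   # simply predicting blue:  ~$66.67

    The unglamorous rule of predicting blue every time beats every pattern-matched alternative, which is the sense in which the solution to complexity is not more complexity.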
