The Bellmont Systematic Value portfolio puts into practice Warren Buffett’s famous maxim: “It’s far better to buy a wonderful company at a fair price than a fair company at a wonderful price.”
Value investors at heart, we use a rules-based approach to select and invest in Australia’s highest-quality stocks trading at reasonable prices. Our objective is to offer a managed share portfolio designed to generate higher total returns than simply investing in the broad market*.
At a high level, we have designed and built a rules-based stock selection process that frees us from the cognitive biases we all inherit as humans. Free from these biases, which impede all investors, the model examines years’ worth of institutional-grade financial statement data, filtering out the bad (and potentially bad) and selecting only those stocks that meet our exact criteria.
The model first filters out stocks whose financial statement data shows evidence or characteristics of potential issues. This step alone helps us avoid stocks that could be at risk of bankruptcy. Once these stocks have been removed, we identify characteristics that indicate high-quality businesses, which tend to give them an advantage over their peers. Finally, we select only those stocks that we can purchase at reasonable prices, to ensure the best possible likelihood of attractive long-term investment returns.
Enhancing our margin of safety by avoiding stocks at risk of bankruptcy
Identifying high quality companies
Identifying companies that trade at reasonable prices
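The three stages above can be sketched in code. All field names and thresholds below (Altman Z-score, return on equity, earnings yield) are illustrative assumptions for the sketch, not Bellmont’s actual selection criteria.

```python
def screen_universe(stocks):
    """Illustrative three-stage screen; thresholds are hypothetical."""
    # Stage 1: remove stocks whose financials suggest distress risk
    survivors = [s for s in stocks if s["altman_z"] > 1.8]

    # Stage 2: of the survivors, keep only high-quality businesses
    quality = [s for s in survivors if s["return_on_equity"] > 0.12]

    # Stage 3: of those, select only stocks trading at reasonable prices
    return [s for s in quality if s["earnings_yield"] > 0.06]


portfolio = screen_universe([
    {"name": "A", "altman_z": 3.1, "return_on_equity": 0.18, "earnings_yield": 0.07},
    {"name": "B", "altman_z": 1.2, "return_on_equity": 0.25, "earnings_yield": 0.10},
    {"name": "C", "altman_z": 2.5, "return_on_equity": 0.05, "earnings_yield": 0.08},
])
print([s["name"] for s in portfolio])  # only "A" passes all three stages
```

The ordering matters: distressed names are discarded before quality or price is even considered, which is what gives the process its margin of safety.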
The Bellmont Systematic Value portfolio provides our investors with hands-off exposure to the Australian sharemarket.
The Systematic Value portfolio is unapologetically mechanical. By design, it is free from the behavioural biases that influence the decisions of most fund managers. Buy and sell decisions are instead driven by the rules we have intentionally built into the model. This systematic approach also allows us to better understand the performance and risk of our strategy through simulation and back-testing, providing investors with more information up front.
Investors are right, however, to question the reliability of back-test results, particularly those showing excess returns over the benchmark. Critics often point to data mining (the practice of examining large volumes of data in order to find an optimal investment method). To illustrate the absurdity of this practice, David J. Leinweber (a physics and computer science graduate from MIT) famously gathered hundreds of data sets and found that the S&P 500 could be “predicted” by butter production in Bangladesh! If we examine enough data sets, we are bound to find meaningless coincidental patterns.
Rather than blindly examining any data set, we limit our analysis to factors that have been identified in academic research published in top-rated, peer-reviewed journals, and that have continued to exhibit excess returns after their discovery and publication. By sticking to this time-tested approach, examining only those metrics that have been proven both through interrogation by peers and through the passage of time, we can be confident that we have a sound investment methodology rather than a contrived data correlation.
Beyond data mining, sceptics often question the validity of back-test results, with most arguments falling into two camps. Below we outline these potential pitfalls and how we treat them to ensure we can have confidence in our model.
The first pitfall when constructing and back-testing a systematic portfolio is known as survivorship bias. This bias occurs when the database the model selects stocks from includes only stocks that are still in business today, i.e. it excludes companies that have subsequently gone out of business. It is essential that any back-testing and simulation draw from a data source that includes both delisted companies (e.g. through bankruptcy or takeover) and those still in business.
Without this feature, back-test results do not accurately represent reality, because they only include investments in survivors: the model only selects from companies still in business. Such results will generally be overstated and therefore cannot be relied upon to draw conclusions about the investment process and its performance.
At Bellmont we spend tens of thousands of dollars per year on institutional-grade data that is free from survivorship bias, to ensure best practice and to prevent this bias being introduced through errors in our model.
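The point-in-time universe described above can be sketched as follows. The tickers and dates are made up for illustration; the key idea is that a delisted stock remains selectable for every date on which it was still trading.

```python
from datetime import date

# Each record: (ticker, listed_from, delisted_on or None if still trading).
# Hypothetical example universe.
universe = [
    ("ALPHA", date(2000, 1, 1), None),               # still trading today
    ("BRAVO", date(2000, 1, 1), date(2010, 6, 30)),  # went bankrupt in 2010
    ("CHARLIE", date(2005, 3, 1), date(2015, 2, 1)), # taken over in 2015
]

def tradeable_on(as_of):
    """Stocks the back-test may select on a given date.

    A survivorship-biased database would return only ALPHA for every
    date; a point-in-time database correctly includes BRAVO and CHARLIE
    while they were still listed.
    """
    return [t for t, listed, delisted in universe
            if listed <= as_of and (delisted is None or as_of < delisted)]

print(tradeable_on(date(2008, 1, 1)))  # ['ALPHA', 'BRAVO', 'CHARLIE']
print(tradeable_on(date(2020, 1, 1)))  # ['ALPHA']
```

A back-test that only ever sees the 2020 universe would, in effect, have known in 2008 which companies were going to survive.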
The second pitfall when constructing and back-testing systematic portfolios is known as look-ahead bias. This bias occurs when the back-test uses data that was not yet available at the point in time being simulated. Look-ahead bias generally arises because it takes time to collect and enter data into a database after it has been released to the market. For example, if a company releases its annual results on the 15th of August, those results are not necessarily in the database on that day; if the model assumes the data is available immediately, its back-test results will be positively skewed and the simulation cannot be trusted.
Another source of look-ahead bias stems from instances where ASX companies issue corrections to their financial reports, i.e. their financial data is restated. Since it is unknown at the time whether a company will later restate its data, the quantitative researcher must always ensure non-restated data is used for any calculations.
The correct way to handle look-ahead bias is to lag the data conservatively in any back-test simulation, to account for the delay between release and entry into the database, and to ensure the model only uses non-restated financial data. Bellmont follows best practice with regard to look-ahead bias.
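The conservative lag can be sketched as a simple availability check. The 45-day lag below is an illustrative assumption, not Bellmont’s actual figure.

```python
from datetime import date, timedelta

# Hypothetical conservative lag between a filing's release date and the
# date the back-test is allowed to use it.
REPORTING_LAG = timedelta(days=45)

def available_on(release_date, as_of):
    """True if a filing released on release_date may be used on as_of."""
    return as_of >= release_date + REPORTING_LAG

annual_report = date(2023, 8, 15)  # company reports on 15 August
print(available_on(annual_report, date(2023, 8, 16)))  # False: too soon
print(available_on(annual_report, date(2023, 10, 1)))  # True: lag has elapsed
```

The trade-off is deliberate: a lag that is too short reintroduces look-ahead bias, while an overly long lag merely makes the simulated results more conservative, which is the safer error.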