Planned Obsolescence

It began as all well-meaning ideas do -- as an effort to make life a little bit easier.

This year I began building my own stock-picking artificial intelligence (AI) program. I created it to help me invest my savings and gave it a name: AlphaBean.

But as AlphaBean became smarter and smarter, I eventually began to wonder: Could it replace me?

After all, I'm a financial journalist. My job involves studying companies and writing about them for the wider public. If AlphaBean could learn to invest, might it, or something like it, steal my job?

Past is prologue

Of course, I'm not the first person to create an investing algorithm. Algorithmic thinking about investing has existed for a long time -- much longer, in fact, than people normally think.

Benjamin Graham, the person who practically invented stock analysis and served as a mentor to Warren Buffett, famously suggested to his readers more than half a century ago that they "limit themselves to issues selling not far above their tangible asset value." In ruling out pricier stocks, Graham was articulating a rule, or heuristic, for winnowing the number of possible answers a human being must consider -- much as programmers often do with AI.

Investing has become much more competitive and financial data more widely available since Graham's day, so you won't find many companies that meet his criterion anymore. Now the work of scanning through the newspaper has been replaced by websites like Yahoo! Finance, MSN, and CNBC, and partially automated with computerized screening tools.

Investors' relentless pursuit of money meant that technical progress didn't stop with the simple algorithms used by Graham and his followers. Aided by computing advancements, people began using trading systems that bought and sold stocks based on other things, too, including technical indicators.

Today, short-term-oriented algorithms direct the majority of stock trades. They aren't rooted in an understanding of businesses or investing for the long term, but in noticing infinitesimal price discrepancies; detecting and sneaking ahead of big orders; riding waves of price momentum; processing economic, financial, and related news reports before anyone else can; or just having superior fiber-optic (or, better yet, microwave) communication access to the market.

But automated trading is beginning to come to individual investors in a much different form. Robo-advisors -- financial advisory firms whose algorithms automatically allocate client funds -- have grown in just the past decade from nothing to more than $200 billion in assets under management. With lower variable costs, they're able to serve smaller accounts and charge lower fees than traditional advisors. This makes robo-advisors a great option for individual investors. From a purely technical perspective, however, the algorithms robo-advisors use aren't much fancier than those of their flesh-and-blood counterparts.

We're just now seeing machine-learning-based investment options becoming widely accessible. The writing's on the wall. Over the past two years, several AI-managed ETFs have come online. Their typical strategy is to scrape information from analyst reports, market news, and social media to execute ephemeral trades.

But I'm a long-term, buy-and-hold stock picker. Of all the available tools -- stock screens that must be told what to screen for, automated asset allocators, and short-term-oriented AI -- none does that. So I set out to make my own.

Time for another montage

Over the past year, I've spent more than 1,000 hours creating AlphaBean. It's been an extremely complex process that's involved late nights, spreadsheets, thousands of tests, exactly 3,599 lines of code (so far), and hundreds of computer crashes.

I've been programming since I was an 11-year-old. This year, I studied AI part-time at Georgetown University as part of my research for this series. But making AlphaBean was different from anything I've ever done before.

Machine learning is not what many imagine it to be -- dozens of programmers and mathematicians writing programs on computers and formulas in notebooks until they crack the code and finally...voila: It's alive!

The reality is that machine learning is an iterative process. Even an AI program's creators don't always know where the adventure will take them.

Here's how my journey began.

I started with a standard set of machine-learning tools from the academic world. I spent weeks with dozens of their algorithms, getting a feel for the different ways in which they can be used alone or in combination with one another to invest.

I thought a lot about what kinds of financial metrics would be a good fit. Programmers typically hire subject-matter experts to frame the problems they face, suggest heuristics, and identify when AI platforms produce silly answers that lack common sense. But AlphaBean was a lean, one-man, up-until-2-a.m., evenings-and-weekends operation. Luckily, I had a sufficient background in investing to serve as my own subject-matter expert, for I had no money with which to hire one.

There's a belief that more information is always better, but that's not always the case. It's often better to be thoughtful about what you're doing than to throw everything against the wall and see what sticks. Once I'd carefully selected metrics, I meticulously typed years' worth of publicly available information into a spreadsheet.

I ran machine-learning tools on the data I'd entered to understand how different learning techniques react to the problem of investing and to identify which algorithms were promising candidates for getting to my desired result: 3-year outperformance.
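
To give a flavor of what that winnowing can look like, here's a minimal sketch of such an algorithm comparison using scikit-learn. The data is synthetic and the candidate models are arbitrary stand-ins, not AlphaBean's actual inputs or shortlist.

```python
# A minimal sketch of comparing candidate learning algorithms,
# assuming scikit-learn and synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in data: 500 "companies," 10 numeric features apiece, and a
# binary label for whether each outperformed (all synthetic).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}

# Score each candidate with 5-fold cross-validation and compare.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```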

To give you a lens into the process, here are two examples of such techniques: Bayesian networks and neural networks. Their names may sound similar, but they aren't at all the same thing.

A Bayes network can organize information into a web of causal and probabilistic relationships between different events (Hey look -- CatfudCorp was really profitable over the past few years, and its stock went up, too! Maybe there's a connection). As a Bayes network encounters new information, it employs statistical calculations to reweight its network of probabilities. (For my coursework, I built a Bayes network from scratch, and I can tell you it's a beautiful experience to watch each part of the network communicate with each other part to reweight all these probabilities -- all on its own.) Finally, you can ask it to spit out predictions for which stocks it believes will be solid performers over the long run.
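
For the curious, here's what a single Bayesian update looks like in miniature, hand-rolled in Python. The probabilities are invented purely for illustration; a real Bayes network chains many of these updates together across its whole web of relationships.

```python
# A toy Bayesian update in the spirit of the CatfudCorp example.
# Every probability here is made up for illustration.

# Prior belief that a given stock outperforms over the long run.
p_outperform = 0.30

# Hypothetical likelihoods: how often strong profitability shows up
# among outperformers vs. everyone else.
p_profitable_if_outperform = 0.70
p_profitable_if_not = 0.40

# New evidence arrives: CatfudCorp was really profitable.
# Bayes' theorem reweights the prior into a posterior.
p_profitable = (p_profitable_if_outperform * p_outperform
                + p_profitable_if_not * (1 - p_outperform))
posterior = p_profitable_if_outperform * p_outperform / p_profitable

print(f"P(outperform) before the evidence: {p_outperform:.2f}")
print(f"P(outperform | profitable):        {posterior:.2f}")  # about 0.43
```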

Recall that a neural network also models a web of interconnected data nodes, but its nodes are organized into layers, and they don't necessarily represent real-world features. Each node contains a series of "weights" that indicate how much influence it has on each node in the next layer. The neural network's learning algorithm laboriously churns through input examples over and over, checking each time to see whether the network produces the correct output for the input. If it doesn't (Huh. When I plug in CatfudCorp's financial information, I thought the stock would have been a loser, but actually it was fantastic), it tweaks the nodes' weights according to cumbersome mathematical formulas and so hopefully "learns" how to produce more accurate outputs next time.
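
Here's that predict-check-tweak loop in miniature, assuming NumPy. The network below learns a toy logic puzzle (XOR) rather than anything financial, but the mechanics -- forward pass, error check, weight nudges -- are the same loop described above.

```python
# A bare-bones neural network: one hidden layer, trained by
# gradient descent on a toy XOR problem (not financial data).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error, then nudge every weight a
    # little in the direction that shrinks it.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0)

print(output.round(2).ravel())  # should land near [0, 1, 1, 0]
```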

The differences may sound academic, but trust me -- they're not. Algorithms are tools, and some tools -- or combinations of tools -- are better suited to some problems than to others. And the neural network took 30 times as long to run.

Trying a different process meant gaining a new insight, which meant more tinkering and starting over. Again and again.

After all, if I was going to put my money behind my AI software's picks, I had better be darn sure the system was well-tested.

AlphaBean is a vastly more sophisticated creature today than it was when I started the project. With each iteration, I encountered new issues that demanded comprehensive problem solving. Over time, I designed and coded various tools that have made AlphaBean into a smarter, more objective student.

And that's the critical thing. The COMPAS recidivism-prediction system shows what an enormous challenge bias and overconfidence can pose for machine learning.

These pitfalls underscore why it's important to view machine learning as an experimental science. As with any test, you're never sure how it's going to turn out until you conduct it. Intuition is not a reliable guide. This meant that the bulk of my 1,000-plus hours was spent not on making the actual AI, but on devising clever ways to test and improve its investing skill. Originally I thought I might collect some encouraging results or stock ideas in just a couple of months. Instead, I've taken the meticulous path, and have now spent upwards of six months handcrafting my own custom tools just for evaluating AlphaBean's performance.
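
To make that concrete, here's a minimal sketch of the kind of check such evaluation tools automate: grading a model only on time periods it never saw during training, so it can't peek at the future. The data and model below are stand-ins, not AlphaBean's.

```python
# A minimal sketch of out-of-sample, walk-forward testing,
# assuming scikit-learn and synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))  # stand-in features, in time order
y = (X[:, 0] + rng.normal(size=300) > 0).astype(int)  # stand-in labels

# Each split trains on earlier periods and tests on later ones only.
scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = RandomForestClassifier(random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

print([round(s, 3) for s in scores])  # one out-of-sample score per split
```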

Throughout the process, I faced many of the same challenges I've written about -- avoiding bias, mitigating overconfidence, interpreting information, and making AI software that can think for itself.

There's no free lunch

Creating AlphaBean would have been much easier if all I wanted it to do was automate my own thinking processes. But because I wanted AlphaBean to improve upon my knowledge, I needed to program it with the ability to discover its own strategies. And I had to begin its training with a clean slate so as not to bias it toward mine.

What's more, I didn't want to assume that I knew how AlphaBean should work better than AlphaBean did. Just as human students have individual learning styles that suit some learning situations better than others, certain machine learners work better than others at mastering certain problems.

That's because -- as I discussed previously -- there's no such thing as the perfect AI program. Researchers have had much more success tailoring individual AI systems to specific problems than building a logic machine capable of general intelligence. Certain algorithms work better than others at certain problems.

This fact was captured in the late 1990s with a mathematical proof similar to the economic adage that you can't get something for nothing: AI's "no free lunch" theorem tells us that AI software attuned to one problem is necessarily worse at solving others.

It may seem that AlphaGo and its DeepMind siblings -- which together have mastered Go, chess, and even data-center energy management -- violate this rule. But as products of deep learning, these systems are best suited for learning subtle patterns from large data sets. They would be terrible, say, at identifying extremely rare diseases or one-of-a-kind cyberattacks -- problems which by definition lack ample data. Instead, rule-based systems with algorithms that follow a series of hard-coded procedures (if this, then do that) would work better at detecting unique threats whose properties can be defined by experts but not by mountains of data.
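
As a caricature of that rule-based approach, here's a tiny hand-written detector. The rules, thresholds, and addresses are all invented; the point is that an expert's knowledge, not training data, does the work.

```python
# A caricature of a rule-based detector: hard-coded expert rules,
# no training data required. All rules here are invented.

def looks_like_attack(event: dict) -> bool:
    # Rule 1: a (hypothetical) expert says this many failed logins
    # is suspicious.
    if event.get("failed_logins", 0) > 100:
        return True
    # Rule 2: traffic to a known-bad address is always flagged.
    # (203.0.113.7 is from a reserved documentation range.)
    if event.get("destination") in {"203.0.113.7"}:
        return True
    return False

print(looks_like_attack({"failed_logins": 250}))  # True
print(looks_like_attack({"failed_logins": 3}))    # False
```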

As for AlphaBean, I'm exploring its styles and quirks. Even though I'm its owner and creator, I won't understand everything about my program until we've gotten to know each other better.

AlphaBean has a mind and personality of its own. That's the point. So I'm letting AlphaBean teach me how it prefers to learn.

The struggle is real

There's more to evaluating a company than running numbers through a computer. Investors face squishier questions that even human intuition struggles to work through.

For example, we all know that company management doesn't always give us the full story, that they sometimes massage earnings, and that it takes critical judgment to see through the smoke and mirrors.

One investor relations officer I cited earlier this year put the quarterly earnings game in stark terms.

Is management trying to hide something? Do the numbers capture reality? How will big-picture social trends affect this business over the long run? What are its competitive advantages? Number-crunching techniques can't easily ask or answer questions like these. To compensate, I'm looking into which kinds of quantitative proxies can help answer qualitative questions. For example, heavy insider ownership might be a good proxy for shareholder-friendliness, because it demonstrates some alignment between management's self-interest and shareholders' interests.
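
As a toy illustration of that proxy idea, the sketch below flags companies by insider ownership using pandas. The tickers, figures, and the 10% threshold are all hypothetical.

```python
# A toy sketch of turning a qualitative question ("Is management
# aligned with shareholders?") into a quantitative proxy.
# Tickers, figures, and the threshold are all hypothetical.
import pandas as pd

companies = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "insider_ownership_pct": [22.0, 1.5, 9.0],  # made-up figures
})

# Proxy: heavy insider ownership hints that management's wealth
# rises and falls alongside shareholders'.
companies["shareholder_aligned"] = companies["insider_ownership_pct"] >= 10.0
print(companies)
```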

Machine-learning techniques for natural-language processing offer some promise of bridging the quantitative-qualitative gap. Over the past decade, news agencies and internet content creators have been using AI to help craft stories. Today, programs can collect figures for reporters, alert them to unusual data, identify viral topics, and even write templated stories.

But AI can't do the reporting itself. It takes dogged research and human judgment to comb through all kinds of qualitative issues like managerial integrity or competitive advantages. AI is less capable than humans at assessing unquantifiable human intentions, motives, and subterfuge. When it comes to understanding people, humans are still top dog.

To handle these problems, I found ways to combine some of my own investing knowledge with AlphaBean's while using careful techniques to avoid biasing it.

Now I'm a believer

Despite all of these challenges and AlphaBean's youth, it's already beginning to change how I look at companies, both as an investor and as a journalist.

Here's just one example. We often treat earnings per share (EPS) growth as one of the most significant measures of business performance. It's touted by management, harped on by analysts, and followed closely by investors and the media. Who doesn't like profit?

But EPS has some flaws. It's easier for management to manipulate than other financial measures. And it's not always the most relevant metric. Academic studies have suggested that the prominence of EPS stems not only from its explanatory power as an accounting measure, but also from its role as a signal to traders and speculators. Momentum traders need a way to coordinate their behavior, and EPS provides that, as I've written before: "[A]n earnings beat means we all buy, and a miss means we sell."

AlphaBean seems to agree that when it comes to long-term investing, EPS shouldn't always be the headline number. Sometimes my program prefers to look at free cash flow, a different metric that's harder for management to manipulate and can do a better job of describing businesses with improving working-capital efficiency. Its choices depend on context. Every company is different.
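
For readers who want the two metrics side by side, here's a quick sketch using made-up figures. Free cash flow is operating cash flow minus capital expenditures; EPS is net income divided by shares outstanding.

```python
# EPS vs. free cash flow, computed from hypothetical figures
# (all dollar amounts in millions).

net_income = 120.0           # hypothetical
diluted_shares = 50.0        # hypothetical, millions of shares
operating_cash_flow = 150.0  # hypothetical
capital_expenditures = 40.0  # hypothetical

eps = net_income / diluted_shares
free_cash_flow = operating_cash_flow - capital_expenditures

print(f"EPS: ${eps:.2f}")                         # $2.40
print(f"Free cash flow: ${free_cash_flow:.0f}M")  # $110M
```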

AlphaBean's urge to examine each company in a unique way has also changed how I look at financial information. As a writer, editor, and reader of financial news, I am now much more skeptical of articles that draw significant conclusions about a company from generic financial metrics. After all, as an executive, you wouldn't manage Facebook the same way as Tractor Supply.

EPS growth, revenue growth, operating margin, and so forth matter, but they aren't always the most significant pieces of information for every company. Knowing what to look for takes knowledge, experience, and sensitivity to a company's individual circumstances.

So why keep me around?

I analyze companies. AlphaBean can do that.

I write about business. AI does that, too.

What's left for me to do?

A funny event points to an answer.

Earlier this year, a software engineer and impatient Game of Thrones fan decided he'd had enough after waiting six years for the publication of book 6. To speed things along, he created his own author: a neural network that could write in the style of his beloved fantasy series.

In July, he uploaded five chapters to the internet.

The writing is interesting -- and reminiscent of the fish-eyeball-cloak-flaunting knight that Google's neural network produced.

Here's a snippet:

Wow. What happened to his half-buried-mad-on-honey-of-a-dried-brain AI?

It had learned to imitate style, but meaning remained elusive.

And that's the heart of the problem. As Gottlob Frege, the inventor of modern logic, noted, there's a big difference between the things in the world we talk about and the sense, or cognitive significance, of those things. AI struggles to make sense of things.

Humans will continue to surpass machines for some time in areas like appreciating contextual nuance, weaving together disparate ideas, comprehending human motive and intent, integrating an interdisciplinary conception of the world, and general intelligence. AI is simply not at our level yet.

Humans are also far superior at generalizing from a small number of experiences. It's a miraculous ability we have, and no one understands how we do it. Remember that the first version of AlphaGo studied 30 million positions from human games and played more than 30 million practice games against itself. Lee Sedol began serious training at age 8, practicing 12 hours a day. Even at a game an hour, 12-hour days from age 8 through his early 30s add up to roughly 110,000 games -- meaning AlphaGo required hundreds of times as much practice as Lee to achieve a comparable level of skill.

We've also seen that the current trend in AI is not to imitate human understanding of the world, but to accomplish a task -- and, for each AI, to perform a single, specific task. This approach has proven effective, but it leaves AI vulnerable to missing the bigger picture. That bigger picture may not be that significant if you only want to discover which facets of a business are worth looking at, or you only want to predict which companies will succeed. But if you want a why, you need a human being to connect all the dots.

We've seen AlphaGo flub a crucial moment in a game whose environment consists of 361 intersections, two colors of stones, perfect information, and a handful of fixed rules. Human interactions have far more features than either the physical world of robots or the game-board world of AlphaGo.

At The Motley Fool, we try not to just throw data at our readers, but to contextualize, educate, and help people understand concepts. That is precisely what AI fails to do. Correlation is not explanation.

So what is it good for?

That said, AlphaBean or similar tools could make my life and my job easier. Much as AI will transform other jobs, AlphaBean will allow me to focus on the higher-value parts of my life.

For one, AlphaBean could work like the radiologists' computer-aided detection (CAD) systems I described in part 3, only for investors and journalists. It could apply its own triaging layer of pattern recognition to look at information differently than I would, and return with a second opinion.

AlphaBean could also bubble up suggestions for me to look at, like a kind of AI stock screener. It already has the ability to analyze obscure, under-the-radar companies and scoop up ideas that I would have missed.

I'm also refining AlphaBean to the point where it can invest part of my retirement savings. If rigorous testing shows that my program can consistently beat the market and do a better job of investing than I can, I'm happy to let it take over.

Finally, I'm convinced that AlphaBean will teach me new ways of looking at businesses. It has the ability to develop its own investing strategies and to identify characteristics of companies that I had never considered. I'll be watching how AlphaBean learns to examine the world and learning from it.

Understanding the ways AI is best-suited to serve us is the best way to ensure that it does.

For in spite of its astonishing powers, AI is not perfect. Just like each of us, algorithms have their flaws. We all need to realize what's inside the black box if we are to understand our new world of AI and have a say in how that world unfolds.

AI can't go it alone

A few months after AlphaGo defeated Lee Sedol, I confided in a friend who trains AI for DeepMind that I had initially been skeptical of AlphaGo. I had thought the whole endeavor reflected a typical technological hubris: the belief that people were smart enough to program something as capable as a human being. His response? Maybe it's hubris to think humans are so smart that we can't make something smarter than us.

He may be right. Computers are already better than us at many things, and though AI has a long way to go, it's quickly making up ground.

AI's growing capabilities in strategic thinking, robotics, language processing, and learning are leading to new developments in healthcare, manufacturing, retail, the home, and every white-collar job.

But AI also encourages overreliance; it can be opaque, literal-minded, and susceptible to bias; and it can crash unpredictably into erratic, catastrophic dysfunction that offends common sense.

Human ingenuity and adaptability have carried mankind through its relatively brief existence. They've helped us dominate the planet. And they've permitted us to fashion a powerful new technology whose thinking is alien to ours yet is created in our own intelligent image.

Powerful though it may be, AI won't do what we want it to do without our understanding and guidance. If -- and only if -- we're careful with this awesome power can we create a world worthy of our values.

Ilan Moscovitz has no position in any of the stocks mentioned. The Motley Fool owns shares of and recommends Facebook. The Motley Fool recommends Tractor Supply. The Motley Fool has a disclosure policy.