Six technology trends for 2021
15 March 2021
Looking back at our 2020 sector book, where we included seven key technology trends (including ESG becoming a bigger theme) as our predictions for the year, we are happy to see a high hit rate. We have picked six new themes for this year, some of which have evolved from past themes. We again hope for a high hit rate, as that should open up opportunities for new companies championing them to come to market.
#1 – Custom chips are about to explode
There are not many chip experts listed on the LSE (IQE and Nanoco being two exceptions). However, the UK is bursting with private companies that are manufacturing chips or exploring new hardware routes to solve existential problems. Graphcore’s focus is on extending the capabilities of artificial intelligence (AI) through what it calls its intelligence processing unit (IPU), which, according to one of the co-founders of Arm, is the next generation of chip after the CPU and GPU. This business sits within Draper Esprit at the moment, but it continues to raise money at higher and higher valuations.
Then there are companies like Garrison and Deep Secure that are reinventing cybersecurity through hardware, or ‘hardsec’, which reimagines what microchips can do – see our note on that topic. The concept of systems on chips (SoCs) potentially has even more widespread implications and could take the continued phenomenon of miniaturisation much further. By combining previously separate components (processors, memory, GPUs, etc) into a single chip, often as small as a single GPU by itself, internet of things (IoT) devices (eg smartphones, smart meters) become able to do more computation themselves without needing a connection to the cloud. This is called edge computing.
#2 – Edge computing brings greater scale and performance
The explosion of smart devices has meant that we interact with or pass numerous mini computers every day, including our personal consumer tech, our electricity smart meters, retail car parks, cash machines, kiosks, etc. A lot of what those devices do is collect data to send back to a centralised computer network (or the cloud) that then does the calculations to draw out insights, before pushing a decision back to the device. However, as time goes on, the volume of data collected and the speed at which that data needs to be analysed will start to put pressure on the communications network.
A perfect example of this is the potential for applications within smart vehicles, where there is a life-and-death element to analysing potential crash scenarios. Edge computing or edge cloud is a way to solve that issue, and is possible thanks to the continued miniaturisation of technology and chips. Edge computing by itself is the concept of pushing the burden of the analysis back to the device (‘the edge’), so it can make its own decisions based on the data it collects. Edge cloud is one step removed from that, as it is defined as a decentralised and distributed network of computers that are located closer to the edge of the network (ie the devices using it). This brings with it greater scale than traditional cloud computing, but maintains the benefits of security and power.
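The safety-critical point above can be sketched in a few lines of code. This is a minimal illustration, not any vendor’s actual system: the device applies its decision logic locally, so there is no network round-trip in the critical path, and only a compact summary is sent back to the cloud. All names and thresholds here are hypothetical.

```python
# Illustrative sketch of edge computing: decide locally, summarise upstream.
from dataclasses import dataclass


@dataclass
class EdgeDevice:
    brake_threshold_m: float = 5.0  # illustrative safety threshold

    def on_sensor_reading(self, distance_to_obstacle_m: float) -> str:
        # The decision is made on the device itself ('the edge'),
        # with no round-trip to a centralised network.
        if distance_to_obstacle_m < self.brake_threshold_m:
            return "BRAKE"
        return "CONTINUE"

    def summary_for_cloud(self, readings: list[float]) -> dict:
        # Only a compact summary travels back to the cloud,
        # easing pressure on the communications network.
        return {"n": len(readings), "min_m": min(readings)}


device = EdgeDevice()
print(device.on_sensor_reading(3.2))                 # immediate local decision
print(device.summary_for_cloud([12.0, 8.5, 3.2]))    # compact upstream summary
```

The key design point is that the latency-sensitive decision never waits on the network; the cloud still receives enough aggregate data to draw out insights later.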
#3 – Greater potential for sharing data without breaching privacy
One of the biggest features of the cloud (specifically the public cloud) is that it is a neutral secure environment where data from different stakeholders (even competitors) are ostensibly stored next to each other. The core competency of the cloud providers is ensuring the fidelity and security of these data centres – to allow the data owners to get on with their day job. In this environment, we have to assume ‘zero trust’ through encryption.
With all these data, there is potentially an untapped goldmine of insights that could benefit all. For example, we as end consumers want our data held separately and encrypted. However, we may also want to benefit from improvements through knowledge attained from other users – this is called federated learning. It is how Google is improving its predictive keyboard Gboard. The current settings are held centrally and downloaded to a person’s phone. That phone (or edge device) learns how that individual user interacts with it and sends back an encrypted summary, where it is immediately averaged with everyone else’s. The updated setting from this average is sent back to everyone’s phone, and this is done easily due to how the apps have been built: through APIs. Or, in the UK, there is Truata, founded by MasterCard and IBM in 2018. As they put it:
“With Truata, you can unlock the potential of your data without compromising privacy”.
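The federated-learning loop described above can be sketched as a toy round of federated averaging. This is purely illustrative (encryption and secure aggregation are omitted, and all names are hypothetical), but it shows the essential privacy property: raw user data never leaves the device, only a per-device update does.

```python
# Toy sketch of one federated-averaging round (Gboard-style loop).

def local_update(global_setting: float, user_data: list[float]) -> float:
    # Each phone nudges the setting toward its own usage pattern;
    # only this delta leaves the device, never the raw data.
    local_mean = sum(user_data) / len(user_data)
    return local_mean - global_setting


def federated_round(global_setting: float, devices: list[list[float]]) -> float:
    # The server immediately averages the updates from all devices...
    deltas = [local_update(global_setting, data) for data in devices]
    avg_delta = sum(deltas) / len(deltas)
    # ...and the updated setting is sent back to everyone's phone.
    return global_setting + avg_delta


setting = 0.0
devices = [[1.0, 2.0], [3.0, 5.0], [2.0, 2.0]]  # raw data stays on-device
setting = federated_round(setting, devices)
print(round(setting, 2))  # → 2.5
```

A real deployment would additionally encrypt each update so the server can average them without ever reading an individual contribution.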
#4 – The API lets non-tech businesses do tech
Whilst the application programming interface (API) has been around for a while, there is a growing phenomenon that is both allowing existing tech businesses to improve their offering by easily plugging into other companies’ tech, and helping tech-powered businesses to become fully fledged tech players. The API is the concept that a piece of software is built up through small and separate modules that connect seamlessly together, which means pieces of that app can be updated and tested individually before being plugged back in, ensuring that the app as a whole is not disrupted.
However, because these APIs are built this way, they can more easily be plugged into other apps. For example, Salesforce has built its platform so third parties such as dotdigital can create plug-ins that work seamlessly with it. This plug-in model is also the case for Craneware, Blue Prism, and EMIS’ platforms too, each becoming more feature-rich and attractive without the platform owners having to do the work.
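The plug-in pattern described above can be sketched as follows. This is a generic illustration, not Salesforce’s or dotdigital’s actual API: the platform publishes a small, stable interface, and third parties add features against it, so each module can be built and updated independently without disrupting the app as a whole.

```python
# Illustrative plug-in model: a platform exposes an interface (its API)
# and third-party modules plug into it.
from typing import Protocol


class Plugin(Protocol):
    name: str
    def run(self, record: dict) -> dict: ...


class Platform:
    def __init__(self) -> None:
        self.plugins: list[Plugin] = []

    def register(self, plugin: Plugin) -> None:
        # Each plug-in is developed and tested separately,
        # then plugged in without changing the platform itself.
        self.plugins.append(plugin)

    def process(self, record: dict) -> dict:
        for plugin in self.plugins:
            record = plugin.run(record)
        return record


class EmailMarketingPlugin:  # a hypothetical third-party add-on
    name = "email-marketing"

    def run(self, record: dict) -> dict:
        return {**record, "campaign": "welcome"}


platform = Platform()
platform.register(EmailMarketingPlugin())
print(platform.process({"customer": "acme"}))
```

The platform becomes more feature-rich with each registered plug-in while its own code stays untouched, which is exactly the economics the examples above rely on.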
Then you have businesses that are not traditionally technology businesses but are licensing out, or have the potential to license out, their IP. Ocado is the perfect example. A retailer with an excellent tech platform, it has become a tech company by licensing out its platform IP to the likes of Kroger. CMC has the potential to go on that same journey too (it is already doing white and grey labels) and ATG, whilst always a tech platform, has a white label service to enhance the offerings of its individual auction house client brands too.
#5 – The death of the programmer?
One of the topics that is rarely talked about in tech circles, and perhaps for obvious reasons, is the impact on humans within computer programming. AI and computers are becoming so smart that it is foreseeable that at some point in the future they will be able to function without humans at all.
The concept of machine learning has already taken us partially there. This is when a computer algorithm gets smarter as it learns from data that is fed through it. It is then able to determine the next course of action or how to modify itself to get better outcomes next time.
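The learning loop described above can be sketched in miniature. This is a deliberately tiny, hypothetical example: the algorithm is fed data, measures how wrong its last prediction was, and modifies its own parameter to get a better outcome next time, without ever being told the underlying rule.

```python
# Minimal sketch of a machine-learning loop: learn y = w * x from examples.

def train(data: list[tuple[float, float]], lr: float = 0.1, epochs: int = 200) -> float:
    w = 0.0  # the model's single parameter: predict y = w * x
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y      # how wrong was the prediction?
            w -= lr * error * x    # modify itself to do better next time
    return w


# Data generated by the rule y = 2x; the learner recovers w ≈ 2
# purely from the examples it is fed.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 3))  # → 2.0
```

The same feedback loop, scaled up to millions of parameters and vast datasets, is what lets the systems mentioned below improve without explicit reprogramming.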
Deep learning takes it even further, where it uses data to actually write software. A perfect example of machine learning is self-driving cars. Whilst not commonplace today, companies like Waymo have accumulated more than 20 million real-world driving miles and collected a whole lot of data as a result. The cars then learn how to drive better and more safely. Then we have apps like TikTok that use deep learning to serve up the best video recommendations, and TikTok has as a result surpassed the number of Snapchat and Pinterest users combined.
But what has this got to do with the programmer and their potential demise? We are already halfway there with the concept of low-code or no-code programming, where an untrained person can develop apps (or automation solutions) in a ‘drag and drop’ way. This is how dotdigital’s marketing automation is set up, or indeed RM Assessment Master’s assessment creator kit. Then, if we combine smart speakers that can understand what you are asking for, with a deep understanding of the consumer, and growing compute power (thanks to the new chips above), it is not inconceivable for a computer to be able to create an app from scratch based on what a human asks for. One step further: why couldn’t that same computer create an app it thinks humans might want and release it to the app store itself?
#6 – The death of cash?
Around the world, in areas where credit and debit cards are common, we have been used to spending money electronically via those methods for years. When Covid hit, it may not have been a surprise that, in a recent global survey by MasterCard, 46% of respondents had swapped their most frequently used card for one that is contactless, growing to 52% for those under 35 years old. This trend may also be here to stay, as 74% said they would continue using contactless once we exit the pandemic. However, a large proportion of the world’s population does not have bank accounts (hence no debit cards), nor credit cards, and transacts using cash from their wallet. As we move more online, the solution is e-wallets. These are literally wallets in the digital world and, as well as a play to create virtual credit cards (eg Apple Wallet), in countries where cash is more prevalent they have become a place to store digital cash.
In 2014, the most popular payment method online was the credit card, at 30%, with debit cards and bank transfers not far behind at 20% and 11% respectively. Hence bank-led payment methods made up 61% of global e-commerce payments. e‑Wallets were at 22%. Today, that has all changed. Those same bank-led methods make up 43%, with e-wallets, at 45%, making up more than those three combined. This has been helped by digital wallet companies springing up around the world, such as Revolut, Alipay, and Venmo. As well as the large unbanked population mentioned above, the cost to acquire an e-wallet customer is dramatically lower. So, with parts of the world increasingly using contactless to do their offline shopping and other parts of the world that are generally more cash orientated now using digital wallets, what is stopping cash from becoming obsolete?
If you are interested in talking with one of our technology analysts, please contact email@example.com