5 Emerging Technologies Explained by Gartner Experts

Bidirectional brain machine interface, generative artificial intelligence (AI) and DNA computing are a few examples of the technologies highlighted on the Gartner Hype Cycle for Emerging Technologies, 2020. Although each of these may sound like a plotline from the latest Hollywood blockbuster, Gartner experts expect these emerging technologies and their corresponding trends to have a transformational impact on business in the next five to 10 years.

Kasey Panetta, Gartner Senior Content Marketing Manager, interviews Gartner experts to talk through the process of forming the Emerging Technologies Hype Cycle and related technologies.

2-part interview

This interview was conducted during a two-part podcast series. Both podcast episodes are available below; the transcript that follows has been edited for clarity and length.

Episode 1 (15 mins):

  • Brian Burke, Research VP, on the Hype Cycle (00:50)
  • Yefim Natis, Distinguished VP Analyst, on composable enterprises (6:53)
  • Avivah Litan, Distinguished VP Analyst, on authenticated provenance (9:00)

Episode 2 (30 mins):

The Emerging Technologies Hype Cycle Explained – Brian Burke

What is the Gartner Emerging Technologies Hype Cycle, and what makes it different from other Hype Cycles?

The Hype Cycle for Emerging Technologies is unique among Gartner Hype Cycles because we really look at all of the technologies on all of the Hype Cycles. So that’s 1,700 technology profiles. And then we distill that down into a set of 30 or so technology profiles that we believe will be most impactful for organizations over the next five to 10 years.

Read more: 6 Trends on the Gartner Hype Cycle for the Digital Workplace, 2020

And how do you get from 1,700 technologies down to a list of 30?

It takes a couple of months, but we start by looking at all the technology profiles that we’re creating and we create a shortlist of technologies that we believe will be the most impactful. We go from about 1,700 to about 150, and then we have a broader group of analysts who actually vote on those technology profiles. The top 30 are selected during the voting process.

We also have an algorithm that’s applied to the scoring, which basically considers whether a technology is new to all Hype Cycles. If so, that technology will get a few points extra. If the technology existed on any of the previous year’s Hype Cycles, it loses some points.

This is to combat the fact that, in the past, some technologies hung around on the Hype Cycle for years. Smart dust, for example, was a perennial favorite that appeared for six years. The scoring adjustment gives us a fresher view on the Hype Cycle, which is especially important given its limited real estate.
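To make the freshness rule concrete, here is a minimal Python sketch of how such an adjustment could be applied to analyst voting scores. The profile names, base scores and point values are hypothetical; Gartner's actual weights are not public.

```python
# Hypothetical sketch of a "freshness" adjustment to analyst voting scores.
# Profile names, base scores and point values are illustrative, not Gartner's.

def adjust_score(base_score: float, seen_on_previous_hype_cycle: bool,
                 new_bonus: float = 2.0, repeat_penalty: float = 2.0) -> float:
    """Boost brand-new technologies, penalize repeat appearances."""
    if seen_on_previous_hype_cycle:
        return base_score - repeat_penalty
    return base_score + new_bonus

shortlist = [
    {"name": "DNA computing", "votes": 7.1, "repeat": False},
    {"name": "Smart dust", "votes": 7.4, "repeat": True},
]

# Rank the shortlist by adjusted score; freshness pushes new entries up.
ranked = sorted(shortlist, key=lambda t: adjust_score(t["votes"], t["repeat"]),
                reverse=True)
print([t["name"] for t in ranked])  # ['DNA computing', 'Smart dust']
```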

What are this year’s trends?

  • Composite architectures
  • Algorithmic trust
  • Beyond silicon
  • Formative AI
  • Digital me

Composite architectures: Composable enterprises – Yefim Natis

What are composite architectures and why do they matter?

A composite architecture is made up of packaged business capabilities, built on a flexible data fabric. Basically this enables an enterprise to respond really rapidly to changing business needs.

The ultimate benefit of composable thinking, composable architecture and composable enterprise technology is that they unify the organization's resources. Composable enterprises bring business expertise and technology expertise together to reengineer decision making, and to shift the policies and structures of their organizations from a focus on stability to a focus on agility and continuous change.
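As a rough illustration of the idea, here is a minimal Python sketch of packaged business capabilities composed behind a shared interface. The capability names and the order-fulfillment example are hypothetical, not a Gartner reference design.

```python
# Illustrative sketch only: "packaged business capabilities" modeled as small,
# self-contained modules behind one interface, so a workflow can be recomposed
# without rewriting the pieces. Names and capabilities are hypothetical.

from typing import Protocol

class BusinessCapability(Protocol):
    def execute(self, order: dict) -> dict: ...

class CheckInventory:
    def execute(self, order: dict) -> dict:
        order["in_stock"] = True          # stand-in for a real inventory lookup
        return order

class ArrangeCurbsidePickup:
    def execute(self, order: dict) -> dict:
        order["fulfillment"] = "curbside"  # swapped in without touching other steps
        return order

def compose(*capabilities: BusinessCapability):
    def workflow(order: dict) -> dict:
        for capability in capabilities:
            order = capability.execute(order)
        return order
    return workflow

# Recomposing the workflow is a configuration change, not a rewrite.
fulfill_order = compose(CheckInventory(), ArrangeCurbsidePickup())
print(fulfill_order({"id": 42}))
```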

And why is this technology featured on the Hype Cycle?

Every organization today is seeking greater resilience, greater responsiveness to change, greater ability to integrate, and greater involvement of business and of IT together in making strategic, technology and business decisions. Composable enterprise promises to significantly improve each one of these capabilities of a modern enterprise. So it’s no surprise that composable enterprise generates a lot of interest, a lot of hype, promise and investment from vendors and, increasingly, from users as well.

Algorithmic trust: Authenticated provenance – Avivah Litan

What is authenticated provenance?  

Authenticated provenance is part of algorithmic trust. Basically, what it does is authenticate the origin of something. Algorithmic trust applies to the whole life cycle; authenticated provenance asks: How do you know something is real and valid when it is created? You can use many different methods to authenticate provenance.

The first method is humans. You can have regulators go and look at the wheat field and say, 'Yes, this is definitely organic wheat,' but that doesn't scale very well. The second way is to use AI models, training a model that distinguishes organic from nonorganic wheat by looking at the composition, biology or DNA of the wheat itself.

The third way you can tell that something is authentic is through certifying at the point of origin, using some technique that’s relevant for that domain. So let’s take a pharmaceutical, a drug that’s manufactured in a plant. As soon as it’s signed off by the QA process in the factory, that data is locked in, and now you have a record of that pharmaceutical drug provenance that you can track until the time someone takes the drug.
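A minimal sketch of that point-of-origin idea, assuming a simple hash-chained record that is "locked in" at QA sign-off. Real deployments typically rely on blockchain ledgers or PKI signatures, and the field names below are made up for illustration.

```python
# Minimal sketch: each provenance event is chained to the previous one by a
# hash, so later tampering is detectable. Field names and the use of a plain
# hash chain (rather than a ledger or signatures) are illustrative assumptions.

import hashlib, json, time

def add_event(chain: list, data: dict) -> list:
    previous_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"data": data, "timestamp": time.time(), "prev": previous_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify(chain: list) -> bool:
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        prev_ok = record["prev"] == (chain[i - 1]["hash"] if i else "genesis")
        if record["hash"] != expected or not prev_ok:
            return False
    return True

batch = add_event([], {"event": "QA sign-off", "site": "Plant 7", "lot": "A-100"})
batch = add_event(batch, {"event": "shipped", "carrier": "ColdChain Ltd"})
print(verify(batch))  # True until any record in the chain is altered
```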

This feels really relevant to the current state of the world. Is that why it’s featured this year?

The reason this technology is featured now is because it’s so needed in our digital world. You can’t trust anything anymore. And I know that sounds very extreme, but it’s actually true. There’s so much ability to insert fakes and counterfeits into processes, whether it’s manufacturing or content, that we need to be able to trust the source and trust the provenance. There’s also a bigger demand from consumers to know that things are trustworthy, so the need for an authenticated provenance is stronger today than it’s ever been in our history.

Beyond silicon: DNA computing and storage – Nick Heudecker

What is DNA computing, and how does it work?

DNA computing plays into the beyond silicon trend because it introduces a brand-new computing substrate instead of using silicon. It uses molecules and the reactions between those molecules not just to store data, but to give you a new way to process it as well.

Storing data in DNA sounds hopelessly complex, but the technologies are well-established and understood. First, the digital content is compressed and mapped to the four nucleotides in DNA (adenine, thymine, guanine and cytosine, or “ATGC”). Because there are four nucleotides, each nucleotide can represent two digital bits. These nucleotide codes are used to create matching synthetic DNA, which is then replicated and stored in DNA strands. Those strands are then “amplified,” or copied millions of times, to make reading the data easier when material is extracted from its storage container.

When the data needs to be read, the opposite process occurs. The DNA strands are prepared and sequenced back into nucleotide codes, which are then converted back into digital content.
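A minimal sketch of that two-bits-per-nucleotide mapping, ignoring the compression, error correction and biochemical constraints (such as avoiding long runs of one base) a real encoder would add:

```python
# Sketch of the two-bits-per-nucleotide round trip described above.
# The specific bit-to-base assignment is an arbitrary illustrative choice.

BITS_TO_BASE = {"00": "A", "01": "T", "10": "G", "11": "C"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hype")
print(strand)         # 'TAGATCGTTCAATGTT' -- one base per two bits
print(decode(strand)) # b'Hype'
```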

From a resiliency and storage density perspective, nothing beats DNA. Properly stored, DNA can last for at least 500 years. And a gram of DNA can store over 200PB of data.

With digital data represented as DNA, the next step is introducing a processing mechanism to create a full DNA computing environment. While still a highly experimental area of DNA computing, enzymatic processing is gaining prominence.

Enzymatic processing uses enzymes, which are proteins that act as catalysts, to perform a logical operation on a collection of DNA. This mechanism is inspired by how DNA is replicated and error-checked in organisms. Custom-designed enzymes can take the form of “logic gates” that process data and create new DNA strands as output, which can then be read by a DNA sequencer. Recent experiments have used enzymatic processing to perform machine learning over data represented as DNA.
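As a purely logical toy model that ignores all real biochemistry (kinetics, strand displacement, error rates), the gate idea can be sketched as a rule that emits an output strand only when its input strands are present in the pool. The strand sequences below are arbitrary placeholders.

```python
# Toy abstraction of a DNA "logic gate": an output strand is produced only when
# the required input strands are present. This is a logical sketch, not a model
# of real enzymatic chemistry.

def and_gate(pool: set, input_a: str, input_b: str, output: str) -> set:
    """Hypothetical enzyme-like rule: emit `output` if both inputs are present."""
    if input_a in pool and input_b in pool:
        pool = pool | {output}
    return pool

pool = {"ATGCAT", "CGTACG"}                        # two input strands
pool = and_gate(pool, "ATGCAT", "CGTACG", "TTAAGG")
print("TTAAGG" in pool)                            # True: the gate "fired"
```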

Another advantage of DNA is that it's never going to go out of style. We are made from it. Unlike other technologies that might be fads or become incredibly difficult to maintain, DNA is pretty straightforward. And the technologies that synthesize it and the technologies that sequence it are well understood and falling in price every day, making it much more approachable.

How might this be used today? 

You might see DNA computing in any industry that has a massive amount of data. A good example is CERN with the Large Hadron Collider. They collect petabytes of data every year. Storing that in magnetic tape is incredibly expensive. It takes a lot of room and they can only store it for about 10 years before they have to move it to fresh tape. Other use cases include storing national archives, scientific endeavors producing large amounts of data like astronomy, or industries like oil and gas.

But that’s only half the story — you also have to be able to process that data. And this is one of the real advantages of DNA computing. You can have millions of copies of a given dataset, and you can replicate it very cheaply. Once you have that data represented millions of times, you can introduce enzymes into that pool of DNA strands, and using enzymatic reactions, it will do whatever kind of computing you might want to do. Viable DNA processing is several years away, but the possibilities are fascinating.

Where is the technology in terms of market adoption?

DNA computing is at a very early stage. We’ve seen some early investments from large and small technology vendors. A lot of research is happening at universities, but it is very early. I think we’ll see DNA storage as a viable option within three to five years, likely in a cloud infrastructure scenario. And then DNA computing will take longer to develop. I predict that’s going to happen within eight to 10 years.

Formative AI: Generative AI – Svetlana Sicular

What is generative AI?

Generative AI is not a single technology, but a variety of machine learning methods that learn a representation of artifacts from data and use it to generate brand-new, completely original, realistic artifacts. Those artifacts preserve a likeness to the training data, but they don't repeat it. It can produce novel content such as images, video, music, speech, text and even materials, and all of this can be produced in combination. It can improve or alter existing content, and it can create new data elements or data itself.
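As a deliberately tiny illustration of the "learn from examples, then generate something new" loop, here is a character-level Markov chain sketch. Real generative AI relies on deep neural networks such as GANs and transformers, so this only shows the shape of the idea; the training sentence is made up.

```python
# Tiny sketch of generating novel output shaped by training data, using a
# character-level Markov chain rather than the deep networks real generative
# AI relies on. Output resembles the training text without copying it verbatim.

import random
from collections import defaultdict

training_text = "composable enterprise architecture enables continuous change"

# Learn which character tends to follow each two-character context.
model = defaultdict(list)
for i in range(len(training_text) - 2):
    model[training_text[i:i + 2]].append(training_text[i + 2])

random.seed(0)
context = generated = training_text[:2]
for _ in range(40):
    next_char = random.choice(model.get(context, [" "]))
    generated += next_char
    context = generated[-2:]

print(generated)  # a novel character sequence shaped by the training text
```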

What are the downsides of generative AI?

Generative AI has gained a partly negative reputation because of deepfakes. If AI can generate a face, text or video, it could be used to compromise someone for political or blackmail purposes. We've already seen the first case of a generated voice being used to embezzle money: a CEO's voice was synthesized and used to request the quick transfer of a large sum. But we cannot ignore the pluses, such as generative technology being used to predict how conditions like arthritis will develop over the next three years.

Digital me: Bidirectional brain machine interface – Sylvain Fabre

What does a bidirectional brain machine interface do?

Bidirectional brain machine interfaces can turn the human brain into an Internet of Things (IoT) device. The interface records brain activity over time and infers someone's mood or emotional state. We call it bidirectional because you can also write to the brain, just as you would write to a memory device or a computer, sending or removing currents.

One early application is sending currents to change people's moods. In China, for example, experimentation has started on monitoring whether coworkers are becoming angry or agitated. So it's basically reading the mental state of the individual, as well as potentially changing it.

Can you share an instance of bidirectional machine interface in practice?

In terms of applications, early examples in wellness and fitness monitor recorded brain activity. Another example is professional driver safety, with detection of microsleeps. You can monitor employees' stress and wellness. We've also seen early examples of controlling machines for medical applications, for instance people with paralysis using the brain to control an exoskeleton.

There could also be some outcomes that are not positive for the individual. Take antidepressants, which today are used mostly in chemical form: antidepressant stimulation dispensed via a bidirectional brain machine interface could be used to make people more pliable. You could also have addiction issues, where people get accustomed to sending pleasure-inducing pulses via their brain machine interfaces. So there are some dark aspects that need to be monitored.

We looked at investments from venture capitalists, which gives us a sense of what has been prioritized with bidirectional brain machine interfaces. We found nothing about security or privacy, which is a bit of a concern. You have this great potential for positive use cases, together with a nonnegligible risk to personal data and corporate information privacy and security, as well as a risk of physical harm to users. These risks need to be addressed to protect individuals and corporations.

This technology has a very sci-fi feel to it, but just how far out is this reality?

Beyond research in the lab, there are early products that are noninvasive. We think the next step will be more invasive variants, where people might choose to do this on their own for an advantage in sports or at work or in school. That’s where the “bring your own” and shadow aspects of this would be a significant concern for corporate CIOs.

Our own assumption at the moment in terms of planning is that by 2025, employees experimenting with bidirectional brain machine interfaces would cause at least one major corporate data security outage. And we think by 2030, about 5% of employees in North America will use some form of bidirectional brain machine interface.

For example, teachers, nurses or drivers could be monitored for alertness and their ability to stay positive at work, and might be required to opt in to brain wave management, for example to boost alertness or cognition. Again, some of that would be through employees bringing their own devices, and some of it would be corporate. This raises issues of consent, data privacy and security.
