Omnipresent Sensing and Intelligence
Featured Signal of Change: SoC1232, May 2021

Author: Rob Edmonds

Early visions of smart cities and the Internet of Things (IoT) included functionalities that would enable every digital system to talk to every other digital system, somehow creating an almost magical self-managing utopia. In this utopia, every system would work intelligently with every other system to maximize efficiency, and decision makers would be able to see and direct all actions from some beautifully designed dashboard interface.

The reality was quite different from these early visions. Integrating sensor systems and data across myriad systems—even through standardized means such as the use of APIs (application-programming interfaces)—turned out to be too slow, too complex, and, in many cases, too low in value. Smart-city projects were slow to commence. And although IoT devices became fairly widespread, they typically did so within atomic, application-specific systems rather than as part of any kind of interconnected whole.

But today, a situation somewhat like that in the early visions is starting to become possible. The missing ingredient was artificial intelligence to tie all the elements together. In particular, AI is now becoming capable of automatically integrating (or "fusing") diverse sensor-data streams and analyzing the resulting information to aid (or even automate) decision‑making.

Possibly, the original smart-city and IoT visions are now too timid to account adequately for the potential future outcomes. In combination, long‑term, large-scale multisensor fusion; big data; and AI can enable systems capable of sensing and understanding everything all at once—a kind of omnipresent intelligence. In many ways, developments are trending toward bringing the online world of big‑data analysis—in which systems analyze real‑time big‑data streams to serve up advertising or purchase stocks, for example—to the physical world.

The clearest current example of this emerging phenomenon is the so-called multi-intelligence-fusion (sometimes automated-fusion) system. Multi-intelligence-fusion systems automate the process of integrating multiple intelligence sources (both real-time sensor data and stored information such as terror watch lists) for decision-making and are seeing increasing use by law-enforcement agencies, security agencies, and the military. Genetec's (Montreal, Canada) Citigraf is one example of a multi-intelligence-fusion system; other such systems are available from multiple companies, including Cisco Systems (San Jose, California), Microsoft (Redmond, Washington), BAE Systems (London and Farnborough, England), and Palantir Technologies (Denver, Colorado). Like similar systems, the smart-city-focused Citigraf centralizes real-time sensor data such as those from gunshot detectors, license-plate readers, and public and private security cameras. The system can easily integrate new sources of data and use AI to match patterns of criminal activity with suspects it predicts participated in the crime, giving police specific leads to investigate. In the military, similar systems might integrate multiple sources of surveillance and intelligence data to identify targets.
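
To make the mechanics concrete, the sketch below shows, in simplified Python, how such a system might correlate time-stamped events from disparate sensor feeds and cross-reference them against stored records to produce ranked leads for a human analyst. The event types, the watch list, the five-minute correlation window, and the scoring rule are all assumptions invented for illustration; they do not describe Citigraf or any vendor's actual design.

    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical illustration of multi-intelligence fusion: correlate events
    # from several sensor feeds by place and time, then cross-reference the
    # result against stored intelligence to produce prioritized leads.

    @dataclass
    class Event:
        source: str        # e.g., "gunshot_detector", "plate_reader", "camera"
        timestamp: datetime
        location: str      # coarse geographic cell; a real system would use coordinates
        detail: str        # e.g., a license plate or an object label

    # Simulated real-time events (all values invented for this example).
    events = [
        Event("gunshot_detector", datetime(2021, 5, 1, 22, 14), "cell-7", "2 shots"),
        Event("camera",           datetime(2021, 5, 1, 22, 15), "cell-7", "grey sedan"),
        Event("plate_reader",     datetime(2021, 5, 1, 22, 16), "cell-7", "ABC-1234"),
        Event("plate_reader",     datetime(2021, 5, 1, 22, 40), "cell-2", "XYZ-9876"),
    ]

    # Stored (non-real-time) intelligence standing in for a watch list.
    watch_list = {"ABC-1234": "vehicle linked to a prior incident"}

    def fuse(events, window=timedelta(minutes=5)):
        """Group events that share a location and fall within a short time
        window of one another (a crude stand-in for spatiotemporal correlation)."""
        by_location = defaultdict(list)
        for ev in events:
            by_location[ev.location].append(ev)
        clusters = []
        for evs in by_location.values():
            evs.sort(key=lambda e: e.timestamp)
            current = [evs[0]]
            for ev in evs[1:]:
                if ev.timestamp - current[-1].timestamp <= window:
                    current.append(ev)
                else:
                    clusters.append(current)
                    current = [ev]
            clusters.append(current)
        return clusters

    def generate_leads(clusters, watch_list):
        """Score each cluster: corroboration by more independent sources and
        any watch-list match raise its priority."""
        leads = []
        for cluster in clusters:
            sources = {ev.source for ev in cluster}
            matches = [watch_list[ev.detail] for ev in cluster if ev.detail in watch_list]
            score = len(sources) + len(matches)
            if score >= 2:
                leads.append((score, cluster, matches))
        return sorted(leads, key=lambda lead: lead[0], reverse=True)

    for score, cluster, matches in generate_leads(fuse(events), watch_list):
        summary = ", ".join(f"{ev.source}: {ev.detail}" for ev in cluster)
        print(f"priority {score} lead ({cluster[0].location}): {summary}; notes: {matches}")

In this toy run, the gunshot, camera, and license-plate events in the same cell within a few minutes collapse into a single corroborated incident, and the watch-list hit raises its priority; the lone plate read elsewhere never surfaces as a lead. A production system would of course use richer data, learned models, and human review rather than a hand-written scoring rule.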

Security and military applications of multi-intelligence-fusion technologies are controversial. Like other AI systems, multi-intelligence-fusion systems can have problems with bias and accuracy. And similar technologies have aided in some governments' efforts to suppress protestors or target specific minority populations. In addition, some stakeholders may be concerned that multi-intelligence fusion is a stepping-stone to fully automated policing or weapons systems. After all, the decision makers who read the collated intelligence represent a kind of latency. If an integrated multi-intelligence-fusion system says that a crime is happening, why wait for a human to press the "dispatch" button (or, in a military scenario, the "fire" button)? At least so far, however, such concerns appear to have done little to quell adoption.

Although the implications of the use of technologies that enable omnipresent sensing and intelligence in military, security, and law-enforcement applications are highly significant, such technologies are also likely to have a significant impact in business and other areas of society. Amazon.com's (Seattle, Washington) checkout-free supermarkets—which let shoppers select goods and leave without scanning items or passing through a checkout—make extensive use of sensor fusion and analysis. As is the case in other systems, Amazon's system derives its strength not from the capabilities of any one sensor but from the fusion and processing of data from many fairly simple sensors (a toy illustration follows this paragraph). Supply-chain management is also moving toward technologies that centralize real-time data. For example, emerging multiparty-network platforms centralize supply-chain-management data from multiple sources and participants in a single platform. Manufacturing is another area ripe for technologies that enable sensor fusion and analysis. Indeed, the April 2021 Pattern Intelligent and Adaptive Manufacturing mentions that the government of Baden-Württemberg, Germany, is funding a collaborative project among five research institutes to develop AI systems that will analyze real-time plant and factory data to enable process and product optimization. Predictive-maintenance systems might also benefit from multi-intelligence fusion, as might the related field of insurance. And some experts believe that self-driving cars could benefit from the bird's-eye view that fusing roadside and mobile sensor data with data from car-integrated sensors could create.
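
The sketch below illustrates, in hypothetical Python, why fusing many fairly simple sensors can outperform any single capable sensor: several noisy sensors each guess which item a shopper picked up, and a confidence-weighted vote settles the question. The sensor names, items, and voting rule are assumptions made up for this example, not a description of Amazon's system.

    from collections import Counter

    # Hypothetical example: several simple, noisy sensors each make an
    # independent guess about which product a shopper picked up, with a
    # confidence value attached to the guess.
    sensor_guesses = [
        ("shelf_weight_A", "oat milk 1L",    0.55),
        ("shelf_weight_B", "almond milk 1L", 0.40),
        ("camera_3",       "oat milk 1L",    0.60),
        ("camera_7",       "oat milk 1L",    0.50),
    ]

    def fuse_guesses(guesses):
        """Combine confidence-weighted votes per item; because independent
        errors rarely agree, the fused decision is usually more reliable than
        the single most confident sensor."""
        votes = Counter()
        for _sensor, item, confidence in guesses:
            votes[item] += confidence
        item, score = votes.most_common(1)[0]
        return item, score / sum(votes.values())

    item, share = fuse_guesses(sensor_guesses)
    print(f"fused decision: {item} (vote share {share:.0%})")  # -> oat milk 1L (80%)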

Multi-intelligence fusion may yield widespread and unpredictable changes. Although military commanders originally envisioned using multi-intelligence-fusion technology simply to automate the work of teams of humans who complete intelligence-fusion tasks, the technology's long-term potential is to enable omnipresent sensing and intelligence with perception and inference capabilities far beyond those of even organized teams of humans. Single-camera AI machine-vision systems have already demonstrated that they can spot details that humans cannot (for example, signs of rare genetic disorders). What capabilities will evolve when such systems fuse and analyze data from many disparate sensors simultaneously?

The technologies necessary to support multi-intelligence fusion are far from settled; however, progress is rapid, investment is strong, and adoption is moving apace. Given potentially exponential progress in computing power, the falling costs of sensors, and the increasing ubiquity of digital technologies, computers that understand everything, all at once, right now, and all the time could eventually emerge and radically transform commerce, society, and security. Very likely, innovations in business models and applications—and perhaps entire new industries—will result. Quite possibly, new digital monopolies will emerge, or existing ones will gain strength. Like digital technologies that have come before, multi-intelligence-fusion technologies have the potential to confer significant power on organizations that own the platforms that centralize and analyze data. Of course, governments could (and arguably already do) harness this power to enforce laws and policies both in useful and in undesirable ways.