Since the beginning of telecommunication, there has been a mismatch between how the service is billed and the value the customer gets from it.
Billing by the minute does not in any way reflect the actual value of a call. The distant dream has instead been to measure the actual information transferred, and to bill based on the value of that information.
We can all agree that we are not there yet. We are still, depending on the channel, paying for voice calls per minute, and the only change is that data transfer is now charged based on transferred data volumes.
But data is not information, and it does not by itself contain any value. Data becomes information when it is (successfully) interpreted, and its value materializes only when it serves a purpose.
The telecom industry has not yet succeeded in transforming its business models, but when looking into the crystal ball we can see that the future calls for this (r)evolution to happen.
An increasing number of regulations aim to be technology-neutral and therefore govern context, the actual information, rather than technical solutions and data.
This is particularly true for GDPR, with its new definition of personal data as any piece of information that, in combination with any other information in the processor's possession or publicly available, can be used to identify someone. This definition in effect mirrors the human ability to associate pieces of information and use them to identify an individual.
This regulatory approach of governing context can also be found in many other upcoming regulations, covering areas such as insider trading, information concerning national security, health information and anti-money-laundering. On top of this, internal policies also target context, to protect both IP assets and trade secrets.
Together, they create a new challenge, where existing technology falls short of providing solutions that are up to the task.
When the context within the content is what must be controlled and managed, regular expressions and the traditional techniques used in data-mining solutions, orchestration and DLPs are simply too blunt to provide the relevant granularity and performance.
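A minimal sketch can illustrate why pattern matching falls short here. The patterns and example sentences below are hypothetical, chosen only to show that a regex can flag explicitly formatted identifiers (an email address, a phone number) while remaining blind to a sentence that identifies a person purely through context:

```python
import re

# Illustrative patterns for explicitly formatted identifiers
# (hypothetical examples, not a production rule set).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def regex_scan(text: str) -> list:
    """Return the names of the patterns that match the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

explicit = "Contact Anna at anna@example.com or +46 70 123 4567."
contextual = "The only female engineer on the third floor approved it."

print(regex_scan(explicit))    # both patterns fire
print(regex_scan(contextual))  # no match, yet the sentence can identify a person
```

The second sentence contains no token a pattern could anchor on, but under GDPR's contextual definition it may still constitute personal data, which is exactly the gap pattern-based tooling leaves open.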
Since data becomes information when it is processed and interpreted by a human, we are left with only two choices.
Either we assign human resources to the tedious work of interpreting all our data, or we apply technology that successfully imitates those human cognitive abilities. These artificial abilities must also evolve continuously, just as humans and language do.
At Aigine, we are going for the second approach, ensuring continuous training of the artificial abilities through collaborative cognitive learning.
If you want to know more about how to approach contextual challenges, do not miss us on stage at IBM Think in Stockholm, 3rd of October.
Not registered yet? Get your free ticket at:
Not in Stockholm on the 3rd of October, but still want to know more about a modern approach to contextual challenges? Drop us a line: