It goes without saying that today we live in an information-rich and interconnected world. Many of us own a smartphone, tablet, and computer—all of which are used daily to consume vast quantities of data. The fact that at any time the sum of human knowledge is available in the palm of our hands is often taken for granted.

The ability to communicate new and disparate issues across the globe instantaneously, the growth of content available through the internet, and our daily interactions with that content—every digital inquiry, web search, and social media post—illustrate that we now live in the age of “big data”. Data availability has been growing at an exponential pace, with estimates suggesting that over 90% of all data ever created has been generated in the past two years.[1]

Massive sets of data can provide a lot of value; they can be analyzed to reveal patterns and trends that provide new insights. However, too much of a good thing can also be problematic: the majority of the information in big data sets can be classified as irrelevant “noise”. With such huge amounts of information, analysts are increasingly finding it difficult to access the relevant data that matters. One solution to cutting through this noise and extracting meaningful signals lies with the new computational paradigm known as “cognitive computing”.

Cognitive computing 

Cognitive computing refers to self-learning systems that can adapt, learn, and think—in essence imitating how the human brain works. The benefit of these new computational systems is their ability to bring together the strengths of traditional computing (number crunching, calculation) with a more human, cognitive approach (analysis), allowing them to unearth and ‘understand’ relationships in massive data sets that no single human brain has the capacity to grasp.

Unlike traditional computing systems, which are programmed by humans to execute logical sequences of steps, cognitive systems learn and draw inferences from their interactions with both data and humans. They are, in essence, able to program themselves to achieve their objectives, addressing complex problems by establishing context and by aggregating, analyzing, and interpreting huge amounts of data. Several key technological developments underlie cognitive computing systems, including:

Natural language processing

Natural language processing (NLP) enables computers to derive meaning from natural language just as a human would, allowing for the ingestion and analysis of unstructured data. Unstructured data is typically text-heavy semantic content that is not organized in a pre-defined manner, preventing it from being effectively incorporated within analytics generated by traditional computing systems.
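As a minimal sketch of the idea (not a description of any particular vendor's pipeline), the hypothetical `extract_signals` function below tokenizes a piece of unstructured text, drops common stopwords, and counts matches against a small, invented ESG keyword lexicon. The categories and keywords are illustrative assumptions only.

```python
import re
from collections import Counter

# Toy ESG keyword lexicon -- purely illustrative, not a real taxonomy.
ESG_LEXICON = {
    "environment": {"emissions", "carbon", "spill", "renewable", "waste"},
    "social": {"safety", "diversity", "labor", "community", "privacy"},
    "governance": {"board", "audit", "bribery", "compensation", "disclosure"},
}

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "its", "has", "was"}

def extract_signals(text: str) -> Counter:
    """Count keyword hits per ESG category in a piece of unstructured text."""
    tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS]
    hits = Counter()
    for category, keywords in ESG_LEXICON.items():
        hits[category] = sum(1 for t in tokens if t in keywords)
    return hits

print(extract_signals("The company reported a chemical spill and announced a board audit."))
# -> Counter({'governance': 2, 'environment': 1, 'social': 0})
```

Real NLP systems go far beyond keyword matching (parsing, disambiguation, entity recognition), but the sketch shows the basic move: turning free-form text into structured, countable signals.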

Artificial intelligence and machine learning

Strictly speaking, artificial intelligence (AI) refers to the ability of a system to understand contexts and adapt approaches to maximize success rates. From a computational perspective, this is done by applying statistical learning techniques to identify boundaries and delineate patterns within data. Machine learning refers to the autonomous repetition of these techniques along with regression analysis to improve results over time, producing increasingly accurate outcomes.
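To illustrate how repeated exposure to data can improve results over time, here is a minimal, self-contained sketch of a single perceptron learning a decision boundary on toy data. The data, learning rate, and hidden pattern are assumptions made purely for illustration.

```python
import random

random.seed(0)

# Toy data: the hidden pattern the system must learn is "label = 1 when x1 + x2 > 1".
def make_example():
    x1, x2 = random.random(), random.random()
    return x1, x2, 1 if x1 + x2 > 1 else 0

data = [make_example() for _ in range(200)]

# A single perceptron: weights start at zero and are nudged after every
# misclassified example, so accuracy improves with each repeated pass.
w1 = w2 = b = 0.0
LEARNING_RATE = 0.1

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

for epoch in range(10):
    mistakes = 0
    for x1, x2, label in data:
        error = label - predict(x1, x2)
        if error != 0:
            w1 += LEARNING_RATE * error * x1
            w2 += LEARNING_RATE * error * x2
            b += LEARNING_RATE * error
            mistakes += 1
    print(f"pass {epoch + 1}: {mistakes} misclassified examples")
```

The number of misclassified examples falls as the passes repeat, which is the essence of the "improving results over time" described above, albeit on a vastly simpler scale than production machine-learning systems.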

Big data analytics

Big data refers to structured (numbers and tables) and unstructured (semantic text) data sets that are too large or complex for traditional computing systems to process. Big data encompasses both the huge volume of data now available as well as the interactions that take place throughout these data sets. The systematic computational analysis of this data can reveal new relationships and insights.
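A small sketch of the streaming style of analysis this implies: rather than loading an entire data set into memory, records are consumed one at a time and reduced to compact per-company counts. The companies, categories, and stream size below are invented for illustration.

```python
from collections import defaultdict
from itertools import chain, repeat

# A tiny sample, lazily repeated to stand in for a stream of records far too
# large to hold in memory all at once.
SAMPLE = [
    ("AcmeCorp", "environment"),
    ("AcmeCorp", "governance"),
    ("Globex", "social"),
    ("Globex", "environment"),
]

def record_stream(n_repeats=250_000):
    return chain.from_iterable(repeat(SAMPLE, n_repeats))

# Consume the stream one record at a time, keeping only compact per-company
# counts instead of the raw data itself.
counts = defaultdict(lambda: defaultdict(int))
for company, category in record_stream():
    counts[company][category] += 1

for company, by_category in sorted(counts.items()):
    print(company, dict(by_category))
```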

Cognitive Computing + Big Data = Better ESG/Sustainability Analytics

[Chart 1: Cognitive computing + big data = better ESG/sustainability analytics (Bartel & Khurgel)]

The insights provided by cognitive computing systems have already begun to disrupt industries: from healthcare, finance, and pharmaceuticals, to consumer-focused sectors such as retail and travel, to software that makes recommendations based on online behavior. Analyzing big data allows for the discovery of new meaningful relationships within data sets: for example, Google data (the analysis of web searches, social media posts, and location) has been used in conjunction with Centers for Disease Control data to improve influenza predictions.[2]

New analytics, new insights, new possibilities

TruValue Labs sits at the intersection of two of the most important trends in contemporary society: the continuing development of information technology and the growing mainstream importance of sustainability issues. Our mission is to extract meaningful ESG and sustainability signals from big data in order to better understand the sustainability performance of publicly listed companies.

By using technology to aggregate data, extract meaningful signals, and analyze content, an entirely new class of “real-time” sustainability analytics is now available. These enhanced analytics give our users a more precise, higher-resolution understanding of corporate sustainability performance: they can examine the sustainability data and events that affect performance at a granular level, instead of depending on annual or irregular reports. This allows investors and decision-makers to understand the performance implications of events for individual sustainability categories, specific companies, and entire sectors with a precision that was not previously possible.
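As a rough sketch of what event-level, “real-time” analytics could look like (a simplified illustration, not TruValue Labs' actual methodology), the hypothetical `rolling_score` function below turns timestamped, scored events into an exponentially time-decayed score per sustainability category; the events, categories, and half-life are assumptions.

```python
import math
from datetime import date

# Hypothetical timestamped events: (date, category, impact score in [-1, 1]).
EVENTS = [
    (date(2016, 1, 10), "environment", -0.8),  # e.g. a reported spill
    (date(2016, 2, 2),  "governance",  +0.4),  # e.g. improved disclosure
    (date(2016, 3, 15), "environment", +0.3),  # e.g. remediation progress
]

def rolling_score(events, as_of, half_life_days=90):
    """Exponentially time-decayed average score per category as of a date."""
    totals, weights = {}, {}
    for when, category, impact in events:
        age = (as_of - when).days
        if age < 0:
            continue  # ignore events that occur after the valuation date
        w = math.exp(-math.log(2) * age / half_life_days)
        totals[category] = totals.get(category, 0.0) + w * impact
        weights[category] = weights.get(category, 0.0) + w
    return {c: totals[c] / weights[c] for c in totals}

print(rolling_score(EVENTS, date(2016, 4, 1)))
```

Because every event carries its own timestamp and score, the output can be recomputed as of any date, which is what makes day-by-day rather than annual analysis possible.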

Traditional sustainability data depends on the opinions of analysts and is therefore subject to ambiguity, subjective perspectives, and phenomena such as confirmation bias, all of which prevent a completely objective and consistent approach. Technology-derived analytics, by contrast, are based on a consistent set of rules. Because these rules determine how the analytics are generated, they can also be altered: for the first time, the variety of opinions regarding materiality, weighting, and the overall effect of issues on particular performance metrics can be taken into account, allowing analytics to be generated according to personal investment beliefs and strategies.
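Building on the previous sketch, the hypothetical example below shows how the same category-level scores could be re-weighted under different, user-defined materiality profiles; the profiles and scores are invented for illustration.

```python
# Category-level scores, e.g. produced by a routine like rolling_score above.
CATEGORY_SCORES = {"environment": -0.25, "social": 0.10, "governance": 0.40}

# Two hypothetical investors who weight materiality differently.
WEIGHT_PROFILES = {
    "climate_focused":    {"environment": 0.6, "social": 0.2, "governance": 0.2},
    "governance_focused": {"environment": 0.2, "social": 0.2, "governance": 0.6},
}

def weighted_score(scores, weights):
    """Combine category scores under a normalized set of materiality weights."""
    total = sum(weights.values())
    return sum(scores.get(c, 0.0) * w / total for c, w in weights.items())

for name, weights in WEIGHT_PROFILES.items():
    print(name, round(weighted_score(CATEGORY_SCORES, weights), 3))
# climate_focused -0.05, governance_focused 0.21
```

The underlying data and rules stay the same; only the weights change, which is how a single set of technology-derived analytics can serve investors with very different views on materiality.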

We are only beginning to see what cognitive computing and the enhanced analytics derived from big data will be able to offer. In these exciting times, we are witnessing a genuine paradigm shift, as new technological possibilities are leveraged by investors in a world that increasingly recognizes the material importance of ESG and sustainability issues.

 

[1] http://www.ibm.com/smarterplanet/us/en/business_analytics/article/it_business_intelligence.html
[2] http://www.nature.com/articles/srep08154


Hendrik Bartel is the CEO and co-founder of TruValue Labs, Inc., a San Francisco-based technology startup leveraging advances in natural language processing, cognitive computing, and machine learning to provide actionable sustainability insights.

Isaac Khurgel is Head of Marketing and Communications for TruValue Labs, Inc. He was previously based in London with the Principles for Responsible Investment.