Summary: A culture of data sharing between governments, tech giants, start-ups and consumers is a critical element of creating artificial-intelligence applications. Regulatory intervention needs to be carefully balanced so that it doesn’t stifle innovation.
Original author and publication date: Kalliopi Spyridaki – November 17, 2020
Futurizonte Editor’s Note: Data sharing is a problem. Let’s be honest: the big technology companies know more about each of us than we know about ourselves.
From the article:
The Covid-19 pandemic has caused a sea-change in public attitudes to data sharing. Prior to the outbreak, governments were not the most trusted entities with which individuals were willing to share their data. Yet, facing an unprecedented health crisis and wanting to play their part in the national response, citizens across the world have willingly given their information to government test, trace and isolate programmes.
Success in fighting the pandemic hasn’t been evenly distributed, for many reasons. However, some of the most successful countries in Europe and Asia Pacific have something in common – a commitment to data-protection standards. The General Data Protection Regulation (GDPR) may have encouraged a culture of confidence in data sharing for Europe as well as its trading partners.
This offers a useful perspective to organisations working in the domain of artificial intelligence (AI). The willingness to share data with or among businesses, as well as governments, depends on trust and the expectation of reward. To part with their data, individuals, businesses and governments need to expect something valuable in return as well as to be reassured that the data will be protected.
This trend towards increased data sharing can be encouraged by a coherent framework for trusted data use and responsible AI. Such a framework can address data quality, transparency and accountability as well as people’s expectation of control over their data.
Europe is moving towards a data-agile economy. The region is seeking to address many of the weaknesses that have limited the competitiveness of European companies, most notably the lack of access to large quantities of high-quality data. This is an integral asset in the race to develop powerful AI solutions that can greatly enhance business insight and efficiency.
Under the recently adopted European Data Strategy, the European Union will propose a Data Act in 2021 that aims to foster business-to-government data sharing for the public interest as well as to support business-to-business data sharing. The aspiration is to create a genuine single market for data and common data pools that organisations can tap for growth and innovation.
Core to the strategy remains a continued respect for citizens’ rights and freedoms. Consistent with Europe’s stance on the protection of fundamental rights, including privacy, the new data ecosystem is unlikely to mandate data sharing as a general rule. The new requirements will need to take into account the existing body of consumer rights and are likely to enhance organisations’ responsibility for keeping customer data secure.
In parallel, the EU will propose legislation in early 2021 that aims to drive a horizontal, risk-based and precautionary approach to the development and use of AI. While the specifics are still taking shape, the legislation will advance transparency, accountability and consumer protection. This is likely to be achieved by requiring organisations to adhere to robust AI governance and data-quality requirements.
Just as citizens’ digital trust has contributed to the success of many test and trace programmes, the upcoming legislation will likely help entrench this trend in the realm of AI. Notably, European legislation is likely to have implications across the world. Much like the GDPR, which prompted other nations to enact similar data-protection laws, new data and AI legislation may create a global ripple. As the UK develops its own strategies for data sharing and AI development, lawmakers will surely be keeping a close eye on Europe.
An active and inclusive culture of data sharing between governments, tech giants, start-ups and consumers is critical to creating tomorrow’s AI applications. Digital trust is the necessary foundation to this end. In their management of data and development of AI, organisations should strive to build confidence with consumers beyond merely complying with applicable standards.