Figure: Cover of the first BaFin Perspectives of 2019. © BaFin / www.freepik.com

Publication: 21.03.2019 | Topic: Fintechs

Big data meets artificial intelligence – results of the consultation on BaFin’s report

BaFin received numerous responses regarding its report “Big data meets artificial intelligence – Challenges and implications for the supervision and regulation of financial services”. Industry associations, individual institutions, national and international authorities and representatives from academia contributed to the consultation. This article provides an overview of the responses and includes an interview with BaFin President Felix Hufeld, offering an initial analysis of the results.

Introduction

On 16 July 2018, BaFin launched a public consultation1 on its report “Big data meets artificial intelligence – Challenges and implications for the supervision and regulation of financial services”2 (BDAI report for short3). The consultation focused particularly on the strategic key questions in the chapter “Supervisory and regulatory implications”4 regarding the future development of supervision and regulation at a time when the use of big data and artificial intelligence is increasing.

BaFin received 31 responses in total. Both industry associations and individual institutions took part in the consultation. Comments were also submitted by national and international authorities as well as members of academia. All of the respondents welcomed the open and broad discussions that BaFin initiated with the report. It was also pointed out that it would be important to coordinate any BDAI-related adjustments to supervision and regulation at international level. Moreover, it was stressed that, even in the age of BDAI, German financial service providers should not be compared with bigtech companies – for instance, in terms of how customer data is handled.

This article provides a summary of the responses to the BDAI report and is – like the report itself – divided into three main topics: financial stability and market supervision, the supervision of institutions, and collective consumer protection. Each of these topics and subtopics is introduced with a text box summarising the views stated in the BDAI report.5 These are followed by an anonymised summary of the responses received in the course of the consultation.6

Due to the diversity of the respondents (institutions, associations, authorities and academics), it is not possible to state representative majorities for the responses, as these would have to be weighted (e.g. an individual institution compared with an association). For the sake of simplicity, all responses are given the same weight in this article.

As this article is intended to provide an overview, it does not cover all the details contained in the responses. It should also be noted that BaFin has not yet evaluated the statements presented in this article. Neither does the article answer the question of what feedback is to be reflected in supervisory practice and regulation. This will take some time to determine.

In an interview about the consultation, BaFin President Felix Hufeld offers an initial analysis of the results.

Financial stability and market supervision

Consultation results on the emergence of new companies and business models

In a nutshell: Key points from the BDAI report

New providers are emerging in the financial sector as a result of BDAI-driven innovations. This could intensify the disaggregation of the value chain, particularly if existing businesses cooperate with new specialised providers. BDAI is a phenomenon that could also give rise to new types of business models and market participants that are not yet adequately covered by the current regulatory framework. It is vital that such cases are identified and that the range of providers and companies to be supervised is expanded accordingly.7

General market observations

Many of the respondents stated that the emergence of new value chains can be observed across all sectors. In particular, the companies that are able to bring data processing to a new level using BDAI are entering the market, according to these respondents. Some of the consultation participants assume that only those that establish themselves at the digital customer interface will be able to secure their position on the market in the long run. The overall trend that can be observed is that, by making intelligent use of data, providers of search engines, social networks or online (comparison) platforms are advancing into areas that used to be the sole preserve of specialised and often regulated providers. According to these respondents, this also includes data that can be obtained on the basis of the Second Payment Services Directive (PSD 2).

Level playing field: financial market regulation

The majority of the respondents consider the existing technology-neutral and principle-based financial market regulatory framework to be adequate in principle – also in relation to financial stability issues. They also see a risk that premature regulatory reactions to new technologies could undermine technology neutrality and lead to a more rules-based approach. There were calls for the removal of regulatory obstacles; existing paper copy requirements were cited as an example.

However, the respondents also highlighted that restricting the application of existing regulations to institutions and insurance companies could lead to distortions of competition. In particular, there is a perceived discrepancy between the supervisory and regulatory assessment of traditional business models and new business models which rest on the analysis of financial and alternative data for their own purposes or for third parties.8 It is assumed that new market participants will deliberately attempt to avoid regulation in order to drive innovation. In this context, it was also proposed to examine the extent to which new sales channels, such as targeted marketing measures of platform providers, are to be subject to adviser liability. Some also criticised the fact that new players, such as fintech companies, do not contribute to the funding of supervisory authorities, which leads to distortions of competition.

Although the anticipated additional competition with fintech and bigtech companies could reduce the profits of established providers on the financial market in the short term, the tools currently available to assess the solvency situation of supervised financial services providers are deemed adequate. However, some of the respondents pointed out that there could be increased pressure on margins particularly if bigtech companies gain entry into the financial market with free and/or cross-subsidised financial services.

Level playing field: competition policy and supervision

A number of respondents indicated that, overall, digitalisation exacerbates the risk of “winner-takes-all” market structures, which could emerge due to monopolistic structures in the area of data access. It was also noted that, for insurers, access to vehicle data, for instance, is important for offering telematics rates. According to these respondents, effective competition policy and supervision are more than ever a prerequisite for a viable financial market – especially in the age of BDAI.

Open markets could also mitigate systemic risk – particularly as far as data access is concerned – as they would result in a wider variety of market participants. It was also pointed out that sufficient competition is important for effective pricing as well.9 A few of the respondents referred to PSD 2 as a positive example for enabling free access to data. Offering interfaces to the data of bigtech companies in a manner similar to that under PSD 2 is another idea that was put forward.

All in all, the respondents were in favour of closer cooperation between competition authorities and financial supervisory authorities.

Maintaining a level playing field – categorisation of data as a legally protected right

One of the respondents asked how, beyond data protection law, data is to be categorised as an individual legally protected right/legally protected good. It was also noted that a convincing answer needs to be found for this legal-political and highly complex issue in order to be able to adequately address, in particular, the issues surrounding business models that are primarily data-based. An expedient approach that was suggested would be to draw parallels with intellectual property law.

Systemic importance and interconnectedness in the age of BDAI

In a nutshell: Key points from the BDAI report

The systemic importance of providers with data-driven business models could rapidly grow due to their scalability and reach. However, systemic importance may also arise if central data or platform providers make identical or very similar structures for processes or algorithms available to a wide range of market participants. Systemic importance could also emerge as a structure from the interaction between various market players. This raises the question of whether and how the banking- and insurance-based concept of systemic importance needs to be redefined in order to keep pace with new business models and market structures.

Redefining and addressing systemic importance: divided opinions

The respondents hold divided opinions on the potential broadening of the notion of systemic importance. On the one hand, it was pointed out that, at the present time, with many technologies still in the developmental stages, it would be premature or would even impede innovation to lay down new definitions and criteria. In particular, these respondents argue that it is not clear whether BDAI actually increases systemic risk or whether the growing number of market players might even reduce dependence on banks and insurance companies. In any case, it must be clearly demonstrable on empirical grounds that certain risks may arise in a way that would actually jeopardise the existence of institutions.

On the other hand, many respondents consider that it is important to redefine systemic importance in relation to BDAI. Some even consider that this could be an argument for developing a Basel V framework. It was also noted that categories such as interconnectedness and complexity are already covered by the current definition of systemic importance. But as the level of interconnectedness and complexity is increasing as a result of digitalisation, more emphasis should be placed on this, making the case for an international discussion on the definition and measurement of interconnectedness within the context of the Basel Committee on Banking Supervision. There were also calls not to restrict the measurement of systemic risk to traditional financial services providers but to include fintech and bigtech companies that have not been supervised to date. In particular, business models that are based on the monetisation of data should also be taken into account.

Addressing providers that have not been regulated to date based on their interconnectedness with the financial market

Some of the responses indicated that, in many cases, institutions that have not been regulated to date could become essential for the functioning of the entire industry. Cloud service providers, telecommunications providers, automobile manufacturers (see above: “telematics rates”), algorithm and code providers, and providers of data or evaluations such as scorings and ratings were given as examples. With their expertise, these companies could erect monopoly-like structures vis-à-vis consumers and other market players. Another issue perceived as problematic is that procyclical effects may arise if many financial services providers rely on the data and analyses of a single provider, for instance.10

According to these respondents, expanding the notion of critical infrastructures could be one way to address the fact that institutions that have so far not been supervised may become systemically important. In doing so, the same minimum technical requirements could be enforced for both regulated banks and data, platform and algorithm providers that have not been supervised to date. There were also calls for the definition of systemic importance to be broadened in order to include functional and process-related factors. This would make it easier to identify key companies, and financial services providers would be able to take this into account.

Ideas to adapt outsourcing systems in the case of fragmented value chains

It was argued that, as value chains are becoming increasingly fragmented, current outsourcing systems, where financial institutions are the only point of contact, may no longer be adequate. It was also pointed out that infrastructure and data providers are, in many cases, direct competitors on the financial market as well. One proposed option would be to use a type of digital signature, especially for products that are created in a fragmented value creation process. Every company involved in the value creation process would have to be named when using such a signature. Overall, these respondents deem that it is necessary to consider whether supervisors should shift their focus from individual institutions to sustaining entire value chains. Smart contracts with a back-up party that would take on any element within the value chain if a company cannot provide it were suggested as a measure to sustain value chains.

Capital buffers, on the other hand, were considered less suitable as a mitigating measure to absorb shocks from outside the financial sector, such as the failure of an IT service provider. Measures aimed at minimising the likelihood of such occurrences are deemed more appropriate. In this context, these include minimum technical standards, targeted scenario analyses and – the subject of the next section – using technology to limit undesirable developments. Volume limits could be another appropriate measure.

Using technology to limit undesirable developments

In a nutshell: Key points from the BDAI report

Closely interconnected systems are susceptible to the rapid and uncontrolled spread of disruptions – not only on trading venues but also elsewhere. This raises the question of whether the technological safeguards which are already widespread on trading venues would also be necessary and could be usefully applied outside of trading venues in the age of BDAI. For example, decoupling mechanisms for data streams could be considered, as the significance of data supplies is increasing considerably as a result of BDAI.

Technological safeguards may be necessary but only if there is a risk of significant losses

Technological safeguards aimed at limiting cascade effects are considered to be potentially necessary. Addressing cascade effects is thought to be difficult, though, if parts of the market within the relevant cascade are not subject to supervision or regulation. The risk of herd and cascade effects is perceived to be greater in the banking sector and on the capital market than in the insurance sector, where key processes – such as risk and benefits assessments – are only initiated by the customers themselves. The respondents suggest examining where new developments, such as real-time payments, are actually needed in the real economy and justify the risk of undesirable developments. The respondents pointed out that, generally speaking, a deceleration with specified minimum time frames and reversibility can limit undesirable developments.

Technological safeguards are, for instance, thought to be worth considering if the level of algorithmic differentiation is too low when using BDAI. Cyber network interfaces could also be sought to reduce the risk of significant losses. Setting up redundant emergency systems could be considered in this context. The diversification of data providers is suggested as another (non-technological) measure. Interfering with data streams is only thought to be justified in situations of extreme risk. Under no circumstances should tools such as circuit breakers be triggered by self-learning algorithms, even with more precise calibration, as market participants would no longer be able to predict when the halt will occur.
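
The caution about self-learning triggers can be illustrated with a deliberately simple, rule-based circuit breaker: because the threshold and observation window are fixed and published, market participants can predict exactly when a halt would occur. The following Python sketch is purely illustrative; the threshold, window size and price series are assumptions and not taken from the consultation responses.

```python
# A minimal sketch of a deterministic, rule-based circuit breaker with a fixed,
# published threshold (illustrative values only).
from collections import deque

PRICE_DROP_THRESHOLD = 0.10   # halt if the price falls more than 10 % ...
WINDOW_SIZE = 100             # ... relative to the peak of the last 100 observations

recent_prices = deque(maxlen=WINDOW_SIZE)

def should_halt(new_price: float) -> bool:
    """Return True if the fixed drawdown threshold is breached."""
    recent_prices.append(new_price)
    peak = max(recent_prices)
    return (peak - new_price) / peak > PRICE_DROP_THRESHOLD

for price in [100.0, 99.0, 95.0, 88.0]:
    print(price, should_halt(price))
```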

Technology from a supervisory perspective: maintaining transparency and monitoring new structural relationships

In a nutshell: Key points from the BDAI report

Greater interconnectedness could result in greater complexity in the market, for instance, if a market participant’s formerly internal processes are distributed among several market participants, including those that have not been supervised to date. The changing structures of dynamic markets and the resulting risks must therefore be monitored, evaluated and addressed from a regulatory and supervisory point of view.

Humans and not computers must bear responsibility – also in the area of financial supervision

The respondents clearly expressed the expectation that humans must continue to bear ultimate responsibility – also in the area of financial supervision – and that this responsibility cannot be passed on to computers. However, the majority of the respondents consider the increased use of technology in supervisory activities to be absolutely necessary. To detect systemic importance, supervisors could also use more external data and take this into account using methods such as network analyses. Interesting external data could be, for instance, consumer expenditure, household saving behaviour or open source data, which in turn could also be of interest for detecting fraud.
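
The network analyses mentioned by the respondents can be sketched in a few lines: a supervisor models dependencies between supervised entities and external providers as a graph and uses centrality measures to spot nodes on which many others rely. The following Python sketch uses the networkx library; the entity names and dependency edges are invented for illustration.

```python
# A minimal sketch of a dependency-network analysis for detecting potential
# systemic importance (entity names and edges are illustrative assumptions).
import networkx as nx

# A directed edge A -> B means "A depends on B" (e.g. for cloud, data or scoring services).
dependencies = [
    ("Bank_A", "CloudProvider_X"),
    ("Bank_B", "CloudProvider_X"),
    ("Insurer_C", "CloudProvider_X"),
    ("Bank_A", "ScoringProvider_Y"),
    ("Insurer_C", "ScoringProvider_Y"),
    ("Bank_B", "DataVendor_Z"),
]
graph = nx.DiGraph(dependencies)

# Providers that many supervised entities depend on stand out via in-degree centrality.
centrality = nx.in_degree_centrality(graph)
for node, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{node}: {score:.2f}")
```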

Real-time access to data – using API in supervision

According to the responses, analyses that are based on data that is gathered once or on a monthly/quarterly basis will increasingly lose relevance as the market becomes ever more dynamic. Supervisors should therefore seek to maintain real-time access to specific corporate data using application programming interfaces (APIs) and use this to conduct ongoing analyses, such as cash flow analyses, in order to identify new risks and business models at an early stage. Setting up APIs is also considered to be useful for a smooth exchange of data between different (supervisory) authorities. Making use of the interplay between APIs and BDAI would also allow supervisors to monitor outsourcing more effectively. This would mean that the relationships between the institutions involved could be taken into account in supervisory analyses automatically.
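
A minimal sketch of what such API-based access could look like is shown below. The endpoint, authentication scheme and field names are hypothetical; in practice they would have to be defined jointly by supervisors and institutions.

```python
# Hypothetical sketch of a supervisory client pulling cash-flow data from a
# reporting API; endpoint, token handling and field names are assumptions.
import requests

REPORTING_API = "https://reporting.example-institution.de/api/v1"  # hypothetical URL

def fetch_cash_flows(token: str, since: str) -> list:
    """Pull cash-flow records reported since a given ISO timestamp."""
    response = requests.get(
        f"{REPORTING_API}/cash-flows",
        headers={"Authorization": f"Bearer {token}"},
        params={"since": since},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["records"]  # assumed response structure

def flag_large_outflows(records: list, threshold: float = 1e7) -> list:
    """Ongoing analysis: flag unusually large outflows as soon as they are reported."""
    return [r for r in records if r["amount"] < -threshold]
```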

Supervision of institutions

BDAI governance

In a nutshell: Key points from the BDAI report

BDAI will create additional opportunities for automating standard market processes. When designing (partially) automated processes, it is important to ensure that they are embedded in an effective, appropriate and proper business organisation. Responsibility remains with the senior management of the supervised institution, even in the case of automated processes. Appropriate documentation is required to ensure this. It may also be necessary to extend established governance concepts, such as the “four eyes” principle, and to apply these to automated processes.

Existing concepts are adequate to a large extent – but interpretation guidelines would be welcome

Most of the respondents consider that the existing supervisory framework is generally adequate, even if the use of BDAI is increasing. BDAI applications do not necessarily involve a higher level of risk and could be regarded as analogous to existing process changes. As a result, they could also be embedded in existing business organisations and the relevant regulations.11 It was also pointed out that current requirements allow supervisors to perform spot checks on processes in a risk-oriented manner. There are also strict requirements concerning senior management that are already incorporated into law. It was also stressed that the use of BDAI is regularly covered by outsourcing requirements, but that responsibility and liability for artificial intelligence cannot be outsourced. If outsourcing to regtech companies were to increase, this could weaken risk culture and expertise within supervised institutions.

The respondents pointed out that some aspects still need to be clarified in relation to how existing provisions are to be applied to BDAI and that supervisors should consider specifying requirements in interpretation guidelines. Irrespective of the complexity of the underlying processes, supervisory requirements must not be weakened under any circumstances.

Ideas to extend existing governance concepts

In contrast to the opinion above, a number of respondents explicitly stated that there is a need to revise existing regulations and supervisory practice in the medium term. In the case of algorithms, for instance, the requirement to implement and cross-check different sub-systems could be examined. In the case of self-learning systems, this would generally mean the use of different learning processes and possibly different training data as well. Outlier mining could also help to detect potentially erroneous decisions. There is also the question of whether current regulations (adequately) cover the validation of BDAI algorithms.
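
Both ideas – cross-checking independently built sub-systems and outlier mining over model inputs – can be sketched briefly. The example below uses scikit-learn on synthetic data; the features, sample sizes and review rule are illustrative assumptions, not a proposal from the consultation.

```python
# Illustrative sketch on synthetic data: (1) two sub-systems trained with
# different learning processes and different training samples are cross-checked,
# and (2) outlier mining flags inputs far from the training distribution.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                   # synthetic applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic "accept" label

# (1) Cross-check: different learners trained on different random subsets
idx_a, idx_b = rng.permutation(1000)[:600], rng.permutation(1000)[:600]
model_a = LogisticRegression().fit(X[idx_a], y[idx_a])
model_b = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[idx_b], y[idx_b])

X_new = rng.normal(size=(50, 4))
disagreement = model_a.predict(X_new) != model_b.predict(X_new)

# (2) Outlier mining: decisions on atypical inputs are more likely to be erroneous
detector = IsolationForest(random_state=0).fit(X)
is_outlier = detector.predict(X_new) == -1

for_review = disagreement | is_outlier
print(f"{for_review.sum()} of {len(X_new)} decisions flagged for manual review")
```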

Changes to existing business organisations can be observed in places due to the increased use of BDAI, respondents said. They stated that it is important that institutions change the culture of how mistakes are dealt with to take into account lifelong learning and the associated ongoing changes in algorithms and models. In addition, data quality management (DQM), which has so far been a purely administrative task, is becoming an analytical and conceptual task that will play a key role in companies. The appointment of algorithm officers – comparable to data protection officers in certain respects – and the establishment of a data ethics commission in companies were also suggested. However, it is essential to ensure that an unclear allocation of responsibility is avoided.

Traceability and explainability of algorithms and decisions

In a nutshell: Key points from the BDAI report

It is the responsibility of supervised institutions to ensure the explainability and traceability of BDAI-based decisions. At least some insight can be gained into how models work and the reasons behind decisions, even in the case of highly complex models, and there is no need to categorise models as black boxes. For this reason, supervisory authorities will not accept any models presented as an unexplainable black box. Due to the complexity of the applications, it should be considered whether process results, in addition to documentation requirements, should also be examined in the future.

Two levels of explainability: the model and the individual decision

The respondents first indicated that there are two levels of explainability and traceability to be considered: the general model that is used for decision-making on the one hand and the (individual) decision that is reached on the other. They added that the explainability of a model based on machine-learning necessarily depends on the complexity of the processes and data used. However, the reasons for supervisory intervention or regulatory adjustments should not be based exclusively on the complexity of a model or the use of BDAI. Rather, they should always take into account the individual application and the anticipated risk situation.

Traceability is considered particularly important when dealing with customers and is thus important in individual cases since customers often ask for the reasons behind a decision. Only the reasons behind decision-making allow those concerned to correct inaccurate data and decisions based thereon. As in an audit trail, the individual steps in the decision-making process must be traceable at all times. At the very least, the traceability of decisions should always be ensured so that they can be used for forensic purposes.

How to create explainability and traceability

Ensuring the explainability and traceability of algorithms, models and processes is in the institutions’ own interest, according to a number of respondents. The many (self-)governance measures that exist, such as product oversight and governance requirements (POG) or the internal control system (ICS), are to be noted here. In practice, the model-finding and calibration process often includes backtesting and, in the following application, warning and alarm signals that ultimately result in manual and/or human intervention. Many of the respondents stressed that there must be ways to perform (manual) corrections and general revisions. There must also be processes to shut down BDAI applications that are erroneous or to be discontinued.

One way to ensure traceability is to run existing models and those based on BDAI in parallel. In doing so, it is possible to understand which influencing factors exist. Complex models can also be approximated using simple models, so that individual decisions can be explained locally using the simple model. In this context, defining a minimum validity for the approximation is key.
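
A minimal sketch of such a local approximation, using scikit-learn on synthetic data, is shown below. The gradient-boosting model stands in for an arbitrary complex model, and the R² threshold of 0.9 is an illustrative choice for the “minimum validity” of the approximation.

```python
# Illustrative sketch: explain one decision of a complex model by fitting a
# simple linear surrogate in the neighbourhood of that case (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.3 * X[:, 2]

complex_model = GradientBoostingRegressor(random_state=0).fit(X, y)  # stands in for a BDAI model
x0 = X[0]                                                            # the individual decision to explain

# Sample points around x0 and fit a simple surrogate to the complex model's outputs there.
neighbourhood = x0 + rng.normal(scale=0.3, size=(500, 5))
surrogate = LinearRegression().fit(neighbourhood, complex_model.predict(neighbourhood))

# Minimum validity of the approximation, measured as local agreement (R²).
validity = r2_score(complex_model.predict(neighbourhood), surrogate.predict(neighbourhood))
if validity >= 0.9:
    # The surrogate's coefficients indicate the locally dominant influencing factors.
    print("local influencing factors:", dict(enumerate(surrogate.coef_.round(3))))
else:
    print(f"surrogate not valid enough locally (R2 = {validity:.2f})")
```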

Diverging opinion: explainability and traceability as unreasonable restrictions

A number of respondents argued the opposite, stating that it is difficult or impossible to trace the decision-making process of an algorithm in detail due to the nature of BDAI processes. In particular, highly complex processes, such as deep learning, can only be explained with great difficulty. They feel that imposing algorithm explainability as a requirement would create unreasonable restrictions. Due to the complexity of models, supervisors should focus on the validation of results. However, these respondents doubt the usefulness of test scenarios for testing the algorithms of institutions since the inclusion of predefined scenarios entails the risk of overfitting in such scenarios.

In particular, the respondents find it unrealistic to require that every customer profile – i.e. individual decision – is checked. Contrary to the opinion expressed in the BDAI report, these respondents stated that there is already a limit to explainability due to the complexity of data alone. What is important and technically possible, in their opinion, is to provide evidence on the forecast quality and stability of the models used. The respondents are opposed to the idea of a supervisory approval process for BDAI applications in this context, as this would stifle innovation at institutions. In their view, financial enterprises would be unreasonably disadvantaged compared to other (unregulated) institutions, such as bigtech companies.

Ideas for the supervision of BDAI models

The respondents also raised the question of how BDAI models can be examined by supervisors. If BDAI is used to a significant extent in critical business processes, extended requirements may be needed, e.g. for code review processes, simulation and penetration tests and reviewing sample profiles. The respondents indicated that requirements for the documentation, explainability and traceability of BDAI applications should be specified, using best practice guidelines, for instance. Effective supervision must also go beyond the examination of documentation and individual cases.

According to these respondents, supervisors must be able to understand complex processes, such as deep learning, and must themselves test the applications of institutions with a risk-sensitive approach. Requirements should be extended but only depending on how critical the relevant process is – also in relation to consumers. For financial supervisors, the use of algorithm-based decision-making systems offers the opportunity but also the obligation to check that algorithm-based decision-making processes and thus large parts of business activities comply with supervisory requirements and civil law.

Internal models subject to supervisory approval

In a nutshell: Key points from the BDAI report

Any use of BDAI in models that are subject to supervisory approval would also have to be approved by supervisory authorities on a case-by-case basis. Beyond the individual case, it is to be examined whether existing legal (minimum) requirements for the data used and model transparency are sufficient in relation to BDAI or whether additional requirements would be necessary. In the case of dynamic BDAI models, it is necessary to examine which general modifications constitute a model change in the supervisory sense, which banks or insurers, e.g. in line with the model change guidelines for insurance companies, would have to report to supervisors and may have to secure approval for.

BDAI not yet used in models that are subject to supervisory approval

Firstly, the responses indicated that BDAI has so far not been used for internal models that are subject to supervisory approval. Since the stability of internal models is crucial, BDAI models are rather expected to be used for support applications. There are doubts as to whether BDAI models could be approved when notifiable model changes are automated due to a change in data.

Diverging opinions on existing regulation

There are differing views on the applicability of BDAI in models subject to approval, particularly in relation to the model change process.

On the one hand, it was argued that BDAI applications are generally suitable for use in models subject to supervisory approval. These respondents said that using BDAI applications offers a significant opportunity to improve risk modelling. They do not expect that the definition of a model change has to be altered and do not consider that additional regulations need to be created either. They focus on the argument that the requirement to report model changes depends primarily on the impact on risk-weighted assets rather than on the (BDAI) technology used. These respondents do not consider it necessary to extend existing (minimum) requirements for the explainability of models and data either. They acknowledge, however, that this may need to be reviewed if BDAI methods turn out to have a significant impact on the parameters that are relevant in this context. Under certain circumstances, they say it may be necessary to extend regulatory technical standards.

However, other respondents argued that the approval process should be changed. For instance, institutions should be given the possibility to change the models they use more flexibly without having to go through lengthy approval processes for model changes beforehand. Supervisors should allow for quicker “validation feedback loops”. These respondents consider that it would also be desirable not to consider model changes based on the reassessment of parameters as model changes if a model-inherent process determines the need for the reassessment. Some respondents also argued that, already today, a change in parameters does not constitute a model change, at least in the insurance sector. In addition, it was proposed that regular, automated and highly standardised monitoring on a BDAI basis be established for the supervision of BDAI models in the hope that reports on such activities would simplify communication with supervisors. Increasing the use of validation tools, such as stability analyses, sensitivity analyses and backtesting, could be useful, too.

Finally, it was also noted that it would be necessary to examine whether the extensive use of BDAI leads to (undesired) capital relief or a circumvention of supervisory requirements due to a significant decrease in risk-weighted assets.

Fighting financial crime and conduct violations

In a nutshell: Key points from the BDAI report

BDAI can improve the detection rate of anomalies and patterns, and thus increase the efficiency and effectiveness of compliance processes, such as money laundering detection or fraud prevention. If BDAI technology were to make the detection of money laundering far more effective, criminals could potentially turn to companies that are less advanced in this area. It is therefore necessary to monitor whether this will materialise. The results of algorithms must be sufficiently clear to ensure that they can be checked by supervisory authorities and used by the competent authorities (e.g. law enforcement agencies). Minimum requirements may need to be developed for this purpose from a regulatory and supervisory point of view.

Standards already set out in regulations – but it may be appropriate to extend requirements

The vast majority of the respondents consider it disproportionate and inappropriate to impose the use of BDAI in order to fight fraud and prevent money laundering. However, many of the respondents were in favour of introducing general minimum standards, also beyond BDAI applications. Such standards could increase the effectiveness of processes to identify financial crime and breaches of conduct and improve the detection and prevention of money laundering. Since BDAI is developing quickly and individually, these standards should be principle-based. Documentation, particularly in the case of sanction and intervention measures, should be sufficiently clear to ensure that humans can examine it, for instance. It was also suggested that supervisors assess the effectiveness of anti-money laundering systems using, for instance, standardised audit records to be reviewed at least on a yearly basis – as in the case of penetration testing.

Other respondents, however, indicated that there are enough standards set out in the existing principle-based regulations, particularly with the implementation of the 5th EU Anti-Money Laundering Directive. In addition, special BDAI standards are not considered absolutely necessary as they can be derived directly from academic standards set by the machine learning community.

Feedback loops are indispensable for model calibration

How effectively BDAI models can be used to prevent money laundering depends to a large extent on whether they were calibrated with reliable data. The respondents stated that feedback on model predictions is therefore crucial for the calibration and improvement of models. They added that recent feedback from the German Financial Intelligence Unit (Zentralstelle für Finanztransaktionsuntersuchungen – FIU) or law enforcement agencies would be desirable. In this context, the compatibility of data systems is to be considered with regard to data standards and, where applicable, technical implementation using an API. This, it was stated, is the only way to exchange data between companies and investigating authorities without conversion.

Advantages of pooling solutions, particularly for companies that are less familiar with BDAI

Cross-institutional data pooling was also suggested, e.g. to support smaller institutions with a smaller database. Pooling expertise and using joint metrics could also be considered in this context. Being part of a network offering access to an information pool in which information that is relevant to money laundering could be stored and downloaded is viewed as offering the advantage that members would have a holistic view of customer risk – even without BDAI. Some respondents also expressed the wish that supervisors and legislators support such know-your-customer platforms (KYC platforms).

Another suggestion was that supervisors should support institutions whose money laundering detection systems are less advanced in terms of BDAI. However, institutions must also be willing to invest more in new technologies, and supervisors should make their expectations clear to institutions.

Multi-dimensional approach to combating money laundering

Some of the respondents argued in favour of a multi-dimensional approach for combating money laundering, such as a combination of BDAI analyses with peer group comparisons, public data and KYC scores. In addition, some expressed the wish to use the findings made in the detection of money laundering for other purposes – such as credit risk ratings – as well. However, the respondents pointed out that the prohibition of arbitrariness must be observed when using BDAI. Characteristics must not be linked via BDAI arbitrarily – otherwise individuals would be wrongfully prosecuted. Direct or indirect discrimination, as described under Article 3 (3) of the Basic Law (Grundgesetz – GG), should not occur either.

Handling information security risks

Principle-based regulation for information security risks can also be applied to BDAI

It was also pointed out that information security risks could increase with the use of BDAI technologies – especially due to the growing level of interconnectedness and the resulting increase in the number of weak points. It should be noted, though, that this involves security aspects similar to those to be observed in other software solutions. For this reason, the respondents do not see a need for any (extensive) regulatory adjustments. It was noted that numerous requirements, such as the Supervisory Requirements for IT in Financial Institutions (Bankaufsichtliche Anforderungen an die IT – BAIT), the Supervisory Requirements for IT in Insurance Undertakings (Versicherungsaufsichtliche Anforderungen an die IT – VAIT), the MaRisk or certain ISO standards, are already taken into account for the use of BDAI. But there are still calls for certain requirements, such as the BAIT, to be further specified in relation to BDAI. If further amendments turn out to be necessary in the future, they should be principle-based as far as possible and be supplemented with rules-based provisions only where required. Overall, the respondents believe that it is risky to set standards as information technology is developing very quickly.

The respondents all confirmed that it is possible to use BDAI to tackle or detect cyber attacks.

Encryption is no panacea

In order to minimise the fallout from security incidents, data should, in principle, be anonymised or pseudonymised as far as possible. However, it was emphasised that the idea of eliminating BDAI-related risks to data protection with cryptographic processes is unrealistic. Encryption systems may give a false sense of security. It was noted that, from a technical point of view, it is not to be expected that machine learning can be successfully applied to encrypted data outside special applications. One respondent suggested a general ban on data trading for the purpose of data monetisation in order to minimise information security risks and ensure data protection.

Collective consumer protection

Taking advantage of individual customers’ willingness and ability to pay

In a nutshell: Key points from the BDAI report

BDAI could make it easier for providers to customise products, services and the corresponding prices at a low cost (and on a large scale). This would allow companies to set higher prices on a case-by-case basis without incurring higher costs. Individualisation could make it more difficult to compare prices overall. BDAI could also allow providers to take advantage of customers’ (situational) willingness and ability to pay if they have this information. In particular, BDAI could help to link financial data and behavioural data to other (sources of) data and make it easier to estimate how much customers are willing to pay. In theory, this data could promote the extraction of consumer surplus,12 also outside the regulated financial sector. A BDAI-driven trend towards only a few key customer interfaces ("winner-takes-all" market structures) could further promote such developments thanks to enhanced data access and evaluation synergies. For this reason, consumers need to be made more aware of how their (financial) data may be used and the significance it has.

Distinction between risk-adequate price differentiation and taking advantage of individuals’ willingness to pay

In the financial sector, a distinction is to be made between pricing based on an individual’s willingness to pay and price differentiation, which is generally necessary due to the individual risk costs incurred. According to the respondents, such price differentiation should also be possible in the future to ensure risk-adequate pricing. It was also noted that situational insurance, for instance, is regularly calculated on the basis of (a multi-annual or) an annual premium and is more expensive than long-term insurance as it is often based on a period involving higher risks.

Competition and long-standing business relationships are arguments against the extraction of consumer surplus on the financial market

Most of the respondents stated that using BDAI would not make it easier to extract consumer surplus. In particular, fierce (price) competition for customers was given as a counter-argument. Effective competition policies and competition supervision were attributed a key role in preventing unilateral pricing. A market failure, such as the formation of monopolies and oligopolies or pricing agreements and agreements that restrain competition, is seen as a prerequisite for taking full advantage of the consumer surplus.

It was also indicated that regulations such as the German Regulation on Price Indications (Preisangabenverordnung – PAngV) are applicable in the banking sector. For these respondents, pricing that is alleged to be arbitrary or based solely on the individual is therefore hardly imaginable on a large scale in the customer business. For financial services providers, fairness towards customers and keeping their trust are of vital importance for customer relationships. Fair pricing is thus in their own interest and is often already incorporated in company codes of conduct. Other respondents argued that consumers are themselves responsible if, by giving their data, their willingness or ability to pay is taken advantage of. The proposition that financial and behavioural data is widely used outside the core business was countered by the argument that no such usage can be observed at present.

BDAI could increase transparency for product alternatives

Another argument cited against the extraction of consumer surplus is the fact that BDAI makes it easier for customers to gain an overview of prices and the product alternatives that are available. Strong market dynamics fuelled by BDAI could even lead to a drop in prices, it was argued. In particular, customers with a low willingness to pay may also have new consumption options thanks to BDAI applications.

Greater market concentration could give rise to new risks in the future

Some of the respondents disagreed, noting that it may already be possible to partially extract consumer surplus as there are only a few online platforms. If data-based price differentiation methods were to be more widely used, this could exacerbate asymmetries of information between consumers and institutions – to the consumer’s disadvantage. If this were to result in new risks, the initiation of supervisory or regulatory measures would need to be considered. In addition, common requirements should be laid down on the obligation to provide information on the data that would be used for pricing. Respondents also noted that suitable regulations should be in place for the use of data stemming from the Internet of Things.

Consumers need to be given more information

Some of the respondents explicitly stressed that it is important and necessary to inform and educate consumers. Consumers must be able to reach informed decisions in relation to their personal data and financial products. Consumer protection organisations, which are responsible for informing customers of any changes in market supply and potential pitfalls in the selection of products available, have a key role to play in this context, it was asserted. According to the respondents, it would also be beneficial to enhance the data sovereignty of customers.

Differentiation and potential discrimination

In a nutshell: Key points from the BDAI report

Using BDAI can increase the risk of discrimination: algorithms could be based on features for which differentiation is prohibited by law. Approximations are still possible, even if unauthorised features are not used, as there is a lot of other data available allowing conclusions to be drawn. There is also the risk that differentiations are made on the basis of false assumptions or false conclusions drawn by algorithms, and that consumers may in fact be discriminated against – even if this is unintentional. When programming algorithms and evaluating results, providers must take special care to ensure that individual consumers are not discriminated against. This raises the question as to what monitoring and transparency mechanisms could be useful in this context.

Risk of indirect discrimination is increasing – evidence of freedom from discrimination is essential

The risk of indirect discrimination (as described above) could increase with the use of BDAI, according to some of the respondents. Providers should therefore provide proof that their systems run in a non-discriminatory way and that the variables used are relevant. There were calls for algorithms to be checked regularly – by third parties and within institutions. One respondent even called for a state monitoring system for all BDAI algorithms, including those outside the financial sector. In addition, potential discrimination must already be looked into during the development of models, using methods such as bias correction, for instance. It was noted that, overall, enforcing a ban on discrimination in the context of BDAI would be a technically difficult task for which a completely satisfactory solution has yet to be found. Retrospective spot checks of individual decisions are, according to the respondents, the only feasible approach at present.

Many anti-discrimination rules are established in the insurance sector

In the insurance sector, there are already a number of sector-specific provisions that must be observed in addition to general requirements such as the German General Equal Treatment Act (Allgemeines Gleichbehandlungsgesetz – AGG) and the German Genetic Diagnosis Act (Gendiagnostikgesetz – GenDG). Reference is made in particular to the principle of equal treatment in life insurance (Gleichbehandlungsgrundsatz für Lebensversicherungen, section 138 (2) of the German Insurance Supervision Act (Versicherungsaufsichtsgesetz – VAG)) and to the equal treatment of members of mutual insurance associations (Versicherungsvereine auf Gegenseitigkeit – VVaG, section 177 (1) of the VAG). Furthermore, it was claimed that supervisors have an extensive set of tools that are considered adequate for dealing with violations of consumer protection law. Any additional microregulation or micromanagement of product features and pricing models would stifle innovation. Illegal discrimination is easier to prevent in the insurance sector than in other industries, respondents stated, since every characteristic, with the exception of gender, can only be a differentiating factor if it is risk-relevant.

Definition of discrimination and freedom from discrimination

In cases where a characteristic has only two variants (e.g. smoker/non-smoker), freedom from discrimination is deemed to exist if both groups are just as likely to enter into a contract (on the same terms). In other words, two consumers with the same risk-relevant features should pay the same price. Respondents stressed that, in the case of characteristics that have not been gathered, it is not possible to guarantee that such characteristics have no influence over the result of a model decision. According to the respondents, if a characteristic is known, the model decision could in almost all cases be revised to prevent discrimination. Hence, some of the consultation participants deem that a data set containing precisely the characteristic to be ruled out is necessary in order to rule out any form of illegal discrimination.
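
The two-variant criterion described above can be checked with very little code. The following pandas sketch compares, on invented data, how likely each group is to obtain a contract and whether equal risk profiles lead to equal prices; the column names and the tolerance are illustrative assumptions.

```python
# Illustrative check of the two-variant criterion on invented decision data.
import pandas as pd

decisions = pd.DataFrame({
    "characteristic": ["A", "A", "B", "B", "A", "B"],      # the two-variant feature
    "risk_class":     ["low", "high", "low", "high", "low", "low"],
    "contract":       [1, 0, 1, 0, 1, 1],                  # 1 = contract concluded
    "price":          [100.0, None, 100.0, None, 100.0, 101.0],
})

# (1) Are both groups just as likely to enter into a contract?
print(decisions.groupby("characteristic")["contract"].mean())

# (2) Do consumers with the same risk-relevant features pay the same price?
prices = (
    decisions[decisions["contract"] == 1]
    .groupby(["risk_class", "characteristic"])["price"].mean()
    .unstack("characteristic")
)
print(prices)

tolerance = 1e-6
equal_terms = ((prices.max(axis=1) - prices.min(axis=1)).fillna(0) <= tolerance).all()
print("same terms for the same risk profile:", bool(equal_terms))
```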

Broader social debate appears to be necessary

The respondents see a need for a social debate in order to distinguish between desirable differentiation and unacceptable discrimination. This could promote the acceptance of new technologies. However, it should be noted that the greater differentiation made possible by BDAI could counteract phenomena such as moral hazard and adverse selection. If these opportunities are not used, the result could be unfair distribution or conditions in relation to risk-relevant information. As regards differentiation, it was noted that highly segmented rates were not successful in the past. Some considered that refined segmentation jeopardises the basic principle of insurance coverage.

Access to financial products

In a nutshell: Key points from the BDAI report

Linking different types of data (sources) could be a particularly promising way to improve risk assessments in the financial sector. In future, customers could therefore be confronted with situations where they have to give access to more (new) data (sources) – such as social media accounts. It is therefore possible that future data requirements will go far beyond current requirements and that the price of a financial service will depend on whether this data is made available. In addition, BDAI selection mechanisms could inordinately hamper access for individual consumers to certain financial services. The situation can be particularly precarious if consumers are disadvantaged by having access to a narrower range of products but are unaware that this is due to their personal data. This raises the question of how access to (affordable) financial services can be maintained if customers cannot or do not want to grant access to (new) sources of data to a significant extent.

In this set of topics, the respondents focused on insurance products. However, many of the arguments that were given are generally applicable to financial services.

Data is essential for risk assessments

Most of the respondents pointed out that the provision of data is essential for risk assessments in the financial sector (e.g. creditworthiness assessments). For instance, the basic principle of private insurance, they stated, is that premiums are oriented towards insured risk since private insurance – as opposed to social insurance – is based on the idea that only the risk of random fluctuation is distributed between the community of policyholders. Whether and on what terms a customer can obtain (private) insurance – and, generally speaking, a financial service – therefore depends on the individual risk.13 It was also stressed that customers who disclose less risk-relevant data or information have a different risk profile. This means that premium rate conditions differ depending on how much relevant data is available.

Determining risk-relevant data is key

It is proposed that legislators, supervisors or industry (through self-commitment) create binding definitions to determine what data is actually necessary for appropriate differentiation. Government authorities could then ensure that consumers who only consent to their data being processed to the extent required are not refused access to financial services. Respondents noted, though, that it is unclear what risk category such consumers would fall into, i.e. whether freedom from discrimination can be deemed to exist if these customers are not denied access to a service altogether, but still obtain, where applicable, services on less favourable terms. Respondents also warned that the possibility of the price for a financial product dropping if more data is provided could undermine the right to informational self-determination.

Competition governs access to financial products – also for customers who provide data only to the extent required

It was also noted that competition governs access to financial products – also for customers who provide data only to the extent required. The respondents observe a growing trend among providers highlighting contracts that can be entered into conveniently using data only to the extent required as a selling point. The amount of data and the form in which data has to be provided in order to enter into a contract is already a competition factor, according to the respondents. It was also noted that the requirements for data minimisation (Datensparsamkeit) are generally at odds with the fact that BDAI systems require a sufficiently large database. Requiring companies to offer products that no longer meet market standards and are based on obsolete technologies is no solution, respondents stressed. Such products would be of no interest to customers, and legal interventions would be obsolete as well.

Diverging opinion: proposal to expand basic products

To prevent the exclusion of customers who are reticent about sharing their data or who are non-digital, other respondents call for legal requirements obliging providers to offer non-digital contracts as well. Clearly defining when a contract is non-digital or conventional seems to be a highly complex task, according to the respondents. As with the introduction of the right to open a basic payment account, legislators could guarantee basic coverage, e.g. for health and long-term care insurance, personal liability insurance, occupational disability insurance and motor vehicle liability insurance, especially since growing differentiation could mean that certain groups of customers may no longer be insured at all or only at a very high cost. This would be particularly problematic for customers who are not able to influence risk themselves. It was also noted that supervisors could create a certificate for financial services requiring limited amounts of data, which, if accepted as a seal of quality, could minimise the risk that consumers who are reticent about sharing their data are not given access to services. However, the respondents also warned that special legal requirements for conventional financial products and products requiring limited amounts of data may suggest that the principle of data minimisation does not apply to other (financial) products.

Consumer sovereignty

In a nutshell: Key points from the BDAI report

The potential of BDAI can only be exploited for financial services if it is possible to gain and maintain the trust of consumers by ensuring that their data is used as desired and in accordance with the law. Providers should particularly ensure that consumers are able to make sovereign decisions by ensuring that consumers are adequately informed about the potential reach and consequences of the use of their data and that they are given reliable options to control how their data is used and have genuine freedom of choice. It is not enough to provide consumers with highly complicated terms and conditions, which are usually accepted without being read. In particular, technical (data protection) measures (e.g. privacy-preserving data mining) or a “privacy by design” concept could also bolster consumer trust in BDAI innovations.

Data sovereignty regarded as a key issue

Most of the respondents clearly stated that the data sovereignty of consumers is a highly relevant issue – not only in the financial market but also in other sectors. They noted that it must be ensured that consumers are given clear information on what their data is going to be used for, that they are aware of the implications and that they are able to make a well-informed decision when sharing their data. Genuine freedom of choice is deemed to be an essential requirement. Social and financial pressure, lock-in and network effects, on the other hand, are considered problematic and counterproductive. Any regulatory measures must take into account these factors in addition to the limited ability of consumers in general to gather and process information. In this context, a minimum level of data protection is proposed, which would also apply after consent has been given. But if all requirements are met, the scope of action of financial services providers should not be further restricted, according to the respondents.

Financial supervisors are not primarily responsible – but a dialogue with other authorities is necessary

The respondents stressed that they do not consider financial supervisors to be primarily responsible for reinforcing the data sovereignty of consumers. In this context, supervisory activities should focus on, and be limited to, addressing violations of consumer protection law. Other authorities and society as a whole need to be involved. Digital training, consumer education and learning opportunities for children and adults were proposed, among other measures, to raise awareness of the pros and cons of “paying with personal data”. A closer dialogue between financial and data protection authorities is deemed necessary.14

Ideas to strengthen and ensure data sovereignty

According to the respondents, data sovereignty can, in principle, be guaranteed by complying with legal provisions such as those set out in the General Data Protection Regulation and by pursuing a transparent information policy geared towards consumers. To give consumers a better overview of the data they have agreed to share, the development of a data protection cockpit, for instance, was suggested. It must generally be ensured that personal data is used only in clearly defined and documented processes. Consumers must be given a point of contact that they can turn to, regardless of where in the chain of events damage or problems occurred. According to the respondents, consumers need a complete overview of who assumes liability, even in fragmented value chains.
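The responses do not specify how such a cockpit would be built. Purely by way of illustration, and assuming nothing beyond the idea of a central consent overview, the following Python sketch shows one way its core records could be structured; all class names, fields and example values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConsentRecord:
    """One data-sharing consent granted by a consumer (illustrative fields only)."""
    provider: str                 # company receiving the data
    data_categories: list[str]    # e.g. ["payment transactions", "address"]
    purpose: str                  # documented purpose of use
    granted_on: date
    revocable: bool = True

@dataclass
class DataProtectionCockpit:
    """Minimal consent overview: list, inspect and revoke data-sharing consents."""
    consents: list[ConsentRecord] = field(default_factory=list)

    def grant(self, record: ConsentRecord) -> None:
        self.consents.append(record)

    def overview(self) -> list[tuple[str, str, list[str]]]:
        # What the consumer would see: who holds which data, and for what purpose.
        return [(c.provider, c.purpose, c.data_categories) for c in self.consents]

    def revoke(self, provider: str) -> None:
        # Remove all revocable consents granted to the given provider.
        self.consents = [c for c in self.consents
                         if c.provider != provider or not c.revocable]
```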

It was also noted that industry standards in the financial sector could become de facto minimum requirements for the use of personal data if they are accepted or embraced by customers. This would allow customers themselves to choose reliable partners for the provision of financial services. For instance, it was noted that insurers have recently undertaken, in a published code of conduct drawn up in cooperation with data protection authorities, to ensure that data is used only to the extent required. Finally, it is assumed that there will be service providers specialising in the enforcement of informational self-determination. Such providers could use BDAI methods to find out where a user’s personal data is stored and could then, at the user’s request, ask the service providers concerned to delete that data.

Ways to ensure data protection using technology

The general view was that trust can be fostered by using processes such as privacy-preserving data mining and, as far as possible, pseudonymisation and anonymisation. However, making privacy-preserving data mining a strictly binding basic requirement is considered problematic, as it often places considerable restrictions on the development of algorithms.15 It was noted that, in practice, privacy-preserving data mining relies on a trusted third party. Financial institutions could play an important role here. However, the respondents noted that government authorities may also be required to take on the role of a trusted third party due to the potentially high liability risk.
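Pseudonymisation, one of the techniques mentioned above, can be illustrated with a minimal sketch: a direct identifier is replaced by a keyed hash, so records about the same person remain linkable for analysis while the identity cannot be recovered without the key. The example below is illustrative only; the key handling, identifier format and record fields are assumptions, not part of the report or the responses.

```python
import hashlib
import hmac
import secrets

# Secret key held by the trusted party; in practice it would come from a
# key-management system rather than being generated ad hoc like this.
PSEUDONYMISATION_KEY = secrets.token_bytes(32)

def pseudonymise(customer_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).

    The mapping is repeatable for the key holder, so records belonging to the
    same customer can still be linked, but it cannot be reversed or reproduced
    without the key.
    """
    return hmac.new(PSEUDONYMISATION_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

# The analytics dataset only ever sees the pseudonym, not the identifier itself.
record = {"customer": pseudonymise("policyholder-4711"), "monthly_premium_eur": 42.50}
print(record)
```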

Interview with Felix Hufeld, President of BaFin: In the future, we will no longer only look at individual companies

Mr Hufeld, the respondents to BaFin's consultation point out that, by making intelligent use of data, providers of search engines, social networks and online (comparison) platforms are advancing into areas that used to be the sole preserve of specialised and often regulated providers. What is your opinion on this?
If these and other tech or platform-based companies were to offer regulated financial services, they would, of course, have to meet the same supervisory and regulatory requirements as all the other institutions. But even if they do not provide any regulated financial services themselves, the respondents rightly pointed out that these companies could become essential for the functioning of the entire industry – e.g. as providers of cloud services, algorithms, data, and evaluations such as scores and ratings. These have been around for a while, but once BDAI and automated interfaces come into play, the impact of these services on the financial market could be even more immediate.
The respondents put forward a number of interesting ideas on how to address the growing importance of these providers for the financial market from a supervisory and regulatory point of view. One suggestion was that outsourcing companies should be subject to minimum technical standards similar to those for regulated banks. Another idea was a digital signature that lists all the companies involved in the development or provision of a product. This, it is argued, would help customers to understand more clearly who is behind a product or service. Above all, accountability would not lie solely with the financial services provider involved but would be extended to other companies along the entire value chain. In addition, a back-up party could also be agreed upon for every element within a product’s value chain, which would be obliged to step in if one of the companies involved cannot provide the expected service. Tech solutions, such as blockchain-based smart contracts, could play a part here.
All these considerations confirm the proposition we put forward in our BDAI report, which is that, as regulators and supervisors, we will no longer only look at individual companies but will increasingly consider value chains that are spread across multiple companies. Supervisors would then also focus on the activities of companies that are not part of the regulated financial sector but can still have an impact on customer trust and the integrity of the financial market as such. I am not saying that BaFin should supervise bigtech companies that do not provide financial services in their entirety. What matters to me are certain activities and certain conduct of such companies, which could justify a direct supervisory mandate in that respect.

Let's continue with value chains. Value can be created by linking data from various sources – for instance at key customer interfaces on platforms. How could the growing importance of (financial) data be taken into account?
We agree with most of the respondents on this matter. The growing importance of data in the age of digitalisation is also based on the fact that data from different sources is combined and compared, allowing new information to be obtained. By connecting data on financial transactions and data on the behaviour of consumers, it is possible to have a fairly clear idea of the amount of money that customers are willing and able to pay for products and services. In addition, the emergence of platform-based business models is breaking down information silos, and information from one area can have an impact on other areas. It is only logical that the authorities supervising different areas of economic life collaborate more closely and share information with each other – provided that this is permitted by law, of course. As the use of BDAI is increasing, data protection authorities and competition watchdogs are particularly important for us as financial supervisors. Our supervisory counterparts abroad are not to be forgotten either.
Of course, market participants see great economic potential in digging up treasure troves of data. But with data mining – as with any conventional process of prospecting, mining and utilising resources – we must keep a watchful eye on the associated risks. For us supervisors, it is crucial that consumers and providers are confident that the financial market is stable and that things are being done as they should be. We also need to consider what negative spillover effects there can be when financial data is used in value creation processes outside the financial market – even if formal consent has been given in accordance with the law. Social achievements, such as the protection of privacy and informational self-determination, should not be undermined under the guise of innovation – e.g. by obtaining people’s consent to share their data by giving them the impression that there is no alternative. Not everything that is technically possible, innovative and economically sensible in the short term remains all of those things when looked at from a holistic and long-term perspective.

Let's take another look at the financial market. Do you consider that the use of data is a key issue that could become more relevant for the financial market as a result of BDAI?
The argument that data is necessary for assessing risk could be used to justify gathering virtually all data in the context of providing financial services – although such practices have not been observed on the German financial market to date. The responses to our consultation have made one thing clear to us: we increasingly have to ask ourselves which data is really needed for an appropriate assessment of risks – in other words, for a suitable differentiation as required by supervisors. Insurers that took part in our consultation have already pledged, in a published code of conduct drawn up in cooperation with data protection authorities, to minimise the use of data. But what I find interesting in this context is the fundamental question of where the limits of data collection and analysis should be in the case of BDAI. At what point does a marginal improvement in risk assessment justify the collection of more data? Which data can we categorise as offering real long-term and material advantages while ensuring a balance between the information that needs to be obtained and other objectives such as data minimisation (Datensparsamkeit)? I think we need to have a broad dialogue with all those concerned – but we also need to ask ourselves, as a society, where we want red lines to be drawn in the brave new world of data.

Let's now turn to responsibility in the context of self-learning decision support systems. In the BDAI report, BaFin pointed out that humans must always bear ultimate responsibility and that this responsibility cannot be passed on to computers. This also applies to financial supervision, according to the respondents. What is your view?
The Fraunhofer Institute for Intelligent Analysis and Information Systems was one of the institutions that assisted us with our BDAI report. Fraunhofer stressed that the successes of machine learning have so far only been observed in highly specific applications and that approaches for the general simulation of human intelligence are still not foreseeable. We can therefore expect to rely on the interplay between artificial and human intelligence in the foreseeable future. Responsibility will and must therefore continue to rest with humans in the area of financial supervision, too. Financial supervision is and will remain a flexible process that focuses on the assessment of complex issues. But artificial intelligence can support us as supervisors and help us prepare decisions and establish better and quicker processes. In highly data-driven areas – such as market abuse analyses or, perhaps in the future, money laundering prevention – supervisors will not be able to do without BDAI.

In the responses to the consultation, there were calls for it to remain possible for humans to intervene manually in decision support systems based on artificial intelligence. At the same time, a blanket requirement to make algorithms explainable is seen by some respondents as an unreasonable restriction. What is your opinion?
In my opinion, blind trust in technology is dangerous. Humans must be able to intervene and it must be possible to switch off automated processes. As mentioned earlier, humans, not machines, bear ultimate responsibility. We need to bear this in mind when evaluating new processes.
As far as the explainability of AI systems is concerned, we stressed in our report that a distinction should be made between explainability and transparency. Transparency means that the behaviour of the system as a whole can be understood in its entirety. Fraunhofer pointed out that this is often impossible to achieve as many models are inevitably highly complex. On the other hand, explainability is a criterion that is far easier to fulfil from a technical point of view, according to Fraunhofer, as it focuses on identifying key influencing factors behind a specific decision reached by a system.
Respondents to our consultation also hold the view that we as supervisors are confronted with the question of whether and how BDAI models can be examined. Extended requirements for business-critical process areas were suggested, including the use of code review processes, simulation and penetration tests and the assessment of sample profiles. Respondents also called on BaFin to lay down specific requirements for documentation and the explainability of BDAI applications. But do not expect us supervisors to shoot from the hip. We should first deepen the dialogue with academia and industry and make sure that industry best practices are developed. Once we know if and how they work, we can, as a next step, consider to what extent we will derive standards from them.
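To make the distinction drawn by Fraunhofer concrete: a per-decision explanation can be as modest as ranking each input’s contribution to one specific output. The following sketch is purely illustrative – a hypothetical linear scoring model with invented weights and feature names – and does not represent an approach prescribed by BaFin or discussed in the consultation.

```python
import numpy as np

# Hypothetical linear scoring model: score = w · x + b.
# Weights, feature names and the sample case are invented for illustration.
feature_names = ["income", "existing_debt", "years_at_address"]
weights = np.array([0.8, -1.2, 0.3])
bias = 0.1

applicant = np.array([1.4, 0.9, 0.2])   # standardised feature values for one case
score = float(weights @ applicant + bias)

# Per-decision explanation: each feature's contribution to this one score,
# ranked by absolute size – the "key influencing factors" behind the decision.
contributions = weights * applicant
ranked = sorted(zip(feature_names, contributions), key=lambda t: abs(t[1]), reverse=True)

print(f"score = {score:.2f}")
for name, contribution in ranked:
    print(f"{name:>18}: {contribution:+.2f}")
```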

What are the next steps for BaFin now that the consultation has closed?
We have started evaluating the responses, which we have summarised in this article. A number of subject areas are becoming apparent and we intend to prioritise and deal with these based on how urgent and significant they are. To address all the aspects of this complex topic, we need to work even more closely with industry, academia and other authorities in some areas. This is something we intend to do in the near future. But we have already achieved something with our BDAI report and the consultation: we have looked into the burning questions surrounding this topic – and placed them in the public eye.

Authors

Jörn Bartels
Strategy Development Division at BaFin

Dr Thomas Deckers
Innovations in Financial Technology Division at BaFin

Footnotes

  1. BaFin, “Consultation on BDAI report”, https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Konsultation/2018/kon_bdai_studie_en.html, retrieved on 23 January 2019.
  2. BaFin, “Big data meets artificial intelligence – Challenges and implications for the supervision and regulation of financial services”, https://www.bafin.de/SharedDocs/Downloads/EN/dl_bdai_studie_en.html, retrieved on 23 January 2019. BaFin worked on the report in collaboration with PD – Berater der öffentlichen Hand GmbH, Boston Consulting Group GmbH and the Fraunhofer Institute for Intelligent Analysis and Information Systems.
  3. “BDAI” stands for big data and artificial intelligence.
  4. BaFin, loc. cit. (footnote 2), page 164 et seq.
  5. The views in the BDAI report that are referred to in this article can mostly be found in the chapter “Supervisory and regulatory implications”.
  6. Each topic also takes into account relevant information from responses that concern other sets of topics.
  7. These and other views in the BDAI report that are referred to in this article can mostly be found in the chapter “Supervisory and regulatory implications”.
  8. See also section 2.2.
  9. See also section 4.1.
  10. It should be noted, as a caveat, that another respondent argued that at least credit risk models are closed systems within the individual institutions that could not cause inter-institutional cascade effects.
  11. For example, under no. 164 of the Minimum Requirements under Supervisory Law on the System of Governance of Insurance Undertakings (Mindestanforderungen an die Geschäftsorganisation von Versicherungsunternehmen – MaGo), an analysis of the operational risks must be carried out before products, processes and systems are implemented or are subject to a significant change. The results of this analysis must be included in the decision-making process. In the banking sector, the Minimum Requirements for Risk Management (Mindestanforderungen an das Risikomanagement – MaRisk) in relation to organisation and documentation (AT 5 and AT 6) are to be observed.
  12. Consumer surplus is the difference between the maximum price that a consumer is willing to pay for a product or service and the price that they actually have to pay on the market.
  13. Customers with the same risks receive the same terms, while customers with different risks receive different terms, respondents wrote, citing the VAG (Versicherungsaufsichtsgesetz – German Insurance Supervision Act), where the principle of equal treatment is stipulated in section 138 (2), section 146 (2) and sections 147 and 161.
  14. There were also calls for financial supervisors to work more closely with competition authorities and competition supervisors – see section 2.1.3.
  15. There is also the opposing view that the potential of BDAI could often still be fully exploited even if these processes are used.
