A practical guide for advisers considering the use of AI

By Jennifer L. Klass, Pablo J. Man, and Christopher J. Valente, K&L Gates

Published: 24 March 2025

The increasing prevalence of artificial intelligence (AI) in the financial services industry offers advisers the potential for greater efficiency, improved client experiences, and enhanced advisory capabilities. However, advisers must understand the unique risks of the technology to navigate the regulatory framework. Before integrating AI into an advisory business, advisers should consider a range of factors, including technological limitations, regulatory compliance, and governance frameworks.

Understanding AI and its capabilities

There is no universal definition of AI. AI is an umbrella term encompassing various longstanding technologies, including machine learning (ML) and deep learning (DL). ML enables computers to learn from data and make predictions or decisions without explicit programming. DL, a subfield of ML, uses neural networks to analyse complex data and identify patterns. These technologies can be, and in many cases have been, applied in numerous ways within the asset management industry, such as algorithmic trading. 
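As a purely illustrative sketch of what “learning from data without explicit programming” means in practice, the short Python example below fits a logistic regression classifier using the open-source scikit-learn library; the features, labels, and data are hypothetical and carry no investment significance.

```python
# Minimal sketch: an ML model "learns" a decision rule from labelled
# examples rather than from explicitly programmed if/then logic.
# Features and labels here are hypothetical illustrations only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is [momentum, volatility]
X_train = np.array([[0.8, 0.2], [0.1, 0.9], [0.7, 0.3], [0.2, 0.8]])
y_train = np.array([1, 0, 1, 0])  # 1 = outperformed, 0 = did not

model = LogisticRegression()
model.fit(X_train, y_train)        # the "learning" step

# The fitted model can now make predictions for unseen inputs.
print(model.predict([[0.6, 0.4]]))
```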

More recently, the term AI has become synonymous with generative AI (GenAI), which is a subset of AI that can create new content, such as text, images, and models. In asset management, GenAI can assist in summarising and digesting large volumes of investment research, transcribing calls and meetings, optimising portfolios, managing risk, generating investment summaries and marketing materials, and engaging with clients through chatbots and natural language processing (NLP) tools. The considerable focus on AI from both an industry and regulatory perspective is largely attributable to recent developments in GenAI. 

US regulatory considerations

Advisers operate in a highly regulated environment, and the adoption and use of GenAI must align with existing compliance requirements. The Securities and Exchange Commission (SEC), under former Chairman Gensler, explored the use of digital engagement practices and proposed, but ultimately did not adopt, controversial rulemaking restricting the use of predictive data analytics.1 To date, the regulatory framework for advisers under the Investment Advisers Act of 1940 (Advisers Act) is based on general principles of fiduciary duty and disclosure, compliance with existing rules (e.g., recordkeeping and marketing rules), and maintenance of appropriate policies and controls. Key elements of the regulatory framework include:

Fiduciary duty. Advisers are fiduciaries that owe clients a duty of care (including having a reasonable belief that their advice is in the best interest of the client based on the client’s investment profile) and a duty of loyalty (including disclosing material conflicts of interest that could affect the advisory relationship). One of the most tempting use cases for advisers is incorporating GenAI into the investment decision-making process. The challenge, however, is that advisers need a clear understanding of how investment decisions are being made in order to satisfy their duty of care. Advisers also need to understand the AI’s output to ensure that they are making full disclosure of all material conflicts of interest. This puts a premium on the transparency and explainability of the AI tool, as well as the management of data sources and the adviser’s model risk management, testing, and verification processes. To the extent advisers are using AI to provide investment advice or recommendations, AI-driven investment strategies should be carefully monitored to ensure that they align with each client’s investment profile. In addition, advisers should regularly review AI models to confirm that their outputs remain in the best interest of clients.

With respect to conflicts, advisers should consider whether AI-driven portfolio allocations disproportionately favour proprietary funds or otherwise optimise the results to generate additional revenue or fees for the adviser. Given the significant regulatory risk, and the transparency limitations of the technology itself, most advisers either have not yet incorporated AI (or GenAI) into the investment decision-making process or require robust human verification of the output before implementation. Further, the limitations and risks of AI should be disclosed in Form ADV and client communications.

Data management. The effectiveness of the output of AI-driven tools depends on the quality of the data that AI applications use to train and fine-tune models, conduct analyses, identify patterns, and make predictions. Accordingly, it is critical that advisers understand the underlying data used by AI-driven tools and develop a process to confirm the legitimacy of data sources and evaluate any potential biases that may affect the output of AI models.
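By way of illustration only, the Python sketch below shows one simple way a data or compliance team might screen a training dataset for completeness and for skew across client segments before it is used to train or fine-tune a model; the file and column names are hypothetical assumptions, and real bias evaluation is considerably more involved.

```python
# Illustrative data-quality screen using pandas (hypothetical columns).
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical data source

# 1. Completeness: flag columns with material missing data.
missing = df.isna().mean()
print("Columns >5% missing:\n", missing[missing > 0.05])

# 2. Representation: check whether one client segment dominates,
#    which can bias model output toward that segment.
print("Segment shares:\n", df["client_segment"].value_counts(normalize=True))

# 3. Outcome skew: compare the label rate across segments; large
#    gaps warrant review before the data is used for training.
print("Label rate by segment:\n", df.groupby("client_segment")["label"].mean())
```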

Privacy considerations. Advisers should adopt controls to prevent proprietary or confidential information of the adviser or its clients from “seeping” into the public domain through the adviser’s use of GenAI, particularly when using publicly available AI tools. This risk of seepage underscores the broader privacy and data security stakes. Advisers should ensure that AI applications comply with data protection regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), and implement appropriate security protocols to safeguard client information against breaches.
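One illustrative control, sketched below in Python under the assumption that prompts are routed through an internal gateway before reaching any public GenAI service, is a redaction filter that strips obvious client identifiers. The patterns here are deliberately simplistic; production deployments would rely on dedicated PII-detection tooling.

```python
# Simplistic sketch of a pre-submission redaction filter. Real controls
# should use dedicated PII-detection tooling; these regex patterns are
# illustrative only and will miss many identifiers.
import re

REDACTIONS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[REDACTED-SSN]",         # US SSN format
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[REDACTED-EMAIL]",
    r"\b\d{8,12}\b": "[REDACTED-ACCT]",                 # bare account numbers
}

def redact(prompt: str) -> str:
    """Replace obvious client identifiers before a prompt leaves the firm."""
    for pattern, token in REDACTIONS.items():
        prompt = re.sub(pattern, token, prompt)
    return prompt

print(redact("Summarise the call with jane.doe@example.com, acct 123456789."))
# -> "Summarise the call with [REDACTED-EMAIL], acct [REDACTED-ACCT]."
```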

Marketing rule compliance. Advisers are responsible for the content, supervision, and recordkeeping obligations associated with AI-generated content to the same extent as communications generated by humans. Accordingly, advisers should be cautious about how they discuss their use of AI, including on social media and in marketing materials. The SEC has brought enforcement actions relating to so-called “AI-washing,” where advisers overstate the role of AI in their business and investment process. Do what you say, say what you do.

Books and records. As with marketing content, all AI-generated communications are potentially subject to Rule 204-2 under the Advisers Act (requiring advisers to maintain certain books and records). For example, using AI to generate written summaries of meetings or calls creates a record that may, depending on the topic of the meeting, be required to be retained. Similarly, relying on chatbots to interact with clients creates written records that may need to be retained if those communications relate, for example, to recommendations or advice; the receipt, disbursement, or delivery of funds or securities; or the execution of orders, or if they would otherwise be considered advertisements under the Marketing Rule.
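As a minimal sketch of the capture step only, the Python snippet below appends each chatbot exchange to a timestamped, append-only JSON-lines log so the record exists if it later proves retainable; the function and file names are hypothetical, and actual adviser archiving typically relies on dedicated WORM-compliant storage or vendor systems.

```python
# Illustrative append-only log of chatbot exchanges (hypothetical design).
# Actual archiving typically uses dedicated WORM-compliant storage; this
# sketch shows only the capture step.
import json
from datetime import datetime, timezone

LOG_PATH = "chatbot_records.jsonl"  # hypothetical retention file

def record_exchange(client_id: str, prompt: str, response: str) -> None:
    """Append one client/chatbot exchange with a UTC timestamp."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "prompt": prompt,
        "response": response,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_exchange("C-1042", "What is my current allocation?", "Your portfolio is ...")
```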

Third-party vendor oversight. Many advisers rely on external AI tools or work with vendors that increasingly are integrating AI into the services they provide. In either case, advisers should augment their existing service provider oversight processes to address initial and periodic due diligence about the use of AI and the vendor’s related controls. Among other things, advisers should consider contractual provisions that require advance notice before a vendor can incorporate AI (and particularly GenAI) into existing services and restrictions on the ingestion of the adviser’s data to develop, train, or improve the vendor’s AI system.

AI governance best practices

To integrate AI responsibly, investment advisers should adopt a structured approach to AI governance. Best practices include:

  1. Employee training: Employees should be trained to recognise AI-related risks and effectively use AI tools. Advisers may also consider developing educational programmes to help employees remain knowledgeable about AI advancements and compliance requirements.
  2. AI use policy: Firms should consider adopting acceptable use policies for AI that outline how employees may engage with AI for business purposes and what uses are prohibited. This includes whether employees may only interact with specific systems for business purposes, requirements around the use and verification of output generated by AI, and restrictions on uploading proprietary or confidential information (including client information) into publicly available AI systems. In the absence of an AI policy, advisers should consider blocking access on work devices to publicly available AI systems.
  3. Governance frameworks: Establishing a formal governance committee to oversee AI applications helps promote accountability and proper oversight, including by evaluating the introduction of AI-based tools based on business needs and risk mitigation. These governance committees should include compliance officers, data scientists, investment professionals, and risk management personnel to consider potential regulatory risks, monitor AI use, and evaluate the effectiveness of the firm’s controls.
  4. Model risk management: Advisers should establish or enhance existing model risk management controls to address risks that are unique to the use and complexity of AI applications. These include policies and procedures that govern the development, updating, testing, and verification of outputs from AI models. The model risk management framework should also include data source evaluation (as discussed above) and a focus on the explainability and transparency of AI tools.
  5. Testing and verification: Regularly testing AI models for accuracy, fairness, and explainability is a critical component of any control structure. Advisers may also conduct scenario analysis and stress test AI-driven tools to help firms understand how AI performs under different market conditions and evaluate AI model resilience (a minimal testing sketch appears after this list).
  6. Disclosure: Advisers should clearly communicate to clients how AI is being used in their investment strategies and disclose the limitations of AI-driven advice and any related conflicts of interest.
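To make item 5 concrete, the Python sketch below runs a recurring accuracy check against a fixed, held-out benchmark set and raises an alert when performance falls below a tolerance; the threshold, metric choice, and function names are illustrative assumptions rather than a prescribed standard.

```python
# Minimal recurring accuracy check (illustrative thresholds and names).
import numpy as np

ACCURACY_FLOOR = 0.80  # hypothetical tolerance set by the governance committee

def evaluate_model(predict, X_benchmark, y_benchmark) -> float:
    """Score the model against a fixed, held-out benchmark set."""
    preds = predict(X_benchmark)
    return float(np.mean(preds == y_benchmark))

def run_periodic_check(predict, X_benchmark, y_benchmark) -> None:
    acc = evaluate_model(predict, X_benchmark, y_benchmark)
    if acc < ACCURACY_FLOOR:
        # In practice this would notify the model-risk team and
        # trigger the firm's escalation procedure.
        print(f"ALERT: accuracy {acc:.2%} below floor {ACCURACY_FLOOR:.0%}")
    else:
        print(f"OK: accuracy {acc:.2%}")

# Hypothetical usage with a stand-in model:
X = np.array([[0.6], [0.1], [0.9], [0.3]])
y = np.array([1, 0, 1, 0])
run_periodic_check(lambda X: (X[:, 0] > 0.5).astype(int), X, y)
```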

Conclusion

As AI continues to evolve, it is important that advisers understand both its risks and opportunities. Firms that prioritise regulatory compliance and proactively implement strong governance will be best positioned to leverage AI’s potential while fostering long-term client confidence and managing risk. 


1. See Request for Information and Comments on Broker-Dealer and Investment Adviser Digital Engagement Practices, Related Tools and Methods, and Regulatory Considerations and Potential Approaches; Information and Comments on Investment Adviser Use of Technology to Develop and Provide Investment Advice, Exchange Act Rel. No. 34-92766 (Aug. 27, 2021); Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, Advisers Act Rel. No. 6353 (July 26, 2023).