Artificial Realities: The Use, Risk and Reward of AI for Fund Managers

Published: 21 October 2025

Basic fact: even if you wanted to, you cannot shut out artificial intelligence - AI - from your business. Your vendors, service providers and trading counterparties use it. Your devices use it, your car uses it and the companies behind nearly every product you touch rely on it. You use it, and in turn, it uses you - for data and insights, mostly without you realizing it. AI is now embedded in modern life to the point of invisibility.

The Air We Breathe
AI is everywhere because it is efficient. AI is designed to streamline tasks and speed up workflows, much like old-school automation for repetitive tasks. But unlike simple automation, it adds prediction, reasoning and adaptive behavior. It can understand language, recognize patterns, learn from data and make decisions that approximate human logic. When designed well, AI-enabled tools can offer radically improved efficiency, which can translate into cost savings at the least and substantial gains at best. It optimizes work and workflows, and it can optimize you, too - in your work life, at least.

According to AIMA’s most recent Artificial Intelligence in Asset Management survey, 95% of asset managers now use generative AI in some part of their operations. Generative AI sits at the forefront of industry transformation. It does just as promised; it creates. It can synthesize and produce entirely new outputs from what it learns from existing data, using patterns it recognizes. Once prompted, it goes about creating what you have asked - and sometimes not at all what you asked, because these AI tools are only as smart as the questions they are asked and the data they are fed.
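To make that concrete, the short sketch below shows what prompting a generative tool looks like in practice. It assumes the openai Python package and an illustrative model name; the point is simply that the same tool returns very different value depending on how well the question is framed.

```python
# A minimal sketch of prompting a generative model. Assumes the
# "openai" Python package and API access; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The tool is only as smart as the question it is asked:
vague = ask("Tell me about funds.")
specific = ask(
    "In three bullet points, summarize the main liquidity risks "
    "a long/short equity fund should disclose to investors."
)
```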

Agentic AI is its smarter sibling: it is designed not just to respond but to act to achieve goals. Similar, but different. You don’t need to prompt agentic AI; it has been trained for autonomy, to adapt, to take initiative. Agentic AI varies in capability, most often due to differing degrees of autonomy by design. Certain forms of generative AI are also agentic by definition, and there are defined levels of AI “agency”, with the higher levels - unsurprisingly - assigned to the agentic forms only. (The highest, L5, is fully creative and can work to improve itself autonomously, but it is pretty much hypothetical. For now.)

It is no surprise that nearly all asset managers now use some form of generative AI, though not without boundaries. Over 60% of firms have implemented restrictions on how AI may be used, and 16% access these tools only through secure internal platforms to manage risk.

The Duality of Opportunity and Risk
Opportunity rarely occurs without risk, and AI offers plenty of both. AI’s potential for efficiency, precision and scale is extraordinary - but so are its risks. Some risks can be avoided, others can be mitigated and a few may just be too spicy for most palates. Some can be managed through governance, while others are legal, regulatory or reputational. You cannot be risk-proof, but you can and should be risk-aware. Risk awareness is key to AI usage, though not all of the risks are obvious.

AI “learns” through machine learning, a subset of artificial intelligence that draws insight from massive datasets far beyond human comprehension. Every digital interaction - every search, message, transaction or sensor reading - adds to the data pool from which AI learns. Multiply that by billions of people and countless interactions, and the result is an almost boundless information universe. While humans naturally filter out noise, AI does not. In the AI data universe, no detail is too small; it finds meaning in the noise.

When paired with advanced algorithms, this near-boundless dataset gives AI extraordinary analytical reach. It can draft reports, analyze markets, identify anomalies or generate code - instantly, tirelessly and at scale. Every command you issue becomes another learning opportunity, refining how a tool predicts, assists and adapts to your preferences. It doesn’t tire, it doesn’t forget and it continually evolves, drawing on its ever-expanding universe of data. Properly applied, AI turns the world’s collective digital footprint into actionable insight.

How AI Interacts
Most complex AI tools interact with vast data networks, exchanging information across connected tools or “agents.” This sharing can vary: some systems are tightly contained, while others are designed to collaborate, learn and update through continual interaction. Even private or contained AI models may connect indirectly to broader ecosystems to complete assigned tasks.

Think of these AI agents as nosy neighbors: always observing, sometimes eavesdropping, comparing notes and learning from what others do. In modern society, nearly every action leaves a trace, and those traces collectively fuel AI insight.

Like cookies? AI does, too! Consider the ubiquitous cookie banner on websites you visit: even when you select “strictly necessary”, your activity still contributes data to networks of analytics, optimization and personalization. Those fragments may be anonymized, aggregated or shared with service providers that fuel AI-driven insights behind the scenes. So you click, and more data is banked.

We constantly feed AI data - actively, passively and unwittingly. It can listen for what isn’t supplied directly, it can grab from places you wouldn’t consider and it is really, really good at piecing it all together for outcomes, from basic search results to the probable success of portfolio investments. What could possibly go wrong?

Garbage In, Garbage Out
For all its sophistication, AI depends entirely on the quality of its inputs. Publicly available data, its primary training material, is a mixture of gold and garbage. Inaccuracies, bias and misinformation coexist with legitimate facts. Where low-quality data is used to train or retrain models, flawed patterns multiply.

As AI-generated content becomes more common, models increasingly train on material produced by other AIs, leading to a growing problem known as “AI inbreeding” - a decline in originality and factual accuracy. The resulting low-grade content is referred to as “slop” (seriously). Add to that the well-known issue of “hallucinations”, where AI fabricates plausible but false answers. The need for human oversight is obvious, and while we might snicker about false cases being cited in legal briefs, far more serious harm could result.

For all of its intelligence, (most) AI lacks the common sense that (most) humans have; it is not very good at weeding out bad information. Perhaps it’s best to think of AI as the world’s fastest junior analyst: it can review every filing and dataset in an instant, but it still needs direction and supervision. You may come to trust it, but always verify the output.

Sharing Isn’t Always Caring
Sharing might be nice when you’re working with non-digital cookies, but it isn’t when private business information is on offer. Using public AI tools for professional purposes can create serious exposure. Information entered into open models - like ChatGPT - may never be forgotten, becoming a persistent part of the vast dataset and shared in ways you never intended.

Although privacy controls can be added to enterprise-grade AI tools, public versions are not designed for sensitive information. Fund managers must therefore treat public AI systems as public spaces, with anything entered considered potentially visible, retrievable or replicable. Uploading client data, proprietary algorithms, internal analysis and the like into public tools puts confidentiality and intellectual property rights at risk. From violating contractual employment clauses to breaking privacy laws to leaking material non-public information, public tools can be an abyss of risk.
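One practical way to treat public tools as public spaces is to screen prompts before they leave the firm. The sketch below is a rough illustration only - the patterns are hypothetical and no substitute for proper data-loss-prevention controls - but it shows the idea of a pre-submission guard.

```python
# A minimal sketch of a pre-submission guard for public AI tools.
# Patterns are illustrative; real deployments would rely on proper
# data-loss-prevention tooling, not a few regular expressions.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account number": re.compile(r"\b(?:acct|account)\s*#?\s*\d{6,}\b", re.I),
    "internal marker": re.compile(r"\b(confidential|proprietary|MNPI)\b", re.I),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize acct # 12345678 performance. CONFIDENTIAL draft."
hits = check_prompt(prompt)
if hits:
    print("Blocked before submission:", ", ".join(hits))
else:
    print("No obvious sensitive markers found; human review still advised.")
```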

Regulatory Tripwires
Although AI regulation is advancing in some parts of the world, it is stalled or absent in others. This lag is not entirely bad, as prematurely written regulation risks becoming obsolete. However, it creates uncertainty for firms that must anticipate future compliance obligations yet still want to invest in tools to reap potential benefits.

There are other practical concerns, with regulatory “tripwires” that are not particularly obvious. Financial regulations on communications, marketing, performance reporting and recordkeeping can be triggered unintentionally through AI use. Consider transcription apps that summarize meetings. These virtual assistants can be fantastic if you’re a poor notetaker or just want to be fully present in a meeting. However, once a transcript is taken from the app and pasted into an email or text, it becomes a regulated record, subject to recordkeeping and retention requirements.

Other tripwires are more subtle. If AI is used to inform investment decisions, firms must be able to explain the rationale; “the tool selected it” won’t satisfy a regulatory inquiry. If AI assists with performance calculations, could you prove how you derived that performance years down the road? Fund managers should carefully consider how AI usage and output may intersect with existing regulatory obligations and address any findings proactively.
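One way to prepare for that kind of inquiry is to keep a contemporaneous audit trail of AI-assisted outputs. The sketch below is illustrative - the field names and JSON-lines storage are assumptions, not any regulator's prescribed format - but it captures the elements a firm would likely need to reconstruct a decision later: what was asked, what the tool produced and why a human accepted it.

```python
# A minimal sketch of an audit trail for AI-assisted work, so that
# "how was this derived?" can be answered years later. Field names
# and storage (a JSON-lines file) are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_assisted_output(path: str, tool: str, prompt: str,
                           output: str, human_rationale: str) -> None:
    """Append a record of one AI-assisted output to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                       # which model/version was used
        "prompt": prompt,                   # what it was asked
        "output": output,                   # what it produced
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_rationale": human_rationale, # why a human accepted it
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_assisted_output(
    "ai_audit_log.jsonl",
    tool="vendor-model-v2 (illustrative)",
    prompt="Flag anomalies in the Q3 trade blotter.",
    output="Three trades exceed the stated concentration limit.",
    human_rationale="Verified against the blotter; limits confirmed by PM.",
)
```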

Do What You Say, Say What You Do
Transparency is essential. If you tell investors - or regulators in a filing - that your firm uses AI, or that it doesn’t, you must be accurate. Overstating or misrepresenting your AI usage could constitute a false or misleading statement under anti-fraud provisions. Ensure that everyone involved is on the same page as to what AI is in your world, what you are using it for and what you say about your AI use.

Develop policies that reasonably account for your practices while allowing flexibility as technology and usage evolve. Remember, the only thing worse than not having a policy is having a policy that isn’t followed. Establish clear guidelines on acceptable tools, use cases and data boundaries. Train staff to understand not just how AI can be used at your firm but also why compliance and discretion matter. Monitor usage as well as access to external tools (your IT department can help), bearing in mind that too much restriction may drive employees to unmonitored, higher-risk alternatives, as seen with “off-channel” communications.
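As a rough illustration of the monitoring point, a firm might reconcile observed tool usage against its approved list. The tool names and event format below are hypothetical; real data would come from IT systems, but the logic is the same.

```python
# A minimal sketch of reconciling AI tool usage against an approved
# allowlist; tool names and the event format are hypothetical.
APPROVED_TOOLS = {
    "internal-llm-platform",       # secure internal deployment
    "vendor-copilot-enterprise",   # enterprise tier with data controls
}

def review_usage(events: list[dict]) -> list[dict]:
    """Return usage events that reference unapproved tools."""
    return [e for e in events if e["tool"] not in APPROVED_TOOLS]

events = [
    {"user": "analyst1", "tool": "internal-llm-platform"},
    {"user": "analyst2", "tool": "public-chatbot"},  # not on the allowlist
]
for event in review_usage(events):
    print(f"Escalate: {event['user']} used unapproved tool {event['tool']}")
```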

If violations occur, document, remediate and learn from the experience. Adapt your policy if there is a helpful takeaway - and if a termination resulted, that’s a strong lesson for everyone.

Vendors and Fourth-Party Risk
Vendor oversight is equally critical. Even if a direct vendor is compliant, its subcontractors - “fourth parties” - may not be. At the very least, ask about their practices the same questions you would reasonably want answered by your direct vendors. How do they handle data retention, access and sharing? What assurances exist around model transparency or explainability? What happens if their AI vendor changes practices? Diligence is your best defense, and while you cannot control all vendor practices, you can demand visibility and accountability or take your business elsewhere.

The New Reality
AI is the new infrastructure of financial intelligence. For fund managers, success will depend on mastering its potential while understanding its limitations and risks. Use it, but know what it’s doing. Benefit from its speed, but verify its output. Use it to amplify human judgment rather than replace it. With proper governance, AI can free talent from repetitive work, accelerate insight generation and deepen analytical rigor.

A common question on AI is whether it will take your job. My reply always begins with a simple question: “Are you good at your job?” AI is ruthlessly capable of mediocrity, so those not striving to improve and add value may have reason to worry. For others? It’s certainly possible in the long run; businesses make bad workforce decisions all the time. However, instead of focusing on AI as competition, learn how to make it enhance your skill and value. Used thoughtfully, AI won’t replace the human “edge” - it will sharpen it.

Get ready, because AI is about to put human potential into overdrive.
 


To learn more about the above article or to get involved in AIMA's AI efforts, please contact Suzan Rose ([email protected]).