By Ana Castillo and Cesar Rosales
In meetings between entrepreneurs and investors, we hear more and more often about startups that have already incorporated artificial intelligence (AI) solutions.
As a team, a few days ago we discussed some of our experiences. One of them involved an agrifoodtech startup that presented, at a demo day, an AI solution to predict pests in small-scale crops. The model looked robust, trained on satellite imagery and historical weather data.
One of the investors asked a direct question: “How does the model work when the context changes, for example, in another country or with a different type of client?” The answer revealed a significant fragility: they hadn’t tested it yet. When the startup finally ran initial tests in rural areas with low connectivity and limited coverage, the system began generating false alerts, leading farmers to apply unnecessary pesticides.
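The investor’s question is, at bottom, a question about distribution shift: a model trained on data from one region can degrade quietly when the context changes. Below is a minimal sketch of the kind of check a fund can ask for during diligence. The data, features, and model are entirely synthetic and hypothetical; the point is simply to evaluate each deployment context separately instead of relying on a single aggregate test score.

```python
# Minimal, hypothetical sketch with synthetic data (nothing here comes from the
# startup in question): evaluate a pest-alert model on each context it will face,
# the pilot region and a "new region" slice, instead of one aggregate test set,
# so performance drops surface before deployment rather than in the field.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

def make_slice(n, weights):
    """Synthetic stand-in for satellite/weather features and pest-outbreak labels."""
    X = rng.normal(size=(n, 5))
    y = (X @ weights + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

# In this toy example the pest-weather relationship differs by region (a sign flip
# on the first feature), standing in for real-world context change.
w_pilot = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
w_new_region = np.array([-1.0, 0.5, 0.0, 0.0, 0.0])

# Train only on data from the original pilot region.
X_train, y_train = make_slice(2000, w_pilot)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Report metrics per slice rather than one blended score.
for name, w in [("pilot region", w_pilot), ("new region", w_new_region)]:
    X_test, y_test = make_slice(1000, w)
    pred = model.predict(X_test)
    print(f"{name}: precision={precision_score(y_test, pred):.2f}, "
          f"recall={recall_score(y_test, pred):.2f}")
# A sharp drop on the second slice is exactly the false-alert risk the
# investor's question was probing.
```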
This is not an isolated case, and it led us to ask: what role can venture capital (VC) funds play to ensure these questions don’t come too late?
A wave that doesn't stop. In 2024, VC funds invested more than USD 131.5 billion globally in AI startups, according to PitchBook. That amount represented 46% of all global VC capital, a historic milestone. In Latin America and the Caribbean (LAC), interest from these funds in startups using emerging technologies like AI is growing rapidly.
The magnitude of the capital invested clearly shows a genuine wave of enthusiasm for AI, and the opportunity is immense. But this pace also means that funds, and their startups, must ask hard questions about how the technology works outside the lab and what the challenges are: bias, failures that cannot be explained, privacy breaches, misuse of data, misinformation.
As the Responsible AI Playbook for Investors by the World Economic Forum (2024) notes, generative and predictive AI pose risks “that can affect not only the company, but also individuals, communities, and entire systems.”
So, what role can VC funds play? Beyond using AI for their own management or operations, funds have a much more powerful lever: their relationship with the portfolio.
From the first pitch to the exit, they can act as catalysts for a culture of responsible AI, helping their startups build safer, more explainable, and fairer products. It's about asking the right questions, introducing ethical frameworks, and supporting governance from day one.
“VC funds have a unique opportunity to influence how AI is designed in early stages. Supporting with ethical criteria is part of their duty.” (Responsible Venture Capital, CDC & FMO, 2020)
What can a fund do? Stage by stage. Below is a summary of the catalytic effect a VC fund can have at each stage of the investment cycle by integrating responsible AI principles.
| Investment Stage | VC Fund Role in Responsible AI (RAI) | What the Startup Gains | What the Fund Gains |
|---|---|---|---|
| Deal flow | Introduces ethical AI criteria in opportunity assessment | Access to more inclusive and transparent selection processes | Alignment with ESG and impact theses |
| Due diligence | Assesses technical, social, and regulatory risks in AI models | Early detection of vulnerabilities in data and algorithms, plus business model improvement opportunities | Lower exposure to reputational and regulatory risks |
| Post-investment | Provides frameworks, tools, and RAI principles | Best practices to scale ethical and safe products | Stronger portfolio, better positioned for follow-ons or acquisitions |
| Growth | Encourages development of auditable and responsible solutions | Greater stakeholder trust | More attractive startups for co-investors and LPs |
| Exit | Helps demonstrate ESG compliance and algorithmic governance | Better valuation and preparation for next rounds or M&A | Reputation as an impact fund, higher exit success rate |
fAIr LAC Advisory services for VC is born. As part of the Impact Enhancement Program, in 2025 we launched this new line of work not only to promote AI adoption, but also to foster more strategic, ethical, and responsible investments in AI.
From fAIr LAC, IDB Lab’s initiative to promote responsible AI adoption in Latin America and the Caribbean, we launched fAIr LAC Advisories: a series of tools, methodologies, and technical support spaces designed for venture capital funds seeking to integrate responsible AI principles throughout their investment cycle.
Among these tools, fAIr Venture stands out: a solution already tested with more than five funds that helps identify non-financial risks associated with the use of AI in startups. Based on the 3S model (Solution, System, Society), it evaluates key aspects during the due diligence process.
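To make the idea concrete, here is a small, purely illustrative sketch of how an analyst might organize 3S-style due diligence notes. The Solution, System, and Society dimensions come from the 3S model mentioned above; every question, score, and threshold in the snippet is a hypothetical placeholder and does not represent the actual fAIr Venture questionnaire or scoring.

```python
# Illustrative only: a toy organizer for 3S-style due diligence notes. The three
# dimensions (Solution, System, Society) come from the 3S model; the questions,
# scores, and the 50% follow-up threshold are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    items: list  # (question, score 0-2: 0 = unaddressed, 1 = partial, 2 = well addressed)

    def score(self) -> float:
        """Share of the maximum possible score for this dimension."""
        return sum(s for _, s in self.items) / (2 * len(self.items))

due_diligence = [
    Dimension("Solution", [
        ("Has the model been validated outside its original context?", 0),
        ("Are error rates and failure modes documented?", 1),
    ]),
    Dimension("System", [
        ("Is there post-deployment monitoring for drift and false alerts?", 1),
        ("Are data sources, consent, and privacy practices documented?", 2),
    ]),
    Dimension("Society", [
        ("Has the impact on end users (e.g., smallholder farmers) been assessed?", 0),
        ("Can users contest or appeal automated decisions?", 0),
    ]),
]

# Flag any dimension scoring below 50% for follow-up with the founding team.
for dim in due_diligence:
    status = "follow up" if dim.score() < 0.5 else "ok"
    print(f"{dim.name}: {dim.score():.0%} ({status})")
```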
What does this new advisory line include?
- Application of the fAIr Venture tool
- Practical due diligence guides focused on responsible AI
- Advisory for model clauses in term sheets
- Support to develop the RAI Framework
- Access to world-class technical assistance
As the CFA Institute (2024) points out: “Ethical frameworks applied to AI not only mitigate risks: they also protect long-term value, improve professional judgment, and elevate governance culture.”
AI is a transformational technology, and as investors, we now have both the opportunity and the responsibility to help startups scale with purpose, transparency, and care.