MAKR operates with multiple partners, each using AI agents through PureBrain for daily operations. Beyond the agents themselves, the fund relies on a broader technology stack — platforms like PureBrain where agents operate, infrastructure systems that support fund operations, and third-party tools and integrations that touch fund data. Together, these form the AI operations ecosystem.
This ecosystem creates a governance challenge that must be resolved before the first LP dollar arrives.
The AI Ethics, Privacy & Data Sharing working paper (April 29, 2026) established the foundational principles — what data can and cannot be shared, the regulatory landscape, and where the lines are drawn.
The Ethics paper defined a four-tier data classification:
| Tier | Classification | AI Sharing Rule | Examples |
|---|---|---|---|
| Tier 1 | Restricted | Never share with AI | LP personal data, KYC/AML, privileged legal, MNPI |
| Tier 2 | Confidential | Share with care, redact when possible | Portfolio company financials, deal pipeline, partner comms |
| Tier 3 | Internal | Share on enterprise platform | Fund operations, workflows, templates, aggregate performance |
| Tier 4 | Public | Share freely | Market research, industry analysis, preferences, scheduling |
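As a concrete illustration, the four tiers translate naturally into a small data model. A minimal Python sketch (the enum, rule strings, and helper are illustrative, not part of any existing system):

```python
from enum import IntEnum

class DataTier(IntEnum):
    """Four-tier classification; lower numbers are more sensitive."""
    RESTRICTED = 1    # never share with AI: LP personal data, KYC/AML, privileged legal, MNPI
    CONFIDENTIAL = 2  # share with care, redact when possible
    INTERNAL = 3      # share on enterprise platform only
    PUBLIC = 4        # share freely

# Illustrative sharing rules keyed by tier.
SHARING_RULES = {
    DataTier.RESTRICTED: "never_share",
    DataTier.CONFIDENTIAL: "share_redacted",
    DataTier.INTERNAL: "enterprise_platform_only",
    DataTier.PUBLIC: "share_freely",
}

def may_reach_ai(tier: DataTier) -> bool:
    """Tier 1 never reaches an AI agent; every other tier is conditional."""
    return tier is not DataTier.RESTRICTED
```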
The Ethics paper also identified four pre-requisites before first close: an AI Use Policy, LP disclosure language, NDA provisions, and a partner boundaries agreement (the Phase 1 items in the implementation roadmap below).
The challenge is not just "what data can AI see" — it's "how do multiple AI agents coexist in a shared fund operation while maintaining personal privacy and regulatory compliance?"
Each partner's AI agent accumulates deep context over time. That context is the source of its value — and the source of the segregation challenge. A partner's AI that has been working on deal analysis for 6 months knows things that should stay within the fund's governance perimeter, but the same AI also knows personal preferences and activities unrelated to the fund.
Each partner operates across at least three domains:
| Domain | Scope | Visibility | AI Access |
|---|---|---|---|
| Personal | Personal interests, finances, learning, day-to-day life | Private to individual | Full — no restrictions |
| Other Business | Advisory roles, board seats, other business ventures | Private to individual | Full — no restrictions |
| MAKR Fund | Fund operations, deals, LPs, portfolio, IC decisions | Shared among GPs (with role-based controls) | Governed by AI Use Policy + data classification |
The AI agent must serve all three domains seamlessly without leaking information between them. A partner researching a topic for personal interest should not be blocked because the topic overlaps with MAKR's sectors. Conversely, MAKR deal data should not appear in a deliverable for another business entity.
The three architecture options below each represent a different answer to the question: "How many AI agents does each partner need, and how do they interact with shared fund resources?"
**Option A: One Agent, Access Zones.** Each partner keeps their single AI agent. The platform provides access zones that control what data the agent can read and write in the shared MAKR workspace. The agent operates freely in the personal and other-business zones, with no restrictions.
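A minimal sketch of what zone-scoped authorization could look like under Option A, assuming the platform (or custom middleware) tags every agent request with the zone it originates from. All names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRequest:
    partner_id: str
    active_zone: str    # zone the agent is currently operating in
    resource_zone: str  # zone the requested resource belongs to
    action: str         # "read" or "write"

def authorize(req: AgentRequest) -> bool:
    """Option A: one agent per partner, with hard zone boundaries.
    Personal and other-business zones are unrestricted for their owner;
    the MAKR zone only honors requests made while operating in MAKR mode."""
    if req.resource_zone in ("personal", "other_business"):
        return req.active_zone == req.resource_zone  # no cross-zone bleed
    if req.resource_zone == "makr":
        # Zone match is necessary but not sufficient: MAKR access is further
        # governed by the AI Use Policy and the four-tier classification
        # (those checks are omitted in this sketch).
        return req.active_zone == "makr"
    return False
```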
**Option B: Two Agents.** Each partner operates two separate AI agents: one for personal and other business (fully private), and one dedicated to MAKR (shared workspace). The MAKR agent can request information from the personal agent through a controlled channel.
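The controlled channel is the load-bearing piece of Option B. A hypothetical sketch in which every cross-agent request is logged and requires the partner's explicit approval:

```python
from datetime import datetime, timezone

class ControlledChannel:
    """Option B: the MAKR agent may ask the personal agent for information,
    but only through an approved, logged channel."""

    def __init__(self, personal_agent):
        self.personal_agent = personal_agent  # any object exposing .answer(question)
        self.log = []                         # append-only request record

    def request(self, question: str, partner_approves):
        """partner_approves is a callback: the partner explicitly approves
        or denies each cross-agent transfer; the agents never self-approve."""
        approved = bool(partner_approves(question))
        self.log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "question": question,
            "approved": approved,
        })
        if not approved:
            return None
        # The response should still pass a classification check before
        # it enters the MAKR workspace (omitted here).
        return self.personal_agent.answer(question)
```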
**Option C: One Agent, Two Portals.** Each partner keeps their single AI agent but accesses it through two separate portals: a private portal for personal and other business, and a MAKR portal where outputs are visible to partners. The agent itself is unified (it knows everything); the publishing layer controls what reaches the shared workspace.
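Option C's entire safety story lives in the publishing layer, so its default behavior matters. A sketch assuming a human-confirmed publish step that fails closed (names are illustrative):

```python
# The shared workspace visible to all partners; everything else stays private.
SHARED_WORKSPACE: list[dict] = []

def publish_to_makr(content: str, partner_id: str, confirmed: bool = False) -> bool:
    """Nothing reaches the shared workspace without an explicit,
    human-confirmed publish action. Private is the default, so a
    forgotten flag fails closed rather than open."""
    if not confirmed:
        return False  # output remains in the partner's private portal
    SHARED_WORKSPACE.append({"author": partner_id, "content": content})
    return True
```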
| Criterion | A: One Agent, Zones | B: Two Agents | C: One Agent, Two Portals |
|---|---|---|---|
| Separation strength | Medium | Strong | Medium |
| Context preservation | Full | Fragmented | Full |
| Partner friction | Low | High | Low |
| LP audit confidence | Medium | High | Medium |
| Cost per partner | 1x | 2x | 1x |
| Platform dependency | High (needs zone support) | Low | Medium |
| Accidental leak risk | Medium | Low | High |
| Knowledge retention if partner exits | Leaves with partner | MAKR agent stays | Leaves with partner |
| Implementation complexity | Medium | Medium | Low |
The architecture options above address agent-level segregation. But agents operate within a broader technology stack, and each layer carries its own governance requirements.
The platform is where agents live, conversations persist, and memory accumulates, and it raises governance questions of its own.
Fund operations rely on systems beyond the AI platform — cloud hosting, document management, communication tools, and purpose-built infrastructure. Each integration point is a potential data flow that requires governance.
| System Type | Examples | Data Flow Risk | Governance Need |
|---|---|---|---|
| Cloud hosting | Cloudflare, AWS, cloud storage | Deployed content visibility, access controls | Account separation, deployment authorization |
| Document management | Google Drive, SharePoint, VDRs | Shared folders, permission inheritance | Folder-level access controls, sharing audit |
| Communication | Email, messaging, video conferencing | AI agents sending/receiving on behalf of partners | Authorization scope, audit trail |
| Fund operations infrastructure | Purpose-built systems, integrations | Centralized data processing, cross-system data flows | Integration authorization, data flow mapping |
| Third-party tools | Analytics, research platforms, APIs | Data shared with external services | Vendor assessment, DPA requirements |
A complete governance framework requires mapping how data moves across the entire stack, not just within the agent layer.
Without this mapping, governance is applied in silos — agent access is controlled but the same data flows freely through infrastructure with no oversight.
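One lightweight way to operationalize the mapping is an explicit register of authorized flows, with anything outside the register treated as shadow IT. A hypothetical sketch (system names are examples drawn from the table above):

```python
# Register of authorized boundary crossings; names are examples only.
AUTHORIZED_FLOWS = {
    ("purebrain", "google_drive"),  # agent reads shared fund documents
    ("google_drive", "vdr"),        # diligence materials pushed to a data room
    ("purebrain", "email"),         # agent drafts sent on a partner's behalf
}

def check_flow(source: str, dest: str) -> str:
    """Even an authorized crossing still needs a classification check and an
    audit log entry; anything absent from the register is shadow data flow."""
    if (source, dest) in AUTHORIZED_FLOWS:
        return "authorized: classify payload, then log the transfer"
    return f"unauthorized flow {source} -> {dest}: block and escalate"
```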
Whichever option the partners choose, the following governance elements are non-negotiable for institutional credibility:
Every AI agent action that touches MAKR data must be logged to a shared, tamper-proof audit trail.
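One standard way to make a shared log tamper-evident is hash-chaining: each entry commits to the hash of the previous entry, so any retroactive edit breaks verification. A minimal sketch (field names are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Tamper-evident log: each entry embeds the hash of its predecessor."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "genesis"

    def log(self, agent_id: str, action: str, data_ref: str) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "action": action,      # e.g. "read", "write", "publish"
            "data_ref": data_ref,  # pointer to the MAKR data touched
            "prev": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means the log was altered."""
        prev = "genesis"
        for e in self.entries:
            if e["prev"] != prev:
                return False
            prev = hashlib.sha256(json.dumps(e, sort_keys=True).encode()).hexdigest()
        return True
```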
The four-tier classification from the AI Ethics paper must be enforced technically, not just by policy.
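Technical enforcement means the check runs in middleware, before a payload ever reaches an agent, a shared zone, or an external tool. A hedged sketch; the redaction step is a placeholder:

```python
class ClassificationViolation(Exception):
    """Raised when a payload would cross a boundary its tier forbids."""

def redact(text: str) -> str:
    # Placeholder: a real pipeline would strip names, figures, and
    # counterparty identifiers before Tier 2 content is shared.
    return "[redacted excerpt]"

def enforce(tier: int, destination: str, payload: str) -> str:
    """Tiers follow the classification table above (1 = Restricted)."""
    if tier == 1:
        raise ClassificationViolation("Tier 1 data never reaches AI")
    if tier == 2 and destination == "shared_workspace":
        return redact(payload)
    if tier == 3 and destination == "external_tool":
        raise ClassificationViolation("Tier 3 stays on the enterprise platform")
    return payload
```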
Someone or something must watch the watchers: audit logs only protect the fund if they are actually reviewed, and incidents only get handled if someone owns the response.
A shared document repository with intelligent search would solve multiple problems simultaneously. Retrieval-augmented generation (RAG) is not a separate system; it is the middleware layer that serves as access controller, search engine, and audit logger in one.
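To make the three-roles-in-one point concrete, here is a toy sketch in which naive keyword matching stands in for embedding-based retrieval; the data structures are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    text: str
    readers: set  # partner IDs permitted to see this document

@dataclass
class RagMiddleware:
    """One layer, three roles: access controller, search engine, audit logger."""
    docs: list = field(default_factory=list)
    audit: list = field(default_factory=list)

    def query(self, partner_id: str, q: str) -> list:
        hits = [d for d in self.docs if q.lower() in d.text.lower()]  # search
        allowed = [d for d in hits if partner_id in d.readers]        # access control
        self.audit.append((partner_id, "query", q, len(allowed)))     # audit log
        return [d.text for d in allowed]

# Usage: RagMiddleware(docs=[Doc("Q3 pipeline review", {"alice"})]).query("alice", "pipeline")
```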
The platform hosting AI agents requires its own governance layer, separate from agent-level controls.
Every system that touches fund data must be inventoried and authorized:
| Requirement | Details |
|---|---|
| System inventory | Maintain a register of all systems, platforms, and tools in the fund's technology stack. Include: system name, purpose, data types processed, data residency, vendor, contract terms, last review date. |
| Integration authorization | New tools or integrations require approval before connecting to fund systems. Individual partner decision for personal-zone tools; GP vote for MAKR-zone integrations. |
| Data flow documentation | Map how data moves between systems. Identify every point where fund data crosses a system boundary. Each crossing point needs a classification check and audit log. |
| Annual review | Review all integrations annually. Remove unused connections. Re-assess vendor security posture. Update data flow maps. |
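The inventory fields above translate directly into a typed record, which makes the register queryable rather than a static document. A sketch; the sample entry is hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SystemRecord:
    """One row of the system register, mirroring the fields above."""
    name: str
    purpose: str
    data_types: list     # classification tiers processed, e.g. ["Tier 2", "Tier 3"]
    data_residency: str  # jurisdiction(s) where data is stored
    vendor: str
    contract_terms: str
    last_review: date

# Hypothetical sample entry, not an actual register record.
REGISTER = [
    SystemRecord(
        name="PureBrain",
        purpose="AI agent platform",
        data_types=["Tier 2", "Tier 3", "Tier 4"],
        data_residency="unresolved (flagged in the AI Ethics paper)",
        vendor="PureBrain",
        contract_terms="enterprise plan",
        last_review=date(2026, 1, 1),
    ),
]
```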
All partners must explicitly agree on:
| Topic | Agreement Needed |
|---|---|
| What each partner can share with their AI | Follows the four-tier classification. No exceptions without GP vote. |
| What personal data is off-limits to MAKR workspace | Everything not explicitly published to the shared workspace. |
| Who owns MAKR agent data if a partner exits | MAKR-zone data stays with the fund. Personal-zone data leaves with the partner. |
| Who serves as AI Governance Owner | Named individual responsible for policy, audit review, incident response. |
| How disputes are resolved | If partners disagree on data classification, the Governance Owner decides. Appeals to full GP vote. |
These cannot be decided unilaterally. Each requires explicit agreement from all GPs.
If PureBrain cannot support access zones natively, Option A requires custom middleware development, and Option B becomes more practical despite its higher friction. This is a factual question to put to PureBrain's team before the partners meeting.
Self-hosting (e.g., a private Claude API deployment) eliminates third-party data concerns but adds operational complexity and cost. At what AUM does this become justified? The AI Ethics paper noted that PureBrain lacks SOC 2 and ISO 27001 certification; does this matter pre-first-close, or only at institutional scale?
Platform administration (user accounts, permissions, settings) is a technical role. AI Governance (policy, audit review, incident response) is a compliance role. They may be the same person in a lean team, but the responsibilities are distinct. Which partner takes this on?
If a partner wants to connect a new research tool, analytics platform, or third-party API to their workflow, does that require GP approval? What if the tool touches MAKR data indirectly? A lightweight approval process prevents shadow IT without creating bureaucratic overhead.
Different systems may store data in different jurisdictions. The AI Ethics paper identified this as a gap for PureBrain specifically, but the same question applies to every system in the stack. What is the fund's position on acceptable data residency jurisdictions?
Systems built specifically for fund operations may serve as the middleware/RAG layer discussed in this paper, or may operate independently. The governance framework needs to account for these systems — who builds them, who maintains them, how they connect to the agent and platform layers, and how data flows through them.
Some partners may prefer complete separation (Option B) for peace of mind, even at the cost of efficiency. This is a values question, not a technical one.
If a partner's personal AI produced a deal memo that became a MAKR deliverable, does MAKR retain it? What about research notes? Strategic analyses? This needs to be addressed in the partnership agreement.
Example: A partner researches a topic for personal interest that turns out to be directly relevant to MAKR's thesis. Under the chosen architecture, how does that research move from personal zone to MAKR zone? Automatically? By explicit publication? With attribution? This is a common real-world scenario and needs a clear, low-friction process.
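Whatever process the partners choose, explicit publication with attribution is simple to encode, which keeps the friction low. A minimal sketch:

```python
from datetime import datetime, timezone

def promote_to_makr(research: str, partner_id: str, workspace: list) -> None:
    """Moves personal-zone research into the MAKR zone by explicit
    publication: never automatic, always attributed, always timestamped."""
    workspace.append({
        "content": research,
        "source_zone": "personal",
        "attributed_to": partner_id,
        "published_at": datetime.now(timezone.utc).isoformat(),
    })
```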
This person reviews audit logs, handles incidents, updates the AI Use Policy, and answers LP questions about AI governance during ODD. It should be someone who understands both the technology and the regulatory landscape.
Options range from ~$1,200/year (policy + basic tools) to $50K+ (custom middleware, RAG infrastructure, compliance tooling). The Ethics paper estimated $5-15K for outside counsel review. Where does this sit in the fund's operational budget?
Minimum before first close: AI Use Policy, LP disclosure language, NDA provisions, partner boundaries agreement. The full technical architecture (RAG, middleware, audit system) can be phased.
Start with Option A if PureBrain can support access zones (or if custom middleware is built). Fall back to Option B if platform limitations make zone-based access impractical.
The competitive advantage of MAKR's AI-first operating model comes from context richness. Each partner's AI has been learning for months — understanding their thinking patterns, research preferences, communication style, strategic instincts. Splitting that into two agents (Option B) doesn't just double the cost — it halves the intelligence. The MAKR agent starts from zero while the personal agent retains everything.
The cross-pollination between personal research and fund thinking is not a bug — it's the feature. The partner who explores a topic for personal interest and then brings that knowledge to a MAKR deal discussion is exactly the behavior we want to encourage, not architect around.
Option C (one agent, two portals) is the lowest-friction approach but relies on the agent correctly identifying which mode it's in. Consider a scenario where a partner's content is accidentally published to the shared fund workspace through a default configuration. The failure mode isn't malice; it's defaults. Option A's technical zone enforcement is a harder boundary than Option C's trust-based approach.
If PureBrain cannot support access zones and custom middleware is not justified pre-first-close, Option B provides the cleanest separation. The context loss is real but manageable — partners can explicitly share relevant research with their MAKR agent. The key advantage: when an LP asks "how is partner data segregated?", the answer is simple and auditable: "separate agents, separate data stores, logged interactions."
| Phase | What | When | Cost Estimate |
|---|---|---|---|
| Phase 1 | AI Use Policy + LP disclosure language + NDA provisions + partner boundaries agreement | Before first close | $5-15K (outside counsel) |
| Phase 2 | Determine platform capabilities. Choose Option A or B. Implement basic audit logging. Complete system inventory. | Within 60 days of partner agreement | $2-5K (platform assessment) |
| Phase 3 | RAG infrastructure for shared document repository. Middleware layer if Option A. Data flow mapping across full stack. | Within 6 months of first close | $10-30K (development + infrastructure) |
| Phase 4 | Full monitoring, automated alerts, quarterly compliance reviews. Push the platform vendor for SOC 2 or evaluate self-hosting. Annual integration review. | Ongoing | $5-10K/year |