MAKR AI & Technology Operations Framework

Working Paper — Agents, Platforms & Infrastructure: Architecture, Governance & Access Controls
May 3, 2026 — For Partners Discussion

The Situation

MAKR operates with multiple partners, each using AI agents through PureBrain for daily operations. Beyond the agents themselves, the fund relies on a broader technology stack — platforms like PureBrain where agents operate, infrastructure systems that support fund operations, and third-party tools and integrations that touch fund data. Together, these form the AI operations ecosystem.

This ecosystem creates a governance challenge that must be resolved before the first LP dollar arrives:

  • Each partner's AI may handle both personal and fund business — research that starts as personal curiosity often feeds directly into fund strategy
  • Fund data must be segregated and governed — LP data, deal pipeline, IC deliberations cannot leak across boundaries
  • Partners should not see each other's non-MAKR work — personal projects, advisory work, and other business activities are private
  • No partner should be blocked from legitimate research — an AI that prevents you from exploring a topic because it "touches MAKR" is useless
  • Platforms and infrastructure carry the same risks as agents — data residency, access controls, and audit trails must be governed across the entire technology stack, not just at the agent layer

What the AI Ethics Paper Established

The AI Ethics, Privacy & Data Sharing working paper (April 29, 2026) established the foundational principles — what data can and cannot be shared, the regulatory landscape, and where the lines are drawn.

Relationship between the two papers: The Ethics paper answers "should we share this data?" This paper answers "how do we technically enforce those decisions across agents, platforms, and infrastructure?" This paper implements the principles established there. It does not revisit the classification decisions — it operationalizes them.
Core principle from the Ethics paper: "Share what makes you effective, protect what could harm others, and document what you decided and why."

The Ethics paper defined a four-tier data classification:

Tier | Classification | AI Sharing Rule | Examples
Tier 1 | Restricted | Never share with AI | LP personal data, KYC/AML, privileged legal, MNPI
Tier 2 | Confidential | Share with care, redact when possible | Portfolio company financials, deal pipeline, partner comms
Tier 3 | Internal | Share on enterprise platform | Fund operations, workflows, templates, aggregate performance
Tier 4 | Public | Share freely | Market research, industry analysis, preferences, scheduling

The Ethics paper also identified four prerequisites before first close:

  1. An AI Use Policy (1-2 pages, GP-approved)
  2. LP Disclosure Language in PPM/LPA
  3. AI-specific NDA provisions
  4. Partner agreement on boundaries — this paper addresses this item directly

The Real-World Complexity

The challenge is not just "what data can AI see" — it's "how do multiple AI agents coexist in a shared fund operation while maintaining personal privacy and regulatory compliance?"

Hypothetical scenario: A partner deploys a personal project to a shared cloud account instead of a private one. Other partners can now see work unrelated to the fund. The error is discovered and corrected, but it illustrates how separation failures happen in practice — not through malice, but through default configurations.

Each partner's AI agent accumulates deep context over time. That context is the source of its value — and the source of the segregation challenge. A partner's AI that has been working on deal analysis for 6 months knows things that should stay within the fund's governance perimeter, but the same AI also knows personal preferences and activities unrelated to the fund.

Three Worlds, One Person

Each partner operates across at least three domains:

Domain | Scope | Visibility | AI Access
Personal | Personal interests, finances, learning, day-to-day life | Private to individual | Full — no restrictions
Other Business | Advisory roles, board seats, other business ventures | Private to individual | Full — no restrictions
MAKR Fund | Fund operations, deals, LPs, portfolio, IC decisions | Shared among GPs (with role-based controls) | Governed by AI Use Policy + data classification

The AI agent must serve all three domains seamlessly without leaking information between them. A partner researching a topic for personal interest should not be blocked because the topic overlaps with MAKR's sectors. Conversely, MAKR deal data should not appear in a deliverable for another business entity.

Three Architecture Options

Each option represents a different answer to the question: "How many AI agents does each partner need, and how do they interact with shared fund resources?"

Option A: One Agent, Access Zones

Concept: Each partner keeps their single AI agent. The platform provides access zones that control what data the agent can read/write in the shared MAKR workspace. The agent operates freely in personal/other business zones with no restrictions.

How it works:
• Personal Zone — Partner's private workspace. Only their agent can access. Invisible to others.
• MAKR Shared Zone — All partners' agents can access. All actions logged. Data classified by tier.
• MAKR Restricted Zone — Role-based access (e.g., only compliance officer can access LP KYC data).
• A middleware layer (access gateway) sits between agents and the MAKR zones, enforcing rules and writing audit logs.
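
To make the gateway concrete, here is a minimal Python sketch of zone-based enforcement. The zone names, roles, and the AccessRequest type are illustrative assumptions, not a PureBrain API; the point is that every MAKR-zone action passes through one choke point that both decides and logs.

```python
# Minimal sketch of Option A's access gateway. Zone names, roles, and the
# AccessRequest type are illustrative assumptions, not a PureBrain API.
from dataclasses import dataclass
from datetime import datetime, timezone

ZONE_RULES = {
    "personal":        {"logged": False},  # private, unrestricted, invisible to others
    "makr_shared":     {"logged": True},   # all partners' agents, all actions logged
    "makr_restricted": {"logged": True},   # role-based access only
}
RESTRICTED_ROLES = {"compliance_officer"}  # e.g., the only role that may read LP KYC data

@dataclass
class AccessRequest:
    partner: str
    role: str
    zone: str
    resource: str
    action: str  # "read" or "write"

audit_log: list[dict] = []

def gateway(req: AccessRequest) -> bool:
    """Decide one request and, for MAKR zones, append an audit row."""
    rules = ZONE_RULES.get(req.zone)
    if rules is None:
        return False  # unknown zone: deny by default
    allowed = req.zone != "makr_restricted" or req.role in RESTRICTED_ROLES
    if rules["logged"]:
        audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "partner": req.partner, "zone": req.zone,
            "resource": req.resource, "action": req.action,
            "allowed": allowed,
        })
    return allowed

# Shared-zone read: allowed and logged. Restricted-zone read by a GP: denied and logged.
print(gateway(AccessRequest("partner_a", "gp", "makr_shared", "deal_memo.md", "read")))    # True
print(gateway(AccessRequest("partner_a", "gp", "makr_restricted", "lp_kyc.xlsx", "read"))) # False
```

Note the deny-by-default stance: anything the gateway does not recognize is refused, and denied attempts are logged alongside approved ones.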

Advantages

  • Simplest for partners — one AI, one relationship, full accumulated context
  • No handoff friction between agents
  • Personal research naturally feeds into fund thinking (the cross-pollination advantage)
  • Lower cost — one platform subscription per partner
  • The AI Ethics paper's data classification maps directly to zone permissions

Risks & Challenges

  • Requires platform support for multi-tenant access controls (PureBrain may not support this yet)
  • A misconfigured zone could expose personal data to the shared workspace
  • Agent memory is unified — it "knows" things from all zones even if it can't share them
  • Audit trail must be robust enough for LP due diligence

Option B: Two Agents Per Partner

Concept: Each partner operates two separate AI agents. One for personal/other business (fully private), one dedicated to MAKR (shared workspace). The MAKR agent can request information from the personal agent through a controlled channel.

How it works:
• Personal Agent — Operates independently. Full personal context. No MAKR data.
• MAKR Agent — Operates in shared MAKR workspace. All partners' MAKR agents coexist. Actions logged.
• Communication between agents is explicit and logged — partner must approve each information transfer.
• MAKR agents can see shared fund data; personal agents cannot.
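
A minimal sketch of that controlled channel follows. The approve callback is a hypothetical stand-in for an interactive partner prompt; none of these names reflect a real PureBrain interface. The property to preserve is that no transfer happens silently.

```python
# Minimal sketch of Option B's controlled channel between a partner's two
# agents. The approve callback is a hypothetical stand-in for an interactive
# partner prompt; nothing here reflects a real PureBrain interface.
from datetime import datetime, timezone
from typing import Callable

transfer_log: list[dict] = []

def transfer_to_makr_agent(partner: str, payload: str,
                           approve: Callable[[str], bool]) -> bool:
    """Move information from the personal agent to the MAKR agent only on
    explicit partner approval; every attempt is logged, approved or not."""
    approved = approve(payload)
    transfer_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "partner": partner,
        "summary": payload[:80],   # log a summary, not the full payload
        "approved": approved,
    })
    return approved

# The partner reviews the payload and approves this transfer.
ok = transfer_to_makr_agent("partner_b",
                            "Sector research: vertical SaaS pricing trends",
                            approve=lambda payload: True)
print(ok, len(transfer_log))  # True 1
```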

Advantages

  • Clean separation — no risk of personal data leaking into MAKR workspace
  • Partners never see each other's personal AI or its outputs
  • MAKR agent can be standardized — same tools, same policies, same audit trail
  • Easier to demonstrate governance to LPs during ODD
  • If a partner leaves, the MAKR agent's knowledge stays with the fund

Risks & Challenges

  • Double the cost — two subscriptions, two agents to maintain per partner
  • Context fragmentation — the MAKR agent doesn't benefit from personal research unless explicitly transferred
  • Handoff friction — "I researched this on my personal agent, now I need to re-explain it to the MAKR agent"
  • Partners may resist the workflow overhead
  • The MAKR agent starts with zero context — months of ramp-up

Option C: One Agent, Two Interfaces

Concept: Each partner keeps their single AI agent, but accesses it through two separate portals. A private portal for personal/other business, and a MAKR portal where outputs are visible to partners. The agent itself is unified — it knows everything — but the publishing layer controls what reaches the shared workspace.

How it works:
• Private Portal — Partner interacts freely. Nothing published to MAKR workspace.
• MAKR Portal — Partner explicitly publishes deliverables/data to the shared fund workspace.
• The AI has unified memory but applies context-awareness — it knows when it's operating in "MAKR mode" vs "personal mode."
• Only MAKR Portal outputs are logged and visible to other partners.
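
A minimal sketch of the publishing layer, with names that are illustrative assumptions. The one design choice that matters is the default: output stays private unless the partner explicitly publishes, which directly targets the accidental-publication risk noted below.

```python
# Minimal sketch of Option C's publishing layer (names are illustrative
# assumptions). The key design choice is the default: agent output stays in
# the private portal unless the partner explicitly publishes it.
makr_workspace: list[dict] = []  # visible to all partners, logged

def produce(partner: str, deliverable: str, publish: bool = False) -> None:
    """Route agent output: private by default, shared only on explicit publish."""
    if publish:
        makr_workspace.append({"partner": partner, "deliverable": deliverable})
    # else: output remains in the partner's private portal, unlogged and unseen

produce("partner_c", "Personal reading notes")                    # stays private
produce("partner_c", "Deal memo: Series A target", publish=True)  # explicit publication
print(len(makr_workspace))  # 1
```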

Advantages

  • One brain, two faces — no context loss, no handoff friction
  • Agent can draw on personal research when working on MAKR deliverables
  • Partners choose what to publish to the shared workspace
  • Lower cost than Option B

Risks & Challenges

  • Discipline-dependent — the agent must correctly identify which mode it's in
  • Risk of accidental publication (default configuration errors)
  • LPs may question whether a "trust the AI to know the difference" approach is robust enough
  • No hard technical barrier between personal and MAKR data in memory
  • Harder to audit — "how do we know the agent didn't use personal data in a MAKR deliverable?"

Side-by-Side Comparison

Criterion | A: One Agent, Zones | B: Two Agents | C: One Agent, Two Portals
Separation strength | Medium | Strong | Medium
Context preservation | Full | Fragmented | Full
Partner friction | Low | High | Low
LP audit confidence | Medium | High | Medium
Cost per partner | 1x | 2x | 1x
Platform dependency | High (needs zone support) | Low | Medium
Accidental leak risk | Medium | Low | High
Knowledge retention if partner exits | Leaves with partner | MAKR agent stays | Leaves with partner
Implementation complexity | Medium | Medium | Low

Beyond Agents: Platform & Infrastructure Considerations

The architecture options above address agent-level segregation. But agents operate within a broader technology stack, and each layer carries its own governance requirements.

Platform Layer (e.g., PureBrain)

The platform is where agents live, conversations persist, and memory accumulates. Governance questions at this layer:

  • Administration: Who controls user accounts, permissions, and platform settings? Is there a single admin or per-partner control?
  • Data residency: Where does the platform store conversation history, uploaded files, and agent memory? Which jurisdiction applies?
  • Platform updates: When the platform provider pushes updates, who reviews them for impact on fund operations? How are breaking changes managed?
  • Multi-tenancy: Does the platform natively separate partner workspaces, or is everything in a shared environment?
  • Backup & recovery: If the platform goes down, what is the recovery process? Who owns the data export?
Current state: The AI Ethics paper identified that PureBrain lacks SOC 2 and ISO 27001 certification, makes no explicit data residency commitment, and discloses no at-rest encryption standard. These gaps apply to the platform layer regardless of how agents are configured.

Infrastructure Layer (Connected Systems)

Fund operations rely on systems beyond the AI platform — cloud hosting, document management, communication tools, and purpose-built infrastructure. Each integration point is a potential data flow that requires governance.

System Type | Examples | Data Flow Risk | Governance Need
Cloud hosting | Cloudflare, AWS, cloud storage | Deployed content visibility, access controls | Account separation, deployment authorization
Document management | Google Drive, SharePoint, VDRs | Shared folders, permission inheritance | Folder-level access controls, sharing audit
Communication | Email, messaging, video conferencing | AI agents sending/receiving on behalf of partners | Authorization scope, audit trail
Fund operations infrastructure | Purpose-built systems, integrations | Centralized data processing, cross-system data flows | Integration authorization, data flow mapping
Third-party tools | Analytics, research platforms, APIs | Data shared with external services | Vendor assessment, DPA requirements

Data Flow Mapping

A complete governance framework requires mapping how data moves across the entire stack — not just within the agent layer:

Example data flow:
Partner researches a topic (agent) → saves analysis to document store (infrastructure) → publishes deliverable to cloud hosting (infrastructure) → shares link with other partners via platform (platform) → other partners' agents access the deliverable (agent)

Governance checkpoints needed at each arrow: Who authorized the transition? Was the data classification maintained? Is the action logged?

Without this mapping, governance is applied in silos — agent access is controlled but the same data flows freely through infrastructure with no oversight.
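
To show what a checkpoint at each arrow could look like, here is a minimal sketch. The hop names mirror the example flow above; the checkpoint signature, the Tier 1 rule, and the log shape are assumptions, not an existing tool.

```python
# Minimal sketch of a governance checkpoint at each arrow of the example
# flow above. Hop names mirror the flow; everything else is an assumption.
from datetime import datetime, timezone

flow_log: list[dict] = []

def checkpoint(actor: str, src: str, dst: str, tier: int, authorized: bool) -> bool:
    """Gate one boundary crossing: check authorization, carry the tier, log it."""
    ok = authorized and tier != 1  # Tier 1 data never crosses a system boundary
    flow_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor, "from": src, "to": dst,
        "tier": tier, "allowed": ok,
    })
    return ok

# Walk the flow: the classification travels with the data instead of being
# lost at each boundary, and every hop leaves a log row.
hops = [("agent", "document_store"), ("document_store", "cloud_hosting"),
        ("cloud_hosting", "platform"), ("platform", "agent")]
for src, dst in hops:
    assert checkpoint("partner_a", src, dst, tier=3, authorized=True)
print(len(flow_log))  # 4
```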

Practical Governance: What Must Be True Regardless of Architecture

Whichever option the partners choose, the following governance elements are non-negotiable for institutional credibility:

1. Centralized Audit Log

Every AI agent action that touches MAKR data must be logged to a shared, tamper-proof audit trail:

  • What data was accessed (file name, classification tier)
  • Which agent/partner initiated the action
  • What was produced (deliverable, analysis, communication)
  • Timestamp and session identifier
Why this matters for LPs: 85% of LPs reject managers over operational concerns alone (ODD data). A documented AI audit trail demonstrates institutional-grade governance. Most competitors cannot show this.
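
As a sketch of what one tamper-evident record could look like: each entry carries the four fields above plus a hash chained to the previous entry, so later edits are detectable. Field names and the chaining scheme are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of one tamper-evident audit record. Field names and the
# hash-chaining scheme are illustrative assumptions, not a prescribed design.
import hashlib
import json
from datetime import datetime, timezone

def make_entry(prev_hash: str, partner: str, resource: str, tier: int,
               output: str, session: str) -> dict:
    """Build a log entry chained to its predecessor via a SHA-256 hash."""
    body = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "partner": partner,    # which agent/partner initiated the action
        "resource": resource,  # what data was accessed
        "tier": tier,          # its classification tier
        "output": output,      # what was produced
        "session": session,    # session identifier
        "prev": prev_hash,     # link to the previous entry
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

genesis = "0" * 64
e1 = make_entry(genesis, "partner_a", "pipeline.xlsx", 2, "deal_summary.md", "s-001")
e2 = make_entry(e1["hash"], "partner_b", "lp_update.docx", 3, "q2_letter.md", "s-002")
# Editing e1 after the fact breaks e2's "prev" link, so tampering is detectable.
```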

2. Data Classification Enforcement

The four-tier classification from the AI Ethics paper must be enforced technically, not just by policy:

  • Tier 1 (Restricted): Technical block — AI agents cannot access. Stored in separate, encrypted repository. Human-only access with 2FA.
  • Tier 2 (Confidential): AI can access with logging. Automatic redaction prompts for names/numbers when producing external-facing documents.
  • Tier 3 (Internal): AI has full access. All actions logged.
  • Tier 4 (Public): No restrictions.
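
A minimal sketch of those tier rules as code rather than policy. Tier numbers follow the Ethics paper's classification; the function name and the redaction prompt wording are illustrative assumptions.

```python
# Minimal sketch of the tier rules as code rather than policy. Tier numbers
# follow the Ethics paper's classification; names are illustrative assumptions.
def handle_access(tier: int, resource: str, external_facing: bool) -> str:
    if tier == 1:
        # Hard technical block: Tier 1 lives in a separate encrypted store,
        # human-only access with 2FA; agents never reach it.
        return f"DENIED: {resource} is Tier 1 (human-only)"
    if tier == 2 and external_facing:
        return f"ALLOWED, logged, redaction prompt: review names/numbers in {resource}"
    if tier in (2, 3):
        return f"ALLOWED and logged: {resource}"
    return f"ALLOWED: {resource}"  # Tier 4: no restrictions

print(handle_access(1, "lp_kyc.xlsx", external_facing=False))
print(handle_access(2, "portfolio_financials.xlsx", external_facing=True))
```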

3. Monitoring & Alerting

Someone or something must watch the watchers:

  • Automated alerts for: agent accessing data outside its authorized zone, unusual volume of data exports, access to Tier 2 data outside business hours
  • Weekly digest to the designated AI governance owner summarizing all agent activity across the MAKR workspace
  • Quarterly review of audit logs as part of standard compliance cycle
The monitoring gap today: Currently, no one monitors what each partner's AI agent does. There is no centralized log. Each agent operates independently with full access to whatever its partner shares. This is the gap this framework aims to close.
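
A minimal sketch of the automated alert rules running over audit-log entries. The thresholds, the business-hours window, and the record shape are assumptions for the governance owner to tune.

```python
# Minimal sketch of the alert rules over audit-log entries. Thresholds,
# business-hours window, and record shape are assumptions to be tuned.
from datetime import datetime, time

def alerts_for(entry: dict) -> list[str]:
    out = []
    if not entry.get("zone_authorized", True):
        out.append("ALERT: access outside authorized zone")
    if entry.get("export_mb", 0) > 100:  # assumed volume threshold
        out.append("ALERT: unusual volume of data exports")
    ts = datetime.fromisoformat(entry["ts"])
    if entry.get("tier") == 2 and not time(8) <= ts.time() <= time(19):
        out.append("ALERT: Tier 2 access outside business hours")
    return out

entry = {"ts": "2026-05-03T02:14:00", "tier": 2,
         "export_mb": 250, "zone_authorized": True}
for alert in alerts_for(entry):
    print(alert)  # fires twice: export volume and off-hours Tier 2 access
```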

4. RAG Architecture (Retrieval-Augmented Generation)

A shared document repository with intelligent search would solve multiple problems simultaneously:

  • Efficiency: Agents query a central knowledge base instead of each loading full documents into their context window. Reduces token consumption by 60-80%.
  • Access control: The RAG layer enforces who can search which documents. Tier 1 documents excluded entirely.
  • Consistency: All agents reference the same source of truth for fund documents, policies, and precedents.
  • Audit: Every query is logged — who searched for what, when, what was returned.

RAG is not a separate system — it is the middleware layer that serves as access controller, search engine, and audit logger in one.
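
A minimal sketch of that combined role follows. The in-memory list and keyword match stand in for a real vector store; tier semantics follow the Ethics paper (a lower number is more sensitive, so a caller cleared for Tier 2 also sees Tiers 3 and 4), and everything else is an illustrative assumption.

```python
# Minimal sketch of the RAG layer as access controller, search engine, and
# audit logger in one. The in-memory list and keyword match stand in for a
# real vector store; all names are illustrative assumptions.
from datetime import datetime, timezone

# Tier 1 documents are never indexed, so they cannot be retrieved at all.
index = [
    {"doc": "fund_policies.md", "tier": 3, "text": "ai use policy and audit cycle"},
    {"doc": "pipeline_q2.md",   "tier": 2, "text": "deal pipeline notes for q2"},
]
query_log: list[dict] = []

def rag_query(partner: str, min_tier: int, query: str) -> list[str]:
    """Return only documents at or above the caller's clearance; log every query."""
    hits = [d["doc"] for d in index
            if d["tier"] >= min_tier and query.lower() in d["text"]]
    query_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "partner": partner, "query": query, "returned": hits,
    })
    return hits

print(rag_query("partner_a", min_tier=2, query="pipeline"))  # ['pipeline_q2.md']
print(rag_query("partner_a", min_tier=3, query="pipeline"))  # [] (Tier 2 filtered out)
```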

5. Platform Governance

The platform hosting AI agents requires its own governance layer, separate from agent-level controls:

  • Platform administrator role: A designated individual (may overlap with AI Governance Owner) responsible for user management, permission settings, and platform configuration
  • Change management: Platform updates reviewed before adoption. Configuration changes logged with before/after state
  • Vendor management: Annual review of platform provider's security posture, certifications, and terms of service. Push for SOC 2 / ISO 27001
  • Data portability: Regular export of critical data (conversation history, agent memory, deliverables) to fund-controlled storage. Prevents vendor lock-in and ensures business continuity

6. Infrastructure Register & Integration Authorization

Every system that touches fund data must be inventoried and authorized:

Requirement | Details
System inventory | Maintain a register of all systems, platforms, and tools in the fund's technology stack. Include: system name, purpose, data types processed, data residency, vendor, contract terms, last review date.
Integration authorization | New tools or integrations require approval before connecting to fund systems. Individual partner decision for personal-zone tools; GP vote for MAKR-zone integrations.
Data flow documentation | Map how data moves between systems. Identify every point where fund data crosses a system boundary. Each crossing point needs a classification check and audit log.
Annual review | Review all integrations annually. Remove unused connections. Re-assess vendor security posture. Update data flow maps.
Why this matters: LP ODD questionnaires increasingly ask about technology infrastructure, not just AI use. A documented system inventory with data flow maps demonstrates operational maturity that most emerging managers cannot show.
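
A minimal sketch of one register entry carrying the inventory fields above; the dataclass and the example values are illustrative assumptions.

```python
# Minimal sketch of one register entry with the inventory fields above.
# The dataclass and the example values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class SystemRecord:
    name: str
    purpose: str
    data_tiers: list[str]   # classification tiers the system processes
    data_residency: str
    vendor: str
    contract_terms: str
    last_review: date

register = [
    SystemRecord("PureBrain", "AI agent platform",
                 ["Tier 2", "Tier 3", "Tier 4"], "undisclosed",
                 "PureBrain", "SaaS subscription", date(2026, 5, 1)),
]

# Annual review: flag any system not re-assessed within the past year.
overdue = [s.name for s in register
           if (date.today() - s.last_review).days > 365]
print(overdue)
```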

7. Partner Boundaries Agreement

All partners must explicitly agree on:

Topic | Agreement Needed
What each partner can share with their AI | Follows the four-tier classification. No exceptions without GP vote.
What personal data is off-limits to MAKR workspace | Everything not explicitly published to the shared workspace.
Who owns MAKR agent data if a partner exits | MAKR-zone data stays with the fund. Personal-zone data leaves with the partner.
Who serves as AI Governance Owner | Named individual responsible for policy, audit review, incident response.
How disputes are resolved | If partners disagree on data classification, the Governance Owner decides. Appeals to full GP vote.

Questions Requiring Partner Alignment

These cannot be decided unilaterally. Each requires explicit agreement from all GPs.

Platform & Technology

Q1: Does PureBrain support multi-tenant access controls?

If not, Option A requires custom middleware development. Option B becomes more practical despite its higher friction. This is a factual question to put to PureBrain's team before the partners meeting.

Q2: Should MAKR pursue its own AI infrastructure (self-hosted) or rely on PureBrain?

Self-hosting (e.g., a private Claude API deployment) removes the platform intermediary from the data path, though it does not eliminate third-party exposure entirely, and it adds operational complexity and cost. At what AUM does this become justified? The AI Ethics paper noted that PureBrain lacks SOC 2 and ISO 27001 certification; does this matter before first close, or only once the fund reaches institutional scale?

Q3: Who administers the platform, and is that the same person as the AI Governance Owner?

Platform administration (user accounts, permissions, settings) is a technical role. AI Governance (policy, audit review, incident response) is a compliance role. They may be the same person in a lean team, but the responsibilities are distinct. Which partner takes this on?

Q4: How are new tools and integrations approved?

If a partner wants to connect a new research tool, analytics platform, or third-party API to their workflow, does that require GP approval? What if the tool touches MAKR data indirectly? A lightweight approval process prevents shadow IT without creating bureaucratic overhead.

Q5: What is the data residency position across the full technology stack?

Different systems may store data in different jurisdictions. The AI Ethics paper identified this as a gap for PureBrain specifically, but the same question applies to every system in the stack. What is the fund's position on acceptable data residency jurisdictions?

Q6: How does purpose-built infrastructure fit into the governance framework?

Systems built specifically for fund operations may serve as the middleware/RAG layer discussed in this paper, or may operate independently. The governance framework needs to account for these systems — who builds them, who maintains them, how they connect to the agent and platform layers, and how data flows through them.

Partner Alignment

Q7: Is each partner comfortable with their AI agent operating in a shared workspace?

Some partners may prefer complete separation (Option B) for peace of mind, even at the cost of efficiency. This is a values question, not a technical one.

Q8: What happens to AI-generated work product if a partner leaves?

If a partner's personal AI produced a deal memo that became a MAKR deliverable, does MAKR retain it? What about research notes? Strategic analyses? This needs to be addressed in the partnership agreement.

Q9: How should partners handle research that crosses personal/MAKR boundaries?

Example: A partner researches a topic for personal interest that turns out to be directly relevant to MAKR's thesis. Under the chosen architecture, how does that research move from personal zone to MAKR zone? Automatically? By explicit publication? With attribution? This is a common real-world scenario and needs a clear, low-friction process.

Governance & Roles

Q10: Who should be the AI Governance Owner?

This person reviews audit logs, handles incidents, updates the AI Use Policy, and answers LP questions about AI governance during ODD. It should be someone who understands both the technology and the regulatory landscape.

Q11: What is the budget for AI governance infrastructure?

Options range from ~$1,200/year (policy + basic tools) to $50K+ (custom middleware, RAG infrastructure, compliance tooling). The Ethics paper estimated $5-15K for outside counsel review. Where does this sit in the fund's operational budget?

Implementation

Q12: Timeline — what must be in place before first close vs. can evolve after?

Minimum before first close: AI Use Policy, LP disclosure language, NDA provisions, partner boundaries agreement. The full technical architecture (RAG, middleware, audit system) can be phased.

Tarin's Recommendation

Recommended approach: Option A (One Agent, Access Zones) with Option B as fallback.

Start with Option A if PureBrain can support access zones (or if custom middleware is built). Fall back to Option B if platform limitations make zone-based access impractical.

Why Option A

The competitive advantage of MAKR's AI-first operating model comes from context richness. Each partner's AI has been learning for months — understanding their thinking patterns, research preferences, communication style, strategic instincts. Splitting that into two agents (Option B) doesn't just double the cost — it halves the intelligence. The MAKR agent starts from zero while the personal agent retains everything.

The cross-pollination between personal research and fund thinking is not a bug — it's the feature. The partner who explores a topic for personal interest and then brings that knowledge to a MAKR deal discussion is exactly the behavior we want to encourage, not architect around.

From the Ethics paper: "The question isn't whether to use AI in fund management — 85% of VCs already do. The question is whether to do it thoughtfully or carelessly. There is an opportunity to get this right from day one."

Why Not Option C

Option C (one agent, two portals) is the lowest-friction approach but relies on the agent correctly identifying which mode it's in. Consider a scenario where a partner's content is accidentally published to the shared fund workspace through a default configuration. The failure mode isn't malice; it's defaults. Option A's technical zone enforcement is a harder boundary than Option C's trust-based approach.

Why Option B as Fallback

If PureBrain cannot support access zones and custom middleware is not justified pre-first-close, Option B provides the cleanest separation. The context loss is real but manageable — partners can explicitly share relevant research with their MAKR agent. The key advantage: when an LP asks "how is partner data segregated?", the answer is simple and auditable: "separate agents, separate data stores, logged interactions."

Phased Implementation

Phase | What | When | Cost Estimate
Phase 1 | AI Use Policy + LP disclosure language + NDA provisions + partner boundaries agreement | Before first close | $5-15K (outside counsel)
Phase 2 | Determine platform capabilities. Choose Option A or B. Implement basic audit logging. Complete system inventory. | Within 60 days of partner agreement | $2-5K (platform assessment)
Phase 3 | RAG infrastructure for shared document repository. Middleware layer if Option A. Data flow mapping across full stack. | Within 6 months of first close | $10-30K (development + infrastructure)
Phase 4 | Full monitoring, automated alerts, quarterly compliance reviews. SOC 2 push for platform or evaluate self-hosting. Annual integration review. | Ongoing | $5-10K/year

What This Paper Is Not

  • This is not a final policy document — it is a working paper for partner discussion
  • This does not prescribe a single answer — it presents options with trade-offs for collective decision
  • This does not replace legal counsel — the chosen architecture should be reviewed by fund formation counsel
  • This does not commit to any platform — the framework is platform-agnostic
What this paper does: It ensures that when the partners discuss AI and technology operations, everyone is looking at the same options, the same trade-offs, and the same open questions. Alignment on the framework precedes alignment on the answer.
This document is confidential and intended solely for the use of the intended recipient(s). It does not constitute an offer, solicitation, or recommendation to buy or sell any securities or investment products. The information contained herein is provided for informational purposes only, may not be complete or accurate, and is subject to change without notice. Past performance is not indicative of future results. Unauthorized reproduction or distribution is prohibited.