Plugins

The plugin system allows extending Fundament with installable plugins that integrate into the platform’s console UI, RBAC, and lifecycle management.

┌─────────────────────────────────────────────────────────────────────────┐
│ Fundament Cluster                                                       │
│                                                                         │
│  fundament namespace                                                    │
│ ┌────────────────────────────────────────────────────────────────────┐  │
│ │  PluginInstallation CRs          Plugin Controller                 │  │
│ │  ┌──────────────────────┐        ┌──────────────────────────────┐  │  │
│ │  │ cert-manager-test    │───────►│ Watches CRs                  │  │  │
│ │  │ another-plugin       │        │ Creates plugin namespaces    │  │  │
│ │  └──────────────────────┘        │ Manages RBAC + deployments   │  │  │
│ │                                  │ Polls plugin status          │  │  │
│ │                                  └──────┬───────────────────────┘  │  │
│ └─────────────────────────────────────────┼────────────────────────┘  │
│                                           │ creates                   │
│                     ┌─────────────────────┼─────────────────────┐     │
│                     ▼                     ▼                     ▼     │
│  plugin-cert-manager-test  plugin-another-plugin       plugin-...     │
│  ┌───────────────────────┐ ┌───────────────────────┐                  │
│  │ SA + RoleBinding      │ │ SA + RoleBinding      │                  │
│  │ Deployment + Service  │ │ Deployment + Service  │                  │
│  │ (+ ClusterRoleBinding │ │                       │                  │
│  │   if requested)       │ │                       │                  │
│  └───────────────────────┘ └───────────────────────┘                  │
└─────────────────────────────────────────────────────────────────────────┘
Component                   Purpose
Plugin SDK                  Go framework that plugins implement. Handles HTTP, health probes, metadata API, logging, and lifecycle.
Plugin Controller           Kubernetes controller that watches PluginInstallation CRs and manages plugin namespaces, RBAC, and deployments.
Plugin (e.g. cert-manager)  A container image that uses the SDK. Implements business logic (install software, manage CRDs, serve console UI).

The SDK provides all the boilerplate so plugin authors only implement business logic.

type Plugin interface {
    Definition() PluginDefinition               // Static metadata (from definition.yaml)
    Start(ctx context.Context, host Host) error // Main logic, block until ctx cancelled
    Shutdown(ctx context.Context) error         // Graceful cleanup
}

type Reconciler interface { // Periodic health checks (default: every 5m)
    Reconcile(ctx context.Context, host Host) error
}

type Installer interface { // Structured install/uninstall/upgrade
    Install(ctx context.Context, host Host) error
    Uninstall(ctx context.Context, host Host) error
    Upgrade(ctx context.Context, host Host) error
}

type ConsoleProvider interface { // Serve UI assets at /console/
    ConsoleAssets() http.FileSystem
}

When a plugin binary calls pluginsdk.Run(plugin), the SDK:

pluginsdk.Run(plugin)
├─ Parse environment config (cluster ID, org ID, log level, etc.)
├─ Initialize structured JSON logger
├─ Initialize OpenTelemetry (tracing + metrics)
├─ Create Host (provides logger, telemetry, status reporting)
├─ Start HTTP server on :8080
│ ├─ GET /healthz ──────── Liveness probe (always 200)
│ ├─ GET /readyz ───────── Readiness probe (200 after ReportReady())
│ ├─ ConnectRPC ─────────── PluginMetadataService (status + definition)
│ └─ GET /console/ ──────── Static UI assets (if ConsoleProvider)
├─ Call plugin.Start(ctx, host)
│ └─ Plugin does its work, calls host.ReportReady() when ready
├─ Start reconciliation loop (if Reconciler interface implemented)
├─ Wait for SIGTERM/SIGINT
└─ Call plugin.Shutdown(ctx) with deadline
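The "if Reconciler interface implemented" step works because Reconciler and the other optional interfaces can be probed with a type assertion. A self-contained sketch of that pattern (the interfaces here are local stand-ins, not the SDK's actual definitions):

```go
package main

import (
	"context"
	"fmt"
)

// Local stand-ins for the SDK interfaces; the real ones live in the plugin SDK.
type Plugin interface {
	Start(ctx context.Context) error
}

type Reconciler interface {
	Reconcile(ctx context.Context) error
}

// supportsReconcile shows how Run can detect the optional interface:
// the type assertion succeeds only if the concrete plugin implements Reconciler.
func supportsReconcile(p Plugin) bool {
	_, ok := p.(Reconciler)
	return ok
}

type basicPlugin struct{}

func (basicPlugin) Start(ctx context.Context) error { return nil }

type reconcilingPlugin struct{ basicPlugin }

func (reconcilingPlugin) Reconcile(ctx context.Context) error { return nil }

func main() {
	fmt.Println(supportsReconcile(basicPlugin{}))       // false
	fmt.Println(supportsReconcile(reconcilingPlugin{})) // true
}
```

The same probing works for Installer and ConsoleProvider, which is why plugins only implement the interfaces they need.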

The Host is passed to Start() and Reconcile():

type Host interface {
    Logger() *slog.Logger             // Structured logger
    Telemetry() TelemetryService      // Tracing + metrics
    ReportStatus(status PluginStatus) // Update status (visible to controller)
    ReportReady()                     // Flip readiness probe to healthy
}
Installing ──► Running ◄──► Degraded
    │             │
    ▼             ▼
  Failed     Uninstalling
Phase         Meaning
installing    Plugin is setting up (e.g. running Helm install)
running       Plugin is healthy and operational
degraded      Plugin is running but something is wrong (transient error)
failed        Unrecoverable error (permanent error)
uninstalling  Plugin is cleaning up before shutdown
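The lifecycle above can be modelled as a small transition table. This is an illustrative sketch of the diagram, not code from the SDK:

```go
package main

import "fmt"

// Phase values mirror the table above.
type Phase string

const (
	PhaseInstalling   Phase = "installing"
	PhaseRunning      Phase = "running"
	PhaseDegraded     Phase = "degraded"
	PhaseFailed       Phase = "failed"
	PhaseUninstalling Phase = "uninstalling"
)

// transitions encodes the lifecycle diagram: installing either succeeds or
// fails, running and degraded flip back and forth, and a running plugin can
// move to uninstalling when its CR is deleted.
var transitions = map[Phase][]Phase{
	PhaseInstalling: {PhaseRunning, PhaseFailed},
	PhaseRunning:    {PhaseDegraded, PhaseUninstalling},
	PhaseDegraded:   {PhaseRunning},
}

func canTransition(from, to Phase) bool {
	for _, next := range transitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition(PhaseInstalling, PhaseRunning)) // true
	fmt.Println(canTransition(PhaseFailed, PhaseRunning))     // false: failed is terminal
}
```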

The SDK provides error classification to drive retry behavior:

// Transient: retryable, plugin stays "degraded"
return pluginerrors.NewTransient(fmt.Errorf("CRDs not yet ready: %w", err))
// Permanent: non-retryable, plugin goes to "failed"
return pluginerrors.NewPermanent(fmt.Errorf("invalid configuration: %w", err))
Helper                     Purpose
helpers/helm               Wrapper around helm upgrade --install and helm uninstall
helpers/crd                Verify that required CRDs exist in the cluster
helpers/controllerruntime  Scaffold a controller-runtime manager
console                    Convert embedded FS to http.FileSystem for console assets
auth                       JWT validation middleware for Connect RPC interceptors
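As an illustration of what helpers/crd does, here is a self-contained sketch of a VerifyAll-style check, with the cluster's discovery client replaced by an injected set of present CRD names (the signature is assumed, not the helper's real API):

```go
package main

import (
	"fmt"
	"strings"
)

// verifyAll errors if any required CRD is missing from the cluster. The real
// helper queries the Kubernetes API; here "present" stands in for discovery.
func verifyAll(present map[string]bool, required []string) error {
	var missing []string
	for _, name := range required {
		if !present[name] {
			missing = append(missing, name)
		}
	}
	if len(missing) > 0 {
		return fmt.Errorf("missing CRDs: %s", strings.Join(missing, ", "))
	}
	return nil
}

func main() {
	cluster := map[string]bool{"certificates.cert-manager.io": true}
	err := verifyAll(cluster, []string{"certificates.cert-manager.io", "issuers.cert-manager.io"})
	fmt.Println(err) // missing CRDs: issuers.cert-manager.io
}
```

Returning an error (wrapped as transient) from Reconcile is what drives the "degraded" phase when CRDs disappear.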

The controller runs in the fundament namespace and watches PluginInstallation CRs.

apiVersion: plugins.fundament.io/v1
kind: PluginInstallation
metadata:
  name: cert-manager-test
  namespace: fundament
spec:
  image: ghcr.io/fundament-oss/fundament/cert-manager-plugin:v1.0.0
  pluginName: cert-manager-test
  version: v1.17.2
  clusterRoles:        # Optional: bind SA to these ClusterRoles
    - cluster-admin
  config:              # Optional: extra env vars for the container
    LOG_LEVEL: debug

For each PluginInstallation, the controller creates:

plugin-{pluginName} namespace
├─ ServiceAccount/plugin-{pluginName}
├─ RoleBinding ──► ClusterRole/admin (always, namespace-scoped)
├─ Deployment (runs the plugin image)
└─ Service (:8080)
ClusterRoleBinding (only if spec.clusterRoles is set)
└─ Binds SA to requested ClusterRoles at cluster scope
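The naming convention is mechanical, so it can be captured in a couple of hypothetical helpers (these functions are illustrative; the controller's real code may differ):

```go
package main

import "fmt"

// pluginNamespace returns the plugin-{pluginName} namespace the controller
// creates for each PluginInstallation.
func pluginNamespace(pluginName string) string {
	return "plugin-" + pluginName
}

// metadataURL builds the in-cluster address the controller polls: the Service
// shares the namespace's name, so both components of the DNS name match.
func metadataURL(pluginName string) string {
	ns := pluginNamespace(pluginName)
	return fmt.Sprintf("http://%s.%s.svc.cluster.local:8080", ns, ns)
}

func main() {
	fmt.Println(metadataURL("cert-manager-test"))
	// http://plugin-cert-manager-test.plugin-cert-manager-test.svc.cluster.local:8080
}
```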
┌─────────────────────────────────────┐
│ DEFAULT (always)                    │
│                                     │
│ RoleBinding in plugin namespace     │
│   → ClusterRole/admin               │
│                                     │
│ Plugin can manage all resources     │
│ within its own namespace.           │
└─────────────────────────────────────┘
                  +
┌─────────────────────────────────────┐
│ OPTIONAL (spec.clusterRoles)        │
│                                     │
│ ClusterRoleBinding                  │
│   → ClusterRole/{requested}         │
│                                     │
│ For plugins that need cluster-wide  │
│ access (CRDs, webhooks, resources   │
│ in other namespaces).               │
└─────────────────────────────────────┘
PluginInstallation CR event
  Add finalizer ──► Create Namespace ──► Create SA
                      ├──► Create RoleBinding (→ admin)
                      ├──► Create ClusterRoleBindings (if spec.clusterRoles)
                      ├──► Create Deployment
                      └──► Create Service
  Poll plugin metadata API
    GET http://plugin-{name}.plugin-{name}.svc.cluster.local:8080
    └─ PluginMetadataService.GetStatus()
  Update CR .status (phase, ready, message, pluginVersion)
  RequeueAfter (poll interval)

CR deleted ──► Finalizer triggers:
                 ├─ Delete ClusterRoleBindings (if any)
                 ├─ Delete Namespace (cascades to all resources)
                 └─ Remove finalizer → CR garbage collected
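The deletion path relies on the standard Kubernetes finalizer pattern: cleanup must complete before the finalizer is removed, otherwise the CR would be garbage-collected with resources still in place. A self-contained sketch of that bookkeeping (the finalizer name and installation struct are illustrative; the real controller works on the CR via the Kubernetes API):

```go
package main

import (
	"fmt"
	"slices"
)

const finalizer = "plugins.fundament.io/cleanup" // assumed name, for illustration

type installation struct {
	Finalizers []string
	Deleting   bool // true once the CR has a deletion timestamp
}

// reconcile mirrors the flow above: add the finalizer on creation; on
// deletion, run cleanup first and only then remove the finalizer so
// Kubernetes can garbage-collect the CR.
func reconcile(in *installation, cleanup func()) {
	if !in.Deleting {
		if !slices.Contains(in.Finalizers, finalizer) {
			in.Finalizers = append(in.Finalizers, finalizer)
		}
		return
	}
	cleanup() // delete ClusterRoleBindings, then the plugin namespace
	in.Finalizers = slices.DeleteFunc(in.Finalizers, func(f string) bool { return f == finalizer })
}

func main() {
	in := &installation{}
	reconcile(in, func() {}) // creation: finalizer added, cleanup not called
	fmt.Println(in.Finalizers)

	in.Deleting = true
	reconcile(in, func() { fmt.Println("cleaning up") })
	fmt.Println(in.Finalizers) // empty: CR can now be garbage collected
}
```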
Each plugin ships a definition.yaml describing its metadata, permissions, console menu entries, and UI hints:

apiVersion: fundament.io/v1
kind: PluginDefinition
spec:
  metadata:
    name: my-plugin
    displayName: My Plugin
    version: v1.0.0
    description: Does something useful
    author: My Team
    license: Apache-2.0
    icon: puzzle
    tags:
      - example
  permissions:
    capabilities:
      - internet_access
    rbac:
      - apiGroups: ["my-api.io"]
        resources: ["myresources"]
        verbs: ["get", "list", "watch"]
  menu:
    project:
      - crd: myresources.my-api.io
        list: true
        detail: true
        create: true
        icon: box
  uiHints:
    myresources.my-api.io:
      statusMapping:
        jsonPath: ".status.phase"
        values:
          "Ready":
            badge: success
            label: Ready
          "Failed":
            badge: danger
            label: Failed
A minimal plugin implementation looks like this:

package main

import (
    "context"
    "log"

    pluginsdk "github.com/fundament-oss/fundament/plugin-sdk"
)

type MyPlugin struct {
    def pluginsdk.PluginDefinition
}

func (p *MyPlugin) Definition() pluginsdk.PluginDefinition {
    return p.def
}

func (p *MyPlugin) Start(ctx context.Context, host pluginsdk.Host) error {
    host.ReportStatus(pluginsdk.PluginStatus{
        Phase:   pluginsdk.PhaseInstalling,
        Message: "setting up",
    })

    // Do setup work...

    host.ReportReady()
    host.ReportStatus(pluginsdk.PluginStatus{
        Phase:   pluginsdk.PhaseRunning,
        Message: "operational",
    })

    <-ctx.Done()
    return nil
}

func (p *MyPlugin) Shutdown(_ context.Context) error {
    return nil
}

func main() {
    def, err := pluginsdk.LoadDefinition("definition.yaml")
    if err != nil {
        log.Fatal(err)
    }
    pluginsdk.Run(&MyPlugin{def: def})
}
Build and package the plugin with a multi-stage Dockerfile:

FROM golang:1.26-alpine AS builder
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/my-plugin ./plugins/my-plugin

FROM alpine:3.21
# Add any CLI tools your plugin needs (e.g. helm)
COPY --from=builder /bin/my-plugin /my-plugin
COPY plugins/my-plugin/definition.yaml /app/definition.yaml
WORKDIR /app
ENTRYPOINT ["/my-plugin"]
Then install it by applying a PluginInstallation CR:

apiVersion: plugins.fundament.io/v1
kind: PluginInstallation
metadata:
  name: my-plugin
  namespace: fundament
spec:
  image: registry.example.com/my-plugin:v1.0.0
  pluginName: my-plugin
  version: v1.0.0
  # Only if your plugin needs cluster-wide access:
  # clusterRoles:
  #   - cluster-admin

The cert-manager plugin is a reference implementation that installs and manages cert-manager.

  1. Start: Runs helm upgrade --install cert-manager from the Jetstack Helm repo
  2. Verify: Checks that all cert-manager CRDs exist (certificates, issuers, clusterissuers, certificaterequests)
  3. Reconcile: Periodically re-checks CRD availability, reports degraded if missing
  4. Console: Serves a placeholder console UI at /console/
plugins/cert-manager/
├── main.go          # Entry point: load definition, call pluginsdk.Run()
├── plugin.go        # Plugin implementation (Start, Install, Reconcile, etc.)
├── console.go       # Embeds console/ directory as http.FileSystem
├── definition.yaml  # Plugin metadata, permissions, menu entries, UI hints
├── console/
│   └── placeholder.html
├── plugin_test.go   # Unit tests
└── Dockerfile       # Multi-stage build (Go build + alpine with helm)

cert-manager installs cluster-scoped resources that require broad permissions:

  • CRDs (certificates.cert-manager.io, etc.)
  • ClusterRoles and ClusterRoleBindings
  • ValidatingWebhookConfigurations / MutatingWebhookConfigurations
  • Resources across multiple namespaces

The default namespace-admin RoleBinding only covers the plugin’s own namespace. The clusterRoles: [cluster-admin] field in the PluginInstallation grants the additional access.

plugins/cert-manager/install.yaml
apiVersion: plugins.fundament.io/v1
kind: PluginInstallation
metadata:
  name: cert-manager-test
  namespace: fundament
spec:
  image: localhost:5111/cert-manager-plugin:latest
  pluginName: cert-manager-test
  version: v1.17.2
  clusterRoles:
    - cluster-admin
Container starts
  pluginsdk.Run()
    └─ HTTP server on :8080
  Start()
    ├─ ReportStatus("installing", "installing cert-manager")
    ├─ helm upgrade --install cert-manager jetstack/cert-manager
    ├─ Create k8s client
    ├─ crd.VerifyAll([certificates, certificaterequests, issuers, clusterissuers])
    ├─ ReportReady()
    ├─ ReportStatus("running", "cert-manager is running")
    └─ Block until SIGTERM
  Reconcile() (every 5 minutes)
    ├─ crd.VerifyAll(...)
    ├─ If OK: ReportStatus("running")
    └─ If not: ReportStatus("degraded")

Every plugin exposes a ConnectRPC service that the controller and console consume:

service PluginMetadataService {
  rpc GetStatus(GetStatusRequest) returns (GetStatusResponse);
  rpc GetDefinition(GetDefinitionRequest) returns (GetDefinitionResponse);
}
Consumer           Method         Purpose
Plugin Controller  GetStatus      Poll phase, message, version → write to CR .status
Console Frontend   GetDefinition  Fetch menu entries, UI hints, CRDs → render plugin UI

A self-contained development environment lives in plugins/sandbox/. It creates an isolated K3D cluster with only the plugin controller — no database, auth services, or other Fundament components needed. The sandbox cluster (fundament-plugin) uses a separate registry on port 5112, so it can coexist with the main Fundament cluster without conflicts.

cd plugins
just cluster-create # Create K3D cluster + registry (~10s)
just dev # Build + deploy plugin-controller with file watching
# In another terminal:
cd plugins
just plugin-install cert-manager # Build plugin, push to registry, apply CR
just plugin-status # Check PluginInstallation status
just logs # Watch controller logs
# Verify cert-manager actually works:
just cert-manager test # Creates a self-signed ClusterIssuer + Certificate
just cert-manager test-cleanup # Remove test resources
# Cleanup:
just plugin-uninstall cert-manager
just cluster-delete
Command                         Description
just cluster-create             Create a K3D cluster for plugin development
just cluster-start              Start the cluster (creates if it doesn’t exist)
just cluster-stop               Stop the cluster without deleting it
just cluster-delete             Delete the cluster and registry
just dev                        Deploy plugin-controller with file watching (auto-rebuild)
just deploy                     Deploy plugin-controller (one-time)
just undeploy                   Remove the deployment
just plugin-install <plugin>    Build plugin image, push to registry, apply CR
just plugin-uninstall <plugin>  Delete PluginInstallation CR
just plugin-logs <plugin>       Stream a specific plugin’s logs
just plugin-status              Show all PluginInstallation CRs
just logs                       Stream plugin-controller logs
just cert-manager test          Verify cert-manager with a self-signed certificate
just cert-manager test-cleanup  Remove cert-manager test resources