Security Hardening
TLS encryption, RBAC, certificate chain management, memory protection, and network segmentation — all implemented in the MacLab cluster.
Security Overview
| Area | Mechanism | Notes |
|---|---|---|
| Encryption in transit | TLS 1.2+ | Transport + HTTP layers |
| Authentication | Native realm | Built-in user store |
| Authorization | RBAC | Role-based access control |
TLS Configuration
All communication in the cluster is encrypted. The transport layer (node-to-node) and HTTP layer (client-to-node) both use TLS certificates signed by a shared Certificate Authority.
# Transport layer (inter-node communication)
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/es01/es01.key
xpack.security.transport.ssl.certificate: certs/es01/es01.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca/ca.crt
xpack.security.transport.ssl.verification_mode: certificate
# HTTP layer (client communication)
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/es01/es01.key
xpack.security.http.ssl.certificate: certs/es01/es01.crt
xpack.security.http.ssl.certificate_authorities: certs/ca/ca.crt
Verification Modes
certificate — Validates that the certificate is signed by the trusted CA but does not check the hostname. Used for transport layer in Docker environments where container hostnames may change.
full — Validates both the certificate chain and the hostname (SAN must match). Recommended for production HTTP endpoints. Requires proper DNS or SAN configuration.
none — No verification. Never use in production. Only for debugging during initial setup.
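The difference between `certificate` and `full` can be reproduced locally with openssl. The sketch below generates a throwaway CA and node certificate (illustration only, not the cluster's real certs): `openssl verify` alone corresponds to `certificate` mode (chain check only), while `-verify_hostname` adds the hostname check that `full` mode performs.

```shell
# Throwaway CA (self-signed root)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=maclab-ca" \
  -keyout ca.key -out ca.crt -days 1

# Node key + CSR, then sign with the CA, adding es01 as a SAN
openssl req -newkey rsa:2048 -nodes -subj "/CN=es01" \
  -keyout es01.key -out es01.csr
openssl x509 -req -in es01.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out es01.crt -days 1 -extfile <(echo "subjectAltName=DNS:es01")

# "certificate" mode: chain check only
openssl verify -CAfile ca.crt es01.crt
# → es01.crt: OK

# "full" mode: chain + hostname; passes for es01, fails for anything else
openssl verify -CAfile ca.crt -verify_hostname es01 es01.crt
openssl verify -CAfile ca.crt -verify_hostname wrong-host es01.crt
# the last command reports a hostname mismatch
```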
Certificate Chain
certs/
├── ca/
│   ├── ca.crt (self-signed root certificate)
│   └── ca.key (CA private key — never shared outside es-setup)
├── es01/
│   ├── es01.crt (node certificate, signed by CA)
│   └── es01.key (node private key)
├── es02/
│   ├── es02.crt
│   └── es02.key
└── es03/
    ├── es03.crt
    └── es03.key
Trust model:
- Each node trusts any certificate signed by ca.crt
- Kibana also trusts ca.crt to verify es01's HTTP certificate
- The CA key is only available in the es-setup init container
RBAC: Role-Based Access Control
| User | Role | Privileges |
|---|---|---|
| elastic | superuser | Full cluster administration, all indices, all operations |
| kibana_system | kibana_system | Kibana internal operations, .kibana index management |
In production, you would create dedicated users with minimal privileges: read-only users for dashboards, ingest-only users for log shippers, and monitoring-only users for alerting. The principle of least privilege is critical — a compromised Beats agent should not have cluster admin access.
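Once a restricted role and user exist (as in the examples that follow), the has-privileges API can confirm what an account can actually do. A hedged sketch, assuming the cluster is reachable at localhost:9200 and the `dashboard_user` / `support_reader` names from this section:

```shell
# Ask, as dashboard_user, which privileges actually hold on support-tickets
curl -X POST "https://localhost:9200/_security/user/_has_privileges" \
  --cacert ca.crt -u dashboard_user:secure_password_here \
  -H 'Content-Type: application/json' -d '{
    "index": [{
      "names": ["support-tickets"],
      "privileges": ["read", "write"]
    }]
  }'
# Expect "read": true and "write": false, since the role grants read only
```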
# Create a role with read-only access to support-tickets
curl -X POST "https://localhost:9200/_security/role/support_reader" \
--cacert ca.crt -u elastic:$PASSWORD \
-H 'Content-Type: application/json' -d '{
"indices": [{
"names": ["support-tickets"],
"privileges": ["read", "view_index_metadata"]
}]
}'
# Create a user with that role
curl -X POST "https://localhost:9200/_security/user/dashboard_user" \
--cacert ca.crt -u elastic:$PASSWORD \
-H 'Content-Type: application/json' -d '{
"password": "secure_password_here",
"roles": ["support_reader"],
"full_name": "Dashboard Viewer"
}'
Memory Protection
# elasticsearch.yml
bootstrap.memory_lock: true
# docker-compose.yml
ulimits:
memlock:
soft: -1
hard: -1
bootstrap.memory_lock calls mlockall() on startup, preventing the JVM heap from being swapped to disk. When the OS swaps Elasticsearch memory, GC pauses explode from milliseconds to seconds, causing nodes to appear unresponsive and potentially triggering false master elections. This is one of the first things to check when a customer reports intermittent cluster instability.
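Whether the lock actually took effect can be verified per node through the nodes info API. A sketch, assuming the HTTP endpoint and credentials used elsewhere in this section:

```shell
# Report mlockall status for every node; any "false" means locking failed
curl -s "https://localhost:9200/_nodes?filter_path=**.mlockall" \
  --cacert ca.crt -u elastic:$PASSWORD
# A healthy node reports "mlockall": true
```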
Snapshot Repository
# Register a shared-filesystem snapshot repository
curl -X PUT "https://localhost:9200/_snapshot/maclab-backup" \
--cacert ca.crt -u elastic:$PASSWORD \
-H 'Content-Type: application/json' -d '{
"type": "fs",
"settings": {
"location": "/usr/share/elasticsearch/backups"
}
}'
# Create a snapshot
curl -X PUT "https://localhost:9200/_snapshot/maclab-backup/snapshot_1?wait_for_completion=true" \
  --cacert ca.crt -u elastic:$PASSWORD
Snapshots are incremental — only segments that changed since the last snapshot are copied. The shared filesystem path must be accessible from all nodes (mounted via Docker volume). For cloud deployments, S3, GCS, or Azure Blob repositories are recommended for durability and cross-region backup.
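Restoring follows the same pattern. A sketch using the repository registered above (the target index must be closed or deleted before restoring over it):

```shell
# List all snapshots in the repository
curl -s "https://localhost:9200/_snapshot/maclab-backup/_all" \
  --cacert ca.crt -u elastic:$PASSWORD

# Restore a single index from snapshot_1
curl -X POST "https://localhost:9200/_snapshot/maclab-backup/snapshot_1/_restore" \
  --cacert ca.crt -u elastic:$PASSWORD \
  -H 'Content-Type: application/json' -d '{
    "indices": "support-tickets"
  }'
```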
Network Segmentation
maclab network
Application-facing network. Kibana, Traefik, and external clients connect through this network. HTTP traffic (port 9200) is exposed here.
data network
Internal cluster network. Transport traffic (port 9300) for shard replication, master election, and cluster state propagation flows through this network. Not accessible from application containers.
Network segmentation limits the blast radius of a compromised container. If an application container is breached, the attacker cannot directly reach the Elasticsearch transport layer — they would need to exploit the HTTP API, which is protected by authentication and TLS.
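The two networks map onto Docker Compose roughly as sketched below. Service names follow this section; the driver settings and `internal` flag are assumptions about the deployment, not confirmed configuration:

```yaml
networks:
  maclab:            # application-facing: Kibana, Traefik, external clients
    driver: bridge
  data:              # transport traffic only
    driver: bridge
    internal: true   # containers on this network get no external access

services:
  es01:
    networks: [maclab, data]   # HTTP (9200) on maclab, transport (9300) on data
  kibana:
    networks: [maclab]         # never touches the transport network
```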