
Internal Architecture

Sessions

Sessions refer to the management of interactions within secure enclaves, ensuring continuous and secure communication while prioritising scalability and reliability. These sessions encompass several elements, described in more detail below: TLS with attestation for establishing trust and secure communication, persistent sessions for handling dynamic data interactions, and advanced load balancing and scaling mechanisms for optimising performance and security.

TLS with Attestation

In enclave computing, ensuring absolute certainty about the contents of an enclave before transmitting any data is paramount. The attestation process plays a pivotal role in achieving this.

The attestation process is facilitated through the OBLV Client. Users connecting to the enclave via the OBLV Client can verify that the enclave's configuration meets the expected requirements. This verification is essential for preventing man-in-the-middle attacks, including those potentially originating from infrastructure owners.
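To make the check concrete, the following is a minimal sketch of what verifying an attestation document against expected measurements might look like. The dictionary layout, the `pcrs` field name, and the PCR assignments are illustrative placeholders rather than the OBLV Client's actual data format; in practice, the OBLV Client performs this verification for you.

```python
# Illustrative only: compare attested measurements against expected values.
# The field names and PCR assignments below are placeholders, not the
# OBLV Client's actual attestation format.

EXPECTED_PCRS = {
    0: "<expected digest of the enclave image>",
    1: "<expected digest of the kernel and bootstrap>",
    2: "<expected digest of the application>",
}

def attestation_matches(attestation_doc: dict) -> bool:
    """Accept the enclave only if every expected measurement is present and equal."""
    pcrs = attestation_doc.get("pcrs", {})
    return all(pcrs.get(index) == digest for index, digest in EXPECTED_PCRS.items())
```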

Establishing End-to-End TLS Connections

End-to-end TLS connections are established when connecting to an enclave, ensuring that even DevOps engineers deploying the enclave cannot access the traffic. During this process, a certificate is generated inside the enclave, and its public key is embedded in the attestation document.
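The practical effect is that a client can bind the TLS session to the attested enclave by checking that the public key in the certificate received during the handshake is the same key the attestation document vouches for. The sketch below assumes the attested key is available as a DER-encoded SubjectPublicKeyInfo blob and uses the third-party `cryptography` package; it is a conceptual illustration, not the OBLV Client's implementation (the client performs this check automatically).

```python
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def cert_key_matches_attestation(cert_pem: bytes, attested_spki_der: bytes) -> bool:
    """True if the handshake certificate carries exactly the public key that the
    attestation document embeds, binding the TLS session to the attested enclave."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    spki_der = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo
    )
    return spki_der == attested_spki_der
```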

There are multiple approaches to managing TLS certificates in the enclave, including certificate chaining, self-signed certificates, and ACM-provided certificates. Please refer to the guides for practical step-by-step instructions on each approach.

Linking Enclave Ownership and Configuration

Within the enclave, the TLS certificate generated alongside the embedded attestation document links two critical aspects: ownership and configuration. Ownership is established through the parent certificate provided by a trusted certificate authority, while configuration is verified against the attestation document and secondary manifest.

This linkage ensures that the TLS certificate and attestation documents inside the enclave serve as comprehensive representations of the enclave's identity and operational state. Thus, the end-to-end TLS connection is not only encrypted but also intimately tied to the verified and trusted state of the enclave, providing a higher level of security assurance.
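As a rough illustration of the ownership half of this linkage, the snippet below uses Python's standard ssl module: the connection only succeeds if the certificate presented by the enclave chains to the trusted parent CA supplied by the caller. The hostname, port, and CA bundle path are placeholders; the configuration half is the attestation check sketched earlier, and in practice the OBLV Client carries out both.

```python
import socket
import ssl

def connect_if_owned(host: str, port: int, parent_ca_pem: str) -> dict:
    """Open a TLS connection that succeeds only when the enclave's certificate
    chains to the trusted parent CA, then return the peer certificate details."""
    context = ssl.create_default_context(cafile=parent_ca_pem)
    with socket.create_connection((host, port)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.getpeercert()
```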

Persistent Sessions

In enclave computing, operations are traditionally perceived as atomic, whereas real-world scenarios often require a more flexible approach. Data interactions within enclaves may involve pulling in data, responding to queries, and interacting with external services such as Open Banking endpoints.

Persistent sessions within enclaves enable a series of related communications or transactions to be treated as part of a continuous session, offering flexibility without compromising security.

OBLV Deploy allows for persistent communication with enclaves while ensuring appropriate routing without disrupting end-to-end TLS sessions.
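One common way to realise this kind of affinity, sketched below, is to hash a stable session identifier so that every connection belonging to the same session is routed to the same enclave instance. The instance list and the choice of identifier are hypothetical, and this is a generic technique rather than OBLV Deploy's actual routing logic.

```python
import hashlib

# Hypothetical pool of enclave instances; in practice these would be
# discovered dynamically rather than hard-coded.
ENCLAVE_INSTANCES = ["enclave-a:443", "enclave-b:443", "enclave-c:443"]

def pick_instance(session_id: str) -> str:
    """Map a session identifier to a stable instance so that a multi-step
    exchange (pulling data, answering queries, calling external services)
    keeps hitting the same enclave."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return ENCLAVE_INSTANCES[int.from_bytes(digest[:8], "big") % len(ENCLAVE_INSTANCES)]
```

A production router would typically use consistent hashing instead of a simple modulo, so that adding or removing instances only relocates a small fraction of sessions.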

Load Balancing with End-to-End TLS

OBLV Deploy leverages load balancing to ensure efficient and scalable deployments of enclaves. Mechanisms such as sticky sessions keep related requests on the same enclave instance, while load balancing distributes traffic and resources optimally across the available instances.

Traditional load balancers are limited in managing enclave traffic because they lack the advanced routing logic required at layer 4 (TCP), where the encrypted payload cannot be inspected. OBLV Deploy implements a more tailored approach that adapts to dynamic requirements and ensures that load balancing does not break end-to-end TLS.
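To see why end-to-end TLS can survive load balancing at this layer, consider the toy TCP passthrough below: the balancer picks a backend and then copies encrypted bytes verbatim in both directions, so the TLS session still terminates inside the enclave. The backend addresses, port, and round-robin selection are placeholders; this is a conceptual sketch, not OBLV Deploy's implementation.

```python
import itertools
import socket
import threading

# Hypothetical enclave-facing backends; a real deployment would discover these.
BACKENDS = itertools.cycle([("10.0.0.11", 443), ("10.0.0.12", 443)])

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the peer closes; nothing is decrypted here."""
    try:
        while data := src.recv(65536):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client: socket.socket) -> None:
    """Forward one client connection to the next backend, unmodified."""
    backend = socket.create_connection(next(BACKENDS))
    threading.Thread(target=pump, args=(backend, client), daemon=True).start()
    pump(client, backend)

def serve(port: int = 8443) -> None:
    listener = socket.create_server(("0.0.0.0", port))
    while True:
        client, _ = listener.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()
```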

Scaling

Scalability is a crucial aspect of cloud deployments, allowing systems to adapt to changing workloads and resource demands.

The scaling process in OBLV Deploy uses a telemetry service within the enclave to collect metrics. Based on these metrics, the Kubernetes controller determines whether to scale up or down.

This ensures that scaling operations are driven by real-time data insights, enhancing the efficiency and responsiveness of the system.

Auto-scaling decisions are typically based on specific metrics such as memory usage, CPU load, or number of HTTP requests. Users can configure scaling based on these metrics to ensure optimal resource allocation.
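As a rough illustration, the rule below follows the shape of the standard Kubernetes Horizontal Pod Autoscaler calculation, which scales the replica count by the ratio of the observed metric to its target. Whether OBLV Deploy applies exactly this formula is not specified here, and the numbers are illustrative.

```python
import math

def desired_replicas(current_replicas: int, observed: float, target: float) -> int:
    """Kubernetes-HPA-style rule: grow or shrink the replica count in
    proportion to how far the observed metric sits from its target."""
    return max(1, math.ceil(current_replicas * observed / target))

# Three enclaves averaging 80% CPU against a 50% target suggests five replicas.
print(desired_replicas(3, observed=80.0, target=50.0))  # -> 5
```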

Furthermore, users can make use of custom metrics, proactive scaling, and standby servers:

  • Custom Metrics: In addition to standard metrics, custom metrics play a vital role in auto-scaling. For instance, capacity-based scaling allocates resources based on predefined limits, such as the number of sessions per server; a minimal sketch of this appears after this list.

  • Proactive Scaling: Systems can be configured to scale before reaching predefined limits, ensuring that additional resources are already available when needed and minimising response time.

  • Standby Servers: Users have the option to maintain standby servers to handle sudden spikes in demand. This configuration ensures that resources are readily available during peak usage periods, enhancing system reliability.
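The sketch below combines these three ideas: capacity-based sizing from a custom metric (sessions per server), a proactive headroom factor so scaling happens before the limit is reached, and a fixed number of standby instances for sudden spikes. All values are illustrative assumptions rather than OBLV Deploy defaults.

```python
import math

def replicas_for_sessions(active_sessions: int,
                          sessions_per_server: int = 100,  # hypothetical capacity limit
                          headroom: float = 0.8,           # act at 80% of capacity
                          standby: int = 1) -> int:
    """Size the fleet from a custom metric, proactively and with spare capacity."""
    effective_capacity = sessions_per_server * headroom
    needed = math.ceil(active_sessions / effective_capacity)
    return max(1, needed) + standby

# 250 active sessions -> ceil(250 / 80) = 4 working instances, plus 1 standby.
print(replicas_for_sessions(250))  # -> 5
```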

What's Next?

Refer to the Attested Connection Process page to learn about the flow that underpins the establishment of secure connections between your local environment and an enclave.