Using AWS Cloudfront as HTTPS proxy to HTTP load balancer

Mark Boyd
3 min read · Nov 3, 2021

Background

For one of my current projects, we have an AWS application load balancer which:

  • Accepts incoming traffic only on port 80 (HTTP)
  • Is deployed in subnets and security groups that block public traffic on port 80
  • Forwards traffic on request paths to individual microservices (e.g. requests on /search go to the search microservice)
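To make the path-based routing concrete, here is a hypothetical sketch of the kind of listener rule such a load balancer uses, written as a boto3-style parameters dictionary. The ARNs and the `/search` target group are placeholders I made up for illustration, not details from our actual deployment.

```python
# Hypothetical ALB listener rule: forward requests on /search to the
# search microservice's target group. ARNs are placeholders.
listener_rule = {
    "ListenerArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/example/abc123/def456",
    "Priority": 10,
    "Conditions": [
        {
            # Match on the request path
            "Field": "path-pattern",
            "PathPatternConfig": {"Values": ["/search*"]},
        }
    ],
    "Actions": [
        {
            # Forward matching requests to the search service
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/search/789xyz",
        }
    ],
}
```

A dictionary shaped like this could be passed to `elbv2.create_rule(**listener_rule)` in boto3.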

Problem

We wanted to submit requests to the load balancer from the public internet and preferably over HTTPS, which was not possible given the default load balancer deployment configuration.

Solution: Use Cloudfront to proxy requests to load balancer

After some initial research, one option that emerged was running a Cloudfront distribution over HTTPS that proxies requests to the load balancer.

But my first question was: is it possible to connect a Cloudfront distribution running on HTTPS to a load balancer running on HTTP (port 80)?

The answer is yes: you can have a Cloudfront distribution serving HTTPS connect to a load balancer over HTTP.

This is possible because a Cloudfront distribution distinguishes between the viewer protocol policy and the origin protocol policy. The viewer policy controls how the Cloudfront URL itself is accessed over the internet; the supported options are allowing both HTTP and HTTPS, redirecting HTTP to HTTPS, or requiring HTTPS. The origin policy controls how Cloudfront talks to your origin (in our case the load balancer), and plain HTTP is a supported option.
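The two policies live in different parts of the distribution configuration. Below is an illustrative fragment of a Cloudfront distribution config as a boto3-style dictionary; the origin domain name is a placeholder, but the field names and policy values are the ones the CreateDistribution API uses.

```python
# Fragment of a Cloudfront distribution config showing the two policies.
# The load balancer DNS name is a placeholder.
distribution_config = {
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "alb-origin",
                "DomainName": "my-load-balancer-1234.us-east-1.elb.amazonaws.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    # Origin protocol policy: Cloudfront -> load balancer
                    # traffic stays on plain HTTP.
                    "OriginProtocolPolicy": "http-only",
                },
            }
        ],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "alb-origin",
        # Viewer protocol policy: visitors hitting the Cloudfront URL
        # over HTTP are redirected to HTTPS.
        "ViewerProtocolPolicy": "redirect-to-https",
    },
}
```

The key point is that `OriginProtocolPolicy` and `ViewerProtocolPolicy` are set independently, which is exactly what lets an HTTPS-facing distribution front an HTTP-only origin.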

With that question answered, I set up the Cloudfront distribution to connect to our load balancer over HTTP and expected everything to work. However, requests to my Cloudfront distribution were failing with errors about being unable to reach the origin. After getting some technical support, I learned that the requests were failing because Cloudfront can only reach origins that are publicly accessible from the internet.

To make the load balancer publicly accessible, I needed to update the subnets and security groups to allow public traffic (0.0.0.0/0) on port 80 (HTTP).
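The security group side of that change looks roughly like the following boto3-style ingress rule; the group ID is a placeholder, not one from our deployment.

```python
# Sketch of the ingress rule that opens the load balancer's security
# group to public HTTP traffic. The group ID is a placeholder.
ingress_rule = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            # 0.0.0.0/0 allows all public IPv4 sources on port 80
            "IpRanges": [
                {
                    "CidrIp": "0.0.0.0/0",
                    "Description": "Public HTTP so Cloudfront can reach the origin",
                }
            ],
        }
    ],
}
```

A dictionary like this could be passed to `ec2.authorize_security_group_ingress(**ingress_rule)`. The route table / subnet changes (an internet gateway route, internet-facing scheme) depend on how the load balancer was deployed, so I have not sketched those here.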

Once those network infrastructure changes were made, Cloudfront could reach the origin successfully and requests were successful.

Why use Cloudfront?

You may be wondering: once the load balancer itself was reachable on the public internet via the DNS name AWS gives it by default, why did I bother with Cloudfront at all?

The answer is that I wanted requests from the internet to happen over HTTPS, and our load balancer was only listening on HTTP by default. I could have added a listener for port 443/HTTPS on the load balancer itself, but that would have required far more customization of the load balancer deployment (which comes from third-party code) than I wanted to take on.

I know that Cloudfront still connects to our origin over HTTP internally, but since that traffic originates within AWS itself, there seemed to be at least a minor security benefit to having the public leg of the traffic go over HTTPS (feel free to correct me if I'm wrong).

Also, for extra security you can restrict your load balancer so that it is only reachable via Cloudfront, for example by having Cloudfront attach a secret custom header that the load balancer requires, or by limiting the security group to the Cloudfront managed prefix list. If you do that, the load balancer can only be reached through Cloudfront over HTTPS, not over HTTP via its DNS name.
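One common way to do this restriction is the custom-header approach, which I'll sketch here as an assumption (our setup did not actually implement it). Cloudfront injects a secret header on every origin request, and an ALB listener rule only forwards requests that carry it; the header name and value below are placeholders.

```python
# Hypothetical custom-header lockdown: Cloudfront adds a secret header to
# origin requests, and the ALB only forwards requests that present it.
SECRET_HEADER = "X-Origin-Verify"           # placeholder header name
SECRET_VALUE = "replace-with-a-long-random-secret"  # placeholder secret

# Cloudfront side: custom headers attached to the origin definition
origin_custom_headers = {
    "Quantity": 1,
    "Items": [
        {"HeaderName": SECRET_HEADER, "HeaderValue": SECRET_VALUE}
    ],
}

# ALB side: a listener-rule condition that matches only when the secret
# header is present; the listener's default action would return a 403.
header_condition = {
    "Field": "http-header",
    "HttpHeaderConfig": {
        "HttpHeaderName": SECRET_HEADER,
        "Values": [SECRET_VALUE],
    },
}
```

Requests that hit the load balancer's DNS name directly would lack the header and fall through to the default 403 action, so only traffic proxied through Cloudfront gets forwarded.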

Summary

The key lessons learned for me from this experience were:

  • Cloudfront can be used to proxy traffic to an origin over HTTP or HTTPS
  • Cloudfront can only connect to an origin that is already accessible over the public internet
