
CDK S3 ‘BucketDeployment’ doesn’t have to be slow – increase its memoryLimit parameter

If you’re deploying a static site to Cloudfront via CDK, you might be using the BucketDeployment construct to combine shipping a folder to S3 and causing a Cloudfront invalidation.

Behind the scenes, BucketDeployment creates a custom resource – a Lambda – that wraps a call to the AWS CLI’s s3 cp command to move files from the CDK staging area to the target S3 bucket.

While that’s happening within AWS’s infrastructure, the speed of that copy depends very strongly on the amount of resources the Lambda has – just like any other Lambda, CPU and network bandwidth scale with the requested memory limit.

The default memory limit for the custom resource Lambda is 128MiB – which is the smallest Lambda you can get, and accordingly the performance of that copy might be terrible if you have a lot of files, or large files, to transfer.

I’d strongly recommend upping that limit to 2048MiB or higher. This radically improved upload performance on two applications I deploy, with the upload rate going from ~700KiB/s to >10MiB/s – more than a 10x increase.
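
In CDK terms that’s a one-line change. Here’s a minimal sketch, assuming this sits inside a Stack with a siteBucket and distribution defined elsewhere (both placeholders):

import * as s3deploy from 'aws-cdk-lib/aws-s3-deployment';

new s3deploy.BucketDeployment(this, 'DeployStaticSite', {
  sources: [s3deploy.Source.asset('./dist')],   // the built static site
  destinationBucket: siteBucket,                // placeholder – your target bucket
  distribution,                                 // placeholder – triggers the Cloudfront invalidation
  distributionPaths: ['/*'],
  memoryLimit: 2048,                            // MiB – the default is 128
});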

This has a negligible cost implication, as the Lambda only runs during a deployment and so isn’t running all that frequently anyway. However, the performance improvement is potentially dramatic for complex apps. We saw one build’s S3 upload step drop from ~280s to ~45s – an 84% reduction in that deployment step’s execution time, and about a 15% reduction in the deployment time of that stack overall – just for changing one parameter.

Bucket named ‘cdk-abcd12efg-assets-123456789-eu-west-1’ exists, but not in account 123456789. Wrong account?

When deploying a stack via CDK, you may encounter an error such as

Bucket named 'cdk-abcd12efg-assets-123456789-eu-west-1' exists, but not in account ***. Wrong account?

The most likely culprit here is that the role you’re using to deploy doesn’t have the right permissions on the staging bucket. CDK requires the following (a sketch of granting them follows the list):

  • GetBucketLocation
  • *Object
  • ListBucket
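
For illustration, a hedged CDK sketch of granting those permissions on the staging bucket from the error above – deployRole is a placeholder for whatever role runs cdk deploy:

import * as iam from 'aws-cdk-lib/aws-iam';

// deployRole is a placeholder for the role used to run 'cdk deploy'
deployRole.addToPolicy(new iam.PolicyStatement({
  actions: ['s3:GetBucketLocation', 's3:ListBucket', 's3:*Object'],
  resources: [
    'arn:aws:s3:::cdk-abcd12efg-assets-123456789-eu-west-1',
    'arn:aws:s3:::cdk-abcd12efg-assets-123456789-eu-west-1/*',
  ],
}));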

We hit this recently, and the underlying cause was that the IAM role used to deploy the stack had been amended to have a restricted set of permissions per least-privilege best practice. We’d deployed updates to the stack a number of times, but in this instance the particular change we were making required a re-upload of assets to the staging bucket, which uncovered the missing permission.

Cognito error: “Cannot use IP Address passed into UserContextData”

When using Cognito’s Advanced Security and adaptive authentication features, you need to ship contextual data about the logging-in user via the UserContextData type.

Some of this data is collected via a JavaScript snippet. However, you can also ship the user’s IP address (which the snippet cannot collect) in the same payload.

When doing so, you may get an error from Cognito:

“Cannot use IP Address passed into UserContextData”

Unhelpful error from Cognito

This is likely because you’ve not enabled ‘Accept additional user context data’ on your user pool client – though the error message is pretty opaque.

You can do this in a number of ways:

  • Via the AWS console
  • Via the UpdateUserPoolClient CLI function
  • Via CDK, if you drop down to the Level 1 construct and set “enablePropagateAdditionalUserContextData: true” on your CfnUserPoolClient

Even the latest L2 constructs for Cognito don’t seem to support setting enablePropagateAdditionalUserContextData when controlling a user pool client via CDK, but using the L1 escape hatch is easy enough:

import { CfnUserPoolClient } from 'aws-cdk-lib/aws-cognito';
// Escape hatch: reach the L1 resource behind the L2 UserPoolClient construct
const cfnUserPoolClient = userPoolClient.node.defaultChild as CfnUserPoolClient;
cfnUserPoolClient.enablePropagateAdditionalUserContextData = true;

GitHub Actions, ternary operators and default values

GitHub Actions’ ‘type’ metadata on custom action or workflow inputs is, pretty much, just documentation – it doesn’t seem to be enforced, at least when it comes to supplying a default value. That means that just because you’ve claimed it’s a bool doesn’t make it so.

And worse, it seems that default values get coerced to strings if you use an expression.

At TILLIT we have custom GitHub composite actions to perform various tasks during CI. We recently hit a snag with one roughly structured as follows:

name: ...
inputs:
   readonly:
      type: boolean
      default: ${{ some logic here }}

runs:
  using: "composite"
  steps:
    - name: ...
      uses: ...
      with:
        some-property: ...${{ inputs.readonly && 'true-val' || 'false-val' }}...

That mess in the some-property definition is the closest you can get in Github Actions to a ternary operator in the absence of any if-like construct, where you want to format a string based on some boolean.

In our case – the ‘true’ path was the only path ever taken. Diagnostic logging on the action showed that inputs.readonly was ‘false’. Wait, are those quotes?

Of course they are! The default value ended up being set to be a string, even though the input’s default value expression is purely boolean in nature and it’s specified as being a boolean.

The fix, then, is to our ternary: be very explicit about the comparison being made.

with:
  some-property: ...${{ inputs.readonly == 'true' && 'true-val' || 'false-val' }}...

AWS SAM error “[ERROR] (rapid) Init failed error=Runtime exited with error: signal: killed InvokeID=” in VS Code

When debugging a lambda using the AWS Serverless Application Model tooling (the CLI and probably VS Code extensions), you might find that your breakpoint isn’t getting hit and you instead see an error in the debug console:

[ERROR] (rapid) Init failed error=Runtime exited with error: signal: killed InvokeID=

A thing to check is whether you’re running out of RAM or timing out in execution:

  • Open your launch.json file for the workspace
  • In your configuration, under the lambda section, add a specific memoryMb value – in my case 512 got me moving (see the snippet below)
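
A minimal sketch of what that looks like in launch.json, assuming the AWS Toolkit’s aws-sam debug configuration – the template path and logical ID are placeholders for your own function:

{
  "type": "aws-sam",
  "request": "direct-invoke",
  "name": "Debug MyFunction",
  "invokeTarget": {
    "target": "template",
    "templatePath": "template.yaml",
    "logicalId": "MyFunction"
  },
  "lambda": {
    "memoryMb": 512,
    "timeoutSec": 60
  }
}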

This is incredibly frustrating because the debug console gives you no indication as to why the emulator terminated your lambda – but it’s also helpful, because it tells you ahead of time how much memory you’ll need to give your lambda when you deploy it.

Invalid Request error when creating a Cloudfront response header policy via Cloudformation

I love Cloudformation and CDK, but sometimes neither will show an issue with your template until you actually try to deploy it.

Recently we hit a stumbling block while creating a Cloudfront response header policy for a distribution using CDK. The cdk diff came out looking correct, no issues there – but on deploying we hit an Invalid Request error for the stack.

An error displayed in the Cloudfront 'events' tab, indicating that there was an Invalid Request but giving no further clues
Cloudformation often doesn’t give much additional colour when you hit a stumbling block

The reason? We’d added a temporarily-disabled XSS protection header, but kept in the reporting URL so that when we turned it on it’d be correctly configured. However, Cloudfront rejects the creation of the policy if you spec a reporting URL on a disabled header setup.

The Cloudfront response headers policy docs make it pretty clear this isn’t supported, but Cloudformation can’t validate it for us.
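
For illustration, a hedged CDK sketch of the offending configuration and the fix – the construct scope and reporting URL are placeholders:

import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';

// Rejected by Cloudfront at deploy time: a report URI on a disabled header
new cloudfront.ResponseHeadersPolicy(this, 'BrokenPolicy', {
  securityHeadersBehavior: {
    xssProtection: {
      protection: false,                            // temporarily disabled...
      reportUri: 'https://example.com/xss-report',  // ...but still carrying a report URI
      override: true,
    },
  },
});

// Accepted: drop the report URI until the header is re-enabled
new cloudfront.ResponseHeadersPolicy(this, 'WorkingPolicy', {
  securityHeadersBehavior: {
    xssProtection: { protection: false, override: true },
  },
});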

A screenshot of a validation error message indicating that X-XSS-Protection cannot contain a Report URI when protection is disabled
Just jumping into the console to try creating the resource by hand is often the most effective debugging technique

How to diagnose Invalid Request errors with Cloudformation

A lot of the time the easiest way to diagnose an Invalid Request error when deploying a Cloudformation template is to just create the resource by hand in the console in a test account, and see what breaks. In this instance, the error was very clear and it was a trivial patch to fix up the Cloudformation template and get ourselves moving.

Unfortunately, Cloudformation often doesn’t give as much context as the console when it comes to validation errors during stack creation – but hand-cranking the affected resource both gives you quicker feedback and a better feel for what the configuration options are and how they hang together.

A rule of thumb is that if you’re getting an Invalid Request back, chances are it’s essentially a validation error on what you’ve asked Cloudformation to deploy. Check the docs, simplify your test case to pinpoint the issue and don’t be afraid to get your hands dirty in the console.

DMARC failures even when AWS SES Custom Mail-From domain used

I was caught out by this, this week, so hopefully future-me will remember quicker how to fix this one.

Scenario

  • You want to get properly configured for DMARC for a domain you’re sending emails from via AWS SES
  • You’ve verified the sender domain as an identity
  • You’ve set up DKIM and SPF
  • You’ve set up a custom MAIL FROM
  • You’re still seeing SPF-related DMARC failures when sending emails

In my case, those failures were caused because I was sending email from a different identity that uses the same domain.

For example, I had ‘example.com’ set up as a verified identity in SES allowing me to send email from any address at that domain, and I configured a sender identity ‘contact@example.com’ to be used by my application to send emails so that I could construct an ARN for use with Cognito or similar.

What isn’t necessarily obvious is that you need to enable the custom MAIL FROM setting for the sender identity, and not just for the domain identity that you’ve configured assuming you have multiple. AWS SES does not fall back to the configuration for the domain identity and you have to individually enable custom MAIL FROM for each sender identity – even if the configuration is identical.
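
In CDK terms – a hedged sketch using the L2 EmailIdentity construct, with illustrative domain names – that means setting mailFromDomain on both identities, not just the domain one:

import * as ses from 'aws-cdk-lib/aws-ses';

// The domain identity, with its custom MAIL FROM
new ses.EmailIdentity(this, 'DomainIdentity', {
  identity: ses.Identity.domain('example.com'),
  mailFromDomain: 'mail.example.com',
});

// The address identity needs the same setting explicitly – it doesn't inherit it
new ses.EmailIdentity(this, 'SenderIdentity', {
  identity: ses.Identity.email('contact@example.com'),
  mailFromDomain: 'mail.example.com',
});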

So in my case, the fix was:

  • Edit the Custom MAIL FROM setting for contact@example.com
  • Enable it to use mail.example.com (which was already configured)
  • Save settings

Using an AWS role to authenticate with Google Cloud APIs

I recently had a requirement to securely access a couple of Google Cloud APIs as a service account user, where those calls were being made from a Fargate task running on AWS. The until-relatively-recently way to do this was:

  • Create a service account in the Google Cloud developer console
  • Assign it whatever permissions it needs
  • Create a ‘key’ for the account – in essence a long-lived private key used to authenticate as that service account
  • Use that key in your Cloud SDK calls from your AWS Fargate instance

This isn’t ideal, because of that long-lived credential in the form of the ‘key’ – it can’t be scoped to require a particular originator and while you can revoke it from the developer console, if the credential leaks you’ve got an infinitely long-lived token usable from anywhere – you’d need to know it had leaked to prevent its use.

Google’s Workload Identity Federation is the new hotness in that regard, and is supported by almost all of the client libraries now. Not the .NET one though, irritatingly, which is why this post from Johannes Passing is, if you need to do this from .NET-land, absolutely the guide to go to.

The new approach is more in line with modern authentication standards and uses federation between AWS and Google Cloud to support generating short-lived, scoped credentials that are used for the actual work and no secrets needing to be shared between the two environments.

The docs are broadly excellent, but I was pleased at how clever the AWS <-> Google Cloud integration is given that there isn’t any AWS-supported explicit identity federation actually happening, in the sense of established protocols (like OIDC, which both clouds support in some fashion).

How it works

On the Google Cloud side, you set up a ‘Workload identity pool’ – in essence a collection of external identities that can be given some access to Google Cloud services. Aside from some basic metadata, a pool has one or more ‘providers’ associated with it. A provider represents an external source of identities, for our example here AWS.

A provider can be parameterised:

  • Mappings translate between the incoming assertions from the provider and those of Google Cloud’s IAM system
  • Conditions restrict the identities that can use the identity pool via a rich syntax

You can also attach Google service accounts to the pool, allowing those accounts to be impersonated by identities in the pool. You can restrict access to a given service account via conditions, in a very similar way to restricting access to the pool itself.

To get an access token on behalf of the service account, a few things are happening (in the background for most client libraries, and explicitly in the .NET case).

Authenticating with the pool

In AWS land, we authenticate with the Google pool by asking it to exchange a provider-issued token for one that Google’s STS will recognise. For AWS, the required token is (modulo some encoding and formatting) a signed ‘GetCallerIdentity’ request that you might yourself send to the AWS STS.

Our calling code in AWS-land doesn’t finish the call – we don’t need to. Instead, we sign a request and then pass that signed request to Google which makes the call itself. We include in the request (and the fields that are signed over) the URI of the ‘target resource’ on the Google side – the identity pool that we want to authenticate to.
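
Here’s a hedged sketch of that exchange – not the post’s .NET code, and the pool name, project number and the aws4 signing library are all assumptions:

import aws4 from 'aws4';

// The identity pool provider we're authenticating to – this URI is signed over
const audience =
  '//iam.googleapis.com/projects/123456789/locations/global/' +
  'workloadIdentityPools/my-pool/providers/my-aws-provider';

// Sign a GetCallerIdentity request with our AWS credentials
// (aws4 reads them from the environment in this simplified sketch)
const signed = aws4.sign({
  service: 'sts',
  region: 'us-east-1',
  host: 'sts.amazonaws.com',
  method: 'POST',
  path: '/?Action=GetCallerIdentity&Version=2011-06-15',
  headers: { 'x-goog-cloud-target-resource': audience },
});

// Package the signed-but-unsent request as the subject token Google expects
const subjectToken = encodeURIComponent(JSON.stringify({
  url: 'https://sts.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15',
  method: 'POST',
  headers: Object.entries(signed.headers!).map(([key, value]) => ({ key, value: String(value) })),
}));

// Google's STS makes the GetCallerIdentity call itself and, if it checks out,
// hands back a short-lived federated access token
const response = await fetch('https://sts.googleapis.com/v1/token', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    audience,
    grantType: 'urn:ietf:params:oauth:grant-type:token-exchange',
    requestedTokenType: 'urn:ietf:params:oauth:token-type:access_token',
    scope: 'https://www.googleapis.com/auth/cloud-platform',
    subjectTokenType: 'urn:ietf:params:aws:token-type:aws4_request',
    subjectToken,
  }),
});
const { access_token: federatedToken } = await response.json();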

The response from AWS to Google’s call to the STS will include the ARN of the identity for whom credentials on the AWS side are available. If you’re running in ECS or EC2, these will represent the IAM role of the executing task.

We need to share nothing secret with Google to do this, and we can’t fake an identity on AWS that we don’t have access to.

  • The ARN of the identity returned in the response to GetCallerIdentity includes the AWS account ID and the name of any assumed role – the only thing we could ship to Google is proof of an identity that we already have access to on the AWS side.
  • The Google workload identity pool identifier is signed over in the GetCallerIdentity request, so the token we send to Google can only be used for that specific identity pool (and Google can verify that, again with no secrets involved). This means we can’t accidentally ship a token to the wrong pool on the Google side.
  • The signature can be verified without access to any secret information by just making the request to the AWS STS. If the signature is valid, Google will receive an identity ARN, and if the payload has been tampered with or is otherwise invalid then the request will fail.

None of the above requires any cooperation between AWS and Google cloud, save for AWS not changing ARN formats and breaking identity pool conditions and mappings.

What happens next?

All being well, the Google STS returns to us a temporary access token that we can then use to generate a real, scoped access token to use with Google APIs. That token can be nice and short lived, restricting the window over which it can be abused should it be leaked.
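
A hedged continuation of the earlier sketch – exchanging that federated token for a scoped service account token via the IAM Credentials API; the service account email is a placeholder:

const saEmail = 'my-service-account@my-project.iam.gserviceaccount.com';

const impersonation = await fetch(
  `https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${saEmail}:generateAccessToken`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${federatedToken}`, // the token from the STS exchange above
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      scope: ['https://www.googleapis.com/auth/cloud-platform'],
      lifetime: '600s', // keep it short-lived
    }),
  }
);
const { accessToken } = await impersonation.json(); // use this with the Google APIs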

What about for long-lived processes?

Our tokens can expire in a couple of directions:

  • Our AWS credentials can and will expire and get rolled over automatically by AWS (when not using explicit access key IDs and just using the profile we’re assuming from the execution role of the environment)
  • Our short-lived Google service account credential can expire

Both are fine and handled the same way – re-run the whole process. Signing a new GetCallerIdentity request is quick, trivial and happens locally on the source machine. And Google just has to make one API call to establish that we’re still who we said we were and offer up a temporary token to exchange for a service account identity.

How to (not) do depth-first search in Neo4j

I found a Stack Overflow question with no answers that seemed like it should be straightforward – how can you traverse a tree-like structure in depth-first order? The problem had a couple of features:

  • Each node had an order property that described the order in which sibling nodes should be traversed
  • Each node was connected to its parent via a PART_OF relationship

A depth-first traversal of a tree is pretty easy to understand.

Whenever we find a node with children, we choose the first and explore as deep into the tree as we can until we can’t go any further. Next we step up one level and choose the next node we haven’t explored yet and go as deep as we can on that one until we’ve traversed the graph.

Neo4j supports a depth-first traversal of a graph by way of the algo.dfs.stream function.

Given some tree-like graph where nodes of label ‘Node’ are linked by relationships of type :PART_OF:

// First, some test data to represent a tree with nodes connected by a
// 'PART_OF' relationship:
// N1 { order: 1 }
//  N2 { order: 1 }
//    N4 { order: 1 }
//      N5 { order: 1 }
//      N6 { order: 2 }
//  N3 { order: 2 }
//    N7 { order: 1 }
MERGE (n1: Node { order: 1, name: 'N1' })
MERGE (n2: Node { order: 1, name: 'N2' })
MERGE (n3: Node { order: 2, name: 'N3' })
MERGE (n4: Node { order: 1, name: 'N4' })
MERGE (n5: Node { order: 1, name: 'N5' })
MERGE (n6: Node { order: 2, name: 'N6' })
MERGE (n7: Node { order: 1, name: 'N7' })
MERGE (n2)-[:PART_OF]->(n1)
MERGE (n4)-[:PART_OF]->(n2)
MERGE (n5)-[:PART_OF]->(n4)
MERGE (n6)-[:PART_OF]->(n4)
MERGE (n3)-[:PART_OF]->(n1)
MERGE (n7)-[:PART_OF]->(n3)

We can see which nodes are visited by Neo4j’s DFS algorithm:

MATCH (startNode: Node { name: 'N1' } )
CALL algo.dfs.stream('Node', 'PART_OF', 'BOTH', id(startNode))
YIELD nodeIds 
UNWIND nodeIds as nodeId
WITH algo.asNode(nodeId) as n
RETURN n

The output here will vary – possibly even between runs. While we’ll always see a valid depth-first traversal of the nodes in the tree, there’s no guarantee that we’ll always see nodes visited in the same order. That’s because we’ve not told Neo4j in what order to traverse sibling nodes.

If you need control over the order siblings are expanded, you should use application code to write the DFS yourself.
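
A minimal sketch of what that application-side DFS might look like with the JavaScript driver – connection details are placeholders, and it assumes the same Node/PART_OF/order scheme as the examples below:

import neo4j, { Session } from 'neo4j-driver';

const driver = neo4j.driver('bolt://localhost:7687', neo4j.auth.basic('neo4j', 'password'));

// Visit a node, then recurse into its children in 'order' order
async function dfs(session: Session, name: string, visited: string[] = []): Promise<string[]> {
  visited.push(name);
  const result = await session.run(
    `MATCH (child:Node)-[:PART_OF]->(:Node { name: $name })
     RETURN child.name AS name ORDER BY child.order`,
    { name }
  );
  for (const record of result.records) {
    await dfs(session, record.get('name'), visited);
  }
  return visited;
}

const session = driver.session();
console.log(await dfs(session, 'N1')); // e.g. [ 'N1', 'N2', 'N4', 'N5', 'N6', 'N3', 'N7' ]
await session.close();
await driver.close();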

But: given some constraints and accepting some caveats…

  • That there’s only one relationship type that links nodes in the tree
  • That sibling nodes are sortable by some numeric property – here ‘order’, which is mandatory
  • There are not more than 1,000,000 sibling nodes for any given parent
  • Sibling nodes all have a distinct order property value
  • That this will perform like a dog on large graphs – potentially not completing, given it has some N^2 characteristics…

…you can do this in pure Cypher. Here’s one approach, which we’ll then break down
to see how it works:

MATCH (root: Node { name: 'N1' }), pathFromRoot=shortestPath((root)<-[:PART_OF*]-(leaf: Node)) WHERE NOT ()-[:PART_OF]->(leaf)
WITH nodes(pathFromRoot) AS pathFromRootNodes
WITH pathFromRootNodes, reduce(pathString = "", pathElement IN pathFromRootNodes | pathString + '/' + right("00000" + toString(pathElement.order), 6)) AS orderPathString ORDER BY orderPathString
WITH reduce(concatPaths = [], p IN collect(pathFromRootNodes) | concatPaths + p) AS allPaths
WITH reduce(distinctNodes = [], n IN allPaths | CASE WHEN n IN distinctNodes THEN distinctNodes ELSE distinctNodes + n end) AS traversalOrder
RETURN [x in traversalOrder | x.name]

Finding the deepest traversals

Given some root node, we can find a list of traversals to each leaf node using shortestPath. A leaf node is a node with no children of its own, and shortestPath (so long as we’re looking at a tree) will tell us the series of hops that get us from that leaf back to the root.

Sorting the paths

We’re trying to figure out the order in which these paths would be traversed, then extract the nodes from those paths to find the order in which nodes would be visited.

The magic is happening in this line:

WITH pathFromRootNodes, reduce(pathString = "", pathElement IN pathFromRootNodes | pathString + '/' + right("00000" + toString(pathElement.order), 6)) AS orderPathString ORDER BY orderPathString

The reduce is, given a path from root to leaf, building up a string that combines the order property of each node in the path, with forward-slashes to separate them. This is much like folder paths in a file system. To make this work, we need each segment of the path to be the same length – therefore we pad out the order property with zeroes to six digits, to get paths like:

/000001/000001/000001
/000001/000001/000002
/000001/000002

These strings now naturally sort in a way that gives us a depth-first traversal of a graph using our order property. If we order by this path string we’ll get the order in which leaf nodes are visited, and the path that took us to them.

Deduplicating nodes

The new problem is extracting the traversal from these paths. Since each path is a complete route from the root node to the leaf node, the same intermediate nodes will appear multiple times across all those paths.

We need a way to look at each of those ordered paths and collect only new nodes – nodes we haven’t seen before – and return them. As we do this we’ll be building up the node traversal order that matches a depth-first search.

WITH reduce(concatPaths = [], p IN collect(pathFromRootNodes) | concatPaths + p) AS allPaths
WITH reduce(distinctNodes = [], n IN allPaths | CASE WHEN n IN distinctNodes THEN distinctNodes ELSE distinctNodes + n end) AS traversalOrder

First we collect all the paths (which are now sorted by our traversal ordering) into one big list. The same nodes are going to appear more than once for the reasons above, so we need to remove them.

We can’t just DISTINCT the nodes, because there’s no guarantee that the ordering that we’ve worked hard to create will be maintained.

Instead, we use another reduce and iterate over the list of nodes, only adding a node to our list if we haven’t seen it before. Since the list is ordered, we take only the first of each duplicate and ignore the rest. Our CASE statement is doing the heavy lifting here:

WITH reduce(distinctNodes = [], n IN allPaths | CASE WHEN n IN distinctNodes THEN distinctNodes ELSE distinctNodes + n end) AS traversalOrder

Equivalently:

  • Create a variable called distinctNodes and set it to be an empty list
  • For each node n in our flattened list of nodes in each path from root to each leaf:
  • If we’ve seen n before (if it’s in our ‘distinctNodes’ list) then set distinctNodes = distinctNodes – effectively a no-op
  • If we haven’t seen n before, set distinctNodes = distinctNodes + n – adding it to the list

This is a horrendously inefficient operation – for a very broad, shallow tree (one where each node has many branches) we’ll be doing on the order of n^2 operations. Still, it’s only for fun.

We’re done! From our original graph, we’re expecting a traversal order of:

N1, N2, N4, N5, N6, N3, N7

And our query?

["N1","N2","N4","N5","N6","N3","N7"]

Another for the annals of ‘Just because you can, doesn’t mean you should’.

Mirroring the NuGet Catalog API locally

TL;DR: You can pull the 3GB clone of the Catalog API, though note that it will unpack to 4.8 million files over 1.6 million folders and weigh in at about 52GB uncompressed.

You can pull the .NET Core console application that produced the clone from GitHub.

NuGet is the .NET package manager, and https://nuget.org hosts almost all publicly-published NuGet packages. A package is essentially a ZIP file with a metadata component and some binaries. The metadata portion details version and author information, descriptions and so on – it also lists the dependencies of the package upon other packages in the ecosystem using SemVer.

Package management data sources like this are interesting for playing around with graphs. They’re big, well-used, well-structured data sets of highly connected data. At time of writing, there are 167,733 unique packages published with over 1.8 million versions of those packages listed.

As part of an upcoming series, I wanted to load that information into a Neo4j graph database to see if there were any interesting insights or visualisations we could create when given access to such a big data set. Unfortunately each call to the API takes between 100ms and 500ms – doesn’t sound like much, but if you’re pulling 4.8 million documents you’re looking at around 23 days of time just sequentially pulling files.

You also have to process the files sequentially – each catalog page has a commit timestamp that gives it a strong ordering, and catalog entries are essentially events that have happened to a package version. It’s possible a single package version has multiple different package metadata pages associated with it spanning an arbitrary period of time as the package is listed, de-listed or metadata amended.

I wanted to have Neo4j load the data via REST API calls, rather than going with a standard file load as that was the point of the exercise. This meant that not only did I have to clone the dataset, but I had to host it locally so that it looked like the live API.

Catalog API

The Catalog API exposes every version of every package published to NuGet. Publishes and updates to published packages are recorded as separate documents, and the catalog is paged into batches of 500 or so changes per batch, plus or minus.

There are nearly 9,000 batches reported by the Catalog API, and a total of around 4.8 million documents. Some of those documents relate to the same version of a package – for example, when a package gets de-listed for some reason there will be an updated document in one of the Catalog API pages detailing the new state of the package.

There’s no rate limit on the Catalog API – it’s just files hosted in Azure blob storage – and each document contains navigable URLs to related information. If we wanted to pull a clone of the Catalog API, we could just start from the root document and crawl the links found.
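
A hedged sketch of that starting point – the real tool is a .NET console app, and this just shows the shape of the crawl’s first hop:

// Fetch the catalog index and enumerate the page documents it links to
const CATALOG_INDEX = 'https://api.nuget.org/v3/catalog0/index.json';

const index = await (await fetch(CATALOG_INDEX)).json();
const pageUrls: string[] = index.items.map((page: { '@id': string }) => page['@id']);
console.log(`${pageUrls.length} catalog pages to crawl`); // ~8,800 at the time of writing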

Cloning the documents

Even though Neo4j needs to process the files sequentially, we don’t need to clone them sequentially. To retrieve 4.8 million documents in any sensible amount of time, we need to go multi-threaded. I bashed together a quick .NET console app to do just that.

The app does a pretty simple breadth-first search of links found in documents. We start off with the catalog root URL, which details the URL of each of the 8,800-ish catalog pages.

We then search for anything that looks like a URI within the file, check we’ve not already processed that URI and then add it to a ConcurrentQueue. Since the .NET BCL doesn’t have a ConcurrentSet, we use a ConcurrentDictionary and just care about the keys within it.

The queue ends up containing 4.8 million items at its peak, and the ‘processed files set’ slowly grows as the queue is drained.

We spin out 64 System.Threading.Tasks.Task objects in an array. Each Task takes a single URI from the queue, quickly validates that we actually have work to do, pulls the contents of the file and parses out any new URIs. Each new URI is added to the processing queue, and the contents of the file are written to disk in a folder structure that mirrors the segments of the URI so that we can easily host it later. The Task then polls the queue again and waits until there’s more work to do, or until the queue is drained of work items.

The method doing the work just spins on the queue until we run out of work to do

Every minute or so a watchdog Task clones the work queue and the ‘processed’ set and persists them to disk. Fun fact – awaiting a WriteLineAsync is incredibly slow relative to just letting it block, especially when you’re calling it millions of times in a loop.

On startup, the app looks for these checkpoint files and reloads its state from them if found – that way we can pause the processing by just killing the process.

At peak the process was using around 2GB of RAM and pulling down files at around 10MBps.


You can find the source for the application on GitHub, as well as a link to the archive – but the tool itself would need more work before you could, for example, resume from a snapshot tarball such as this.

24 hours later, what I ended up with was… extensive.

Just calculating the summary took twelve minutes
The unnerving feeling of seeing 1.6 million folders in a single folder doesn’t really go away

Now what?

Serving a mirror of the Catalog API with Nginx

Now that we’ve got our archive of files, we can spin out a super simple docker-compose script to host Nginx, serve the files from some URL and then replace our usages of api.nuget.org with localhost:8192 or whatever.

First off, the docker-compose file – we’re assuming that the file lives in the same folder as the api.nuget.org root folder of documents:

web:
  image: nginx
  volumes:
   - ./api.nuget.org:/usr/share/nginx/html:ro
   - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
  ports:
   - "8192:80"

You’ll have to make sure Docker has access to the drive where the mirrored files are stored to pull this off. We just serve that folder directly as Nginx’s default document base, but then also map in a file to configure Nginx to rewrite any URL that looks like ‘api.nuget.org’ to point to localhost:

server {
    listen 80;
    
    location / {
    	root /usr/share/nginx/html;
        sub_filter 'https://api.nuget.org/'  'http://$http_host/';
        sub_filter_types 'application/json';
        sub_filter_once off;
    }
}

Because the crawler just saved files to a directory structure that matches the URI components, we can trivially serve the files without needing to translate names.

A quick docker-compose up and we’re off to the races.

Of course, we could just use the .JSON files directly, without getting Nginx in the way – but since the Neo4j post I did this for was about crawling a REST API, it seemed like a bit of a cheat to just index files on disk instead.

Downloads

You can download the archive – it’ll take… some time to decompress, and you’ll end up with around 52GB of space consumed on the drive once you’ve done so, as well as 4.8 million new files and 1.6 million new folders – so maybe do this on a drive you don’t care that much about.

The source for the crawler is available on GitHub though it’s super rough-and-ready and provided with absolutely no guarantees, aside from that it’s likely to make your day worse by running it.