Disclaimer: clearly we're a bit partial to Payload, but we've done our best to make this test as fair as possible. We only compare features that Payload, Directus, and Strapi all support equally. That said, the results are very interesting, and the test repos themselves serve as a little bonus showing how vastly different the real-world project workflows are across Payload, Directus, and Strapi.
Links to seed scripts:
Links to GraphQL queries used:
And finally, here's a link to the performance testing script itself.
With this performance test, we wanted to see how a real-world, complex document query would fare when retrieved from each of the three CMSs' GraphQL endpoints. Consider a complex "mega menu" document, where there may be 30-50 "links" to other pages / posts / etc., plus lots of media relations like icons, images, etc. that need to be rendered in a given mega menu. For that one mega menu document alone, we might have to retrieve a ton of "related" documents, with lots of JSON coming back in the response.
In our past experience, this can quickly become problematic (especially if you are server-side rendering) because that mega menu "document" is used by and needs to be retrieved for every single server-rendered view of a given app or website. That means that unless your CMS is heavily optimized, you're going to need to shell out some cash to make sure your server can handle this type of request. To make matters worse, modern frontend frameworks like Gatsby or Next often pre-render views, which means that during the build process, your server could get hammered with requests to your API.
To reflect a moderately complex real-world query, we designed a document structure that features 60+ relationships as well as complex data structures like groups, arrays, nested arrays, and blocks. The document itself is seeded predictably and in exactly the same manner across all three content management systems, and the GraphQL queries that are run are identical apart from CMS-specific syntax differences.
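To give a sense of the shape involved, here's a hypothetical, heavily simplified version of the kind of query we're describing. The collection and field names below are illustrative only; the exact queries we ran are in the repos linked above.

```ts
// Hypothetical, heavily simplified sketch of the mega menu query shape.
// Collection and field names are illustrative, not the exact ones used in the test.
const megaMenuQuery = /* GraphQL */ `
  query MegaMenu {
    MegaMenu {
      items {
        label
        icon {
          url
          alt
        }
        page {
          title
          slug
          featuredImage {
            url
          }
        }
        children {
          label
          page {
            title
            slug
          }
        }
      }
    }
  }
`

export default megaMenuQuery
```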
In all CMS benchmarks, we worked with a local dev environment and used local databases so that latency was eliminated as a factor. The database contents for each CMS benchmark were closely controlled to ensure that the number of documents / rows within each CMS database was as similar as possible. The machine we used to test all three vendors was a 16" MacBook Pro (2021), M1 Max with 32GB of RAM, running Node 16.13.1.
We then wrote a simple script, shared by all three CMS benchmarks, to hammer out 100 sequential fetch requests, each with the same query, against the GraphQL HTTP endpoint. The script reports total test time, min response time, max response time, and average response time.
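Here's a minimal sketch of what that runner might look like, assuming `node-fetch` for requests on Node 16; the endpoint and query are placeholders, and the real script is linked above:

```ts
import fetch from 'node-fetch'

// Minimal sketch of the benchmark runner: 100 sequential POSTs of the same
// GraphQL query, followed by min / max / average / total reporting.
// The endpoint and query are placeholders, not the exact values used in the test.
const ENDPOINT = 'http://localhost:3000/api/graphql'
const QUERY = /* GraphQL */ `query { MegaMenu { items { label } } }`

const run = async (): Promise<void> => {
  const timings: number[] = []

  for (let i = 0; i < 100; i += 1) {
    const start = Date.now()

    await fetch(ENDPOINT, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: QUERY }),
    })

    timings.push(Date.now() - start)
  }

  const total = timings.reduce((sum, t) => sum + t, 0)

  console.log(`Total test duration: ${total}ms`)
  console.log(`Min response time: ${Math.min(...timings)}ms`)
  console.log(`Max response time: ${Math.max(...timings)}ms`)
  console.log(`Average response time: ${Math.round(total / timings.length)}ms`)
}

run()
```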
In Payload
Because everything in Payload is code-based, seeding is super easy. We find that for local development, seeding is an absolute must: it means you don't need to manually click around and create test documents every time, and the codebase can be spun up quickly by as many team members as necessary.
For this reason, in our own projects, we typically set the env variable `PAYLOAD_DROP_DATABASE=true` so that the database is dropped upon every server start, and then we seed an initial set of documents to test and build with. This is a super awesome dev pattern and really increases our team's velocity. It's also reusable for test suites and can help big time with automated testing.
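As a rough illustration of the pattern (not the exact seed used in the benchmark), a seed can live right in Payload's `onInit` hook and use the Local API. The collection slugs and data below are placeholders:

```ts
import express from 'express'
import payload from 'payload'

const app = express()

const start = async (): Promise<void> => {
  await payload.init({
    secret: process.env.PAYLOAD_SECRET || 'dev-secret',
    mongoURL: process.env.MONGODB_URI || 'mongodb://localhost/benchmark',
    express: app,
    onInit: async () => {
      // With PAYLOAD_DROP_DATABASE=true the database is empty on every boot,
      // so we can seed a predictable set of documents here.
      const page = await payload.create({
        collection: 'pages', // placeholder collection slug
        data: { title: 'Home', slug: 'home' },
      })

      await payload.create({
        collection: 'mega-menus', // placeholder collection slug
        data: {
          items: [{ label: 'Home', page: page.id }],
        },
      })
    },
  })

  app.listen(3000)
}

start()
```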
In Directus
With Directus, seeding is quite a bit more challenging because Directus is not code-based. Rather, your field configs are stored in your database itself, so there were a lot of steps for us to get this up and running.
We first had to initialize a project and create a first user to authenticate with. Then we had to design the field schema by pointing and clicking in the Directus UI (not in code). We then wrote a seed script that uses the Directus SDK to generate documents via the REST API. The seed script itself was pretty tricky to write, because the data we needed to pass to "relationship" fields is pretty Directus-specific and we had to do some reverse-engineering to figure it out.
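For reference, the rough shape of that script looked something like the sketch below, assuming the Directus 9 JavaScript SDK; the collection names, credentials, and field shapes are placeholders:

```ts
import { Directus } from '@directus/sdk'

// Rough sketch of the Directus seed script, assuming the Directus 9 JS SDK.
// Collection names, credentials, and field shapes are placeholders.
const directus = new Directus('http://localhost:8055')

const seed = async (): Promise<void> => {
  await directus.auth.login({
    email: 'admin@example.com',
    password: 'password',
  })

  const page = await directus.items('pages').createOne({
    title: 'Home',
    slug: 'home',
  })

  // Relational fields expect Directus-specific shapes (e.g. junction rows for
  // many-to-many relations), which is the part we had to reverse-engineer.
  await directus.items('mega_menu_items').createOne({
    label: 'Home',
    page: page?.id,
  })
}

seed()
```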
Finally, we had to create an SQL dump of the database, which at this point contains all our fields and our first user. We stored the SQL dump in our repo for developers to be able to easily replicate this test without having to manually create a new project and configure all fields. But once you import the database dump, you still need to manually run the seed script to populate the database with documents to test against.
This is all quite a bit more complex than what you have to do in Payload. With Payload, a developer coming into the project for the first time just runs `yarn dev`. That's it. With Directus, there are quite a few more steps. We spent about a day trying to figure out if there was any way to export collection configs / re-import them into a new project, but we gave up because we couldn't find anything in the docs. There are some discussions about adding import / export endpoints, which would be a great feature, but as of now the process was a bit difficult for us.
In Strapi
It took us quite a while to figure out some oddities in the way everything works in Strapi. For example, the concept of an "admin user" is completely different from that of a "regular user," and we needed to write a shell script to create the admin user from the Strapi CLI. We got stuck for a while trying to authenticate via REST with our admin user, only to find out that admin users are entirely separate from regular users. That's certainly a "gotcha".
For seeding, we opted not to use `config/functions/bootstrap.js`, which seems to be the recommended solution. We found it to be incomplete, as it didn't allow us to create users or modify permissions. We ended up using a combination of scripts: one to create an admin user as well as an authenticated regular user, the Config Sync plugin's import command to load permissions, and the REST API to seed the complex documents. Again, with Strapi, there are quite a few hoops to jump through.
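As a rough sketch of that last seeding step, assuming Strapi v4's REST conventions (`/api/auth/local` for a JWT, documents wrapped in a `data` key); collection names, credentials, and field shapes are placeholders:

```ts
import fetch from 'node-fetch'

// Rough sketch of seeding Strapi v4 over REST. Collection names, credentials,
// and field shapes are placeholders, not the exact ones used in the test.
const STRAPI_URL = 'http://localhost:1337'

const seed = async (): Promise<void> => {
  // Authenticate as a regular (non-admin) user to get a JWT.
  const authRes = await fetch(`${STRAPI_URL}/api/auth/local`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ identifier: 'seed@example.com', password: 'password' }),
  })
  const { jwt } = (await authRes.json()) as { jwt: string }

  // Strapi v4 expects the document to be wrapped in a `data` key.
  await fetch(`${STRAPI_URL}/api/mega-menus`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${jwt}`,
    },
    body: JSON.stringify({
      data: {
        items: [{ label: 'Home' }],
      },
    }),
  })
}

seed()
```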
Now that we had documents of the same data shape seeded within each CMS, we set off to run the benchmark tests themselves.
We knew that Payload was fast, and we place a lot of emphasis on ensuring that it performs as quickly as possible, but with our recent addition of the dataloader pattern, the results surprised even us.
| Metric | Time |
| --- | --- |
| Average response time | 15ms |
| Min response time | 8ms |
| Max response time | 43ms |
| Total test duration | 1,513ms |
Directus came in second with some interesting results. The max response time was quite high, but the average was not bad: quite a bit slower than Payload, yet still reasonable.
| Metric | Time |
| --- | --- |
| Average response time | 45ms |
| Min response time | 24ms |
| Max response time | 139ms |
| Total test duration | 4,459ms |
Strapi fell quite a bit behind both Payload and Directus, which is interesting to us given the SQL-based nature of Strapi 4 and the relational nature of our tests.
| Metric | Time |
| --- | --- |
| Average response time | 102ms |
| Min response time | 77ms |
| Max response time | 353ms |
| Total test duration | 10,172ms |
We're super proud of the efficiency that we've been able to produce with Payload and this is only the beginning. Our team is expanding, and we have lots of plans in store over the next few months including even more UI optimizations, new features, and lots of tutorials / example boilerplates. Oh, and Payload Cloud. Keep an eye out for that one because it's going to be awesome.
If you haven't yet given Payload a shot, you can get started with one command:
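```
npx create-payload-app
```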