If you run or manage a web application backed by a database, you may find some SQL queries run slower than expected. This is especially evident as your dataset grows, or when comparing speed between a development copy of the database (with fewer rows) and the production database.

Generally, this is only an issue where custom development has been done, or where a plugin or extension has been added to an existing Content Management System (CMS), and the code hasn't been optimised.

Conetix runs highly optimised infrastructure, which includes the latest Xeon Gold SP processors, high speed RAM and the latest enterprise-grade NVMe storage to provide a highly performant environment for your application and database. However, no amount of hardware or server-level optimisation can fix poorly designed queries or data schemas.

Diagnosing the issue

Contact your developer

In nearly every instance, the first port of call for slow database queries should be your developer. You can pass this guide on to them, however they should already be aware of the issues and solutions raised in this article.

Basic diagnostics

Most database engines provide an EXPLAIN statement (or an equivalent) which details how a SQL query will be executed.

MariaDB / MySQL: https://mariadb.com/kb/en/explain/

This will provide an explanation (an execution plan) for your query and show how it accesses the tables. For MariaDB you can run EXPLAIN via phpMyAdmin, while for MSSQL-based systems you may need SQL Server Management Studio to view the execution plan.

In many instances, the use of temporary tables in the execution plan can be an indication that your database schema is missing indexes.
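As a rough illustration of how to read an execution plan, the sketch below uses SQLite's equivalent (EXPLAIN QUERY PLAN) via Python's standard library. The table and column names here are invented for the example, and the exact syntax and output differ between engines such as MariaDB and MSSQL:

```python
import sqlite3

# Hypothetical schema for illustration only -- your real tables will differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

# Ask the engine how it would execute the query, without actually running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42"
).fetchall()
for row in plan:
    print(row)

# With no index on customer_id, the plan reports a full table scan
# (a row whose detail column reads e.g. 'SCAN orders'), meaning every
# row must be examined each time the query runs.
```

The same principle applies when reading MariaDB's EXPLAIN output: a full table scan (type: ALL) on a large table is a common sign that an index is missing.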

Adding database indexes

When queries span multiple tables, they need a way to reference the data between the two. For example, if you have a table with customer information and a table with order information, a JOIN must be formed between the two tables to select the correct information.

If you don’t have an index, the database must sequentially scan every row until it finds the correct ones, every time you run the query.

The larger your dataset grows and the more complex the query, the slower it will become.

A database index is one way to significantly speed up these queries, as it creates a rapid per-table lookup based on a particular key (eg the customer ID). This is like the index system at your local library: instead of searching through every book, the index quickly points you to the right row and shelf.
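To make this concrete, here is a minimal sketch using SQLite via Python's standard library. The table and index names are invented for illustration; the CREATE INDEX syntax is very similar in MariaDB/MySQL and MSSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

query = "SELECT total FROM orders WHERE customer_id = 42"

# Before: no index on the lookup key, so the plan is a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# Create an index on the key the query filters by.
conn.execute("CREATE INDEX idx_orders_customer_id ON orders (customer_id)")

# After: the plan switches to a search using the new index.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```

Indexes are not free (they add a small cost to writes and consume disk space), so they are best placed on the keys your queries actually filter and join on.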

As an example of the performance difference, this was a production MSSQL query which took up to 20 seconds to complete without indexing, and an average of 40ms after indexing was set up:

[Figure: query times before and after indexing]
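A smaller-scale version of this difference can be reproduced with a quick timing sketch (again using SQLite via Python's standard library; the table name and row counts are arbitrary):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
# Populate 200,000 rows spread across 1,000 customers.
conn.executemany(
    "INSERT INTO orders (customer_id) VALUES (?)",
    [(i % 1000,) for i in range(200_000)],
)

def lookup():
    # Count the orders belonging to one customer.
    return conn.execute(
        "SELECT COUNT(*) FROM orders WHERE customer_id = 123"
    ).fetchone()[0]

start = time.perf_counter()
lookup()
unindexed = time.perf_counter() - start

conn.execute("CREATE INDEX idx_orders_customer_id ON orders (customer_id)")

start = time.perf_counter()
lookup()
indexed = time.perf_counter() - start

print(f"full scan: {unindexed * 1000:.2f} ms, indexed: {indexed * 1000:.2f} ms")
```

On a real production dataset the gap is typically far larger, as in the 20 second to 40ms example above.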

We’ve included links to creating database indexes for both MSSQL and MySQL/MariaDB below.

MariaDB / MySQL: https://dev.mysql.com/doc/refman/8.0/en/optimization-indexes.html

Use a framework

The days of hand-written database queries and schemas have been made largely obsolete by modern development frameworks, which incorporate object-relational mapping (ORM) interfaces and/or higher-level query abstraction and optimisation.

While some of these used to be less efficient than hand-written queries, most have since been heavily optimised and can in many instances outperform hand-written queries as a result. Examples include Laravel (PHP), Django (Python) and Ruby on Rails.

These frameworks, with their database abstraction layers, take care of query structure, schema migration and database indexing on your behalf. This saves development time, improves security and results in a better-optimised database.


It’s good practice to take a backup of your database before attempting any changes.
