Building Performant REST Services with Oracle REST Data Services
Table of contents
- Start with the SQL
- ORDS Configuration
- Composite Services
- Payload Considerations
- Authentication
In my previous post, I talked about performance considerations when consuming REST web services from Oracle APEX. In this post, I will discuss strategies for improving the performance of REST services that you build using Oracle REST Data Services (ORDS).
I discovered ORDS when version 2 came out (aka the APEX Listener). It revolutionized my approach to building integrations to the Oracle Database. ORDS provides a simple, secure, scalable solution for exposing your Oracle database to other systems. As my use of ORDS has grown over the years, performance has become a critical factor in providing scalable integrations to other systems.
In this post, I will cover five areas to look at when considering the performance of ORDS REST services:
- Start with the SQL
- ORDS Configuration
- Composite Services
- Payload Considerations
- Authentication
This post assumes you are already familiar with developing REST services with ORDS.
Start with the SQL
Whenever someone asks me to look at a poorly performing APEX page, the first thing I look at is the SQL (or PL/SQL). APEX itself is rarely the problem. The same advice goes for ORDS. Nine times out of ten, performance issues can be tied to poorly performing SQL or PL/SQL.
Run a SQL explain plan for every SQL statement involved in the REST service. You may think you have written the best possible SQL statement, but until you see what the optimizer will do with it, you never really know. Take the time to understand the explain plan and tune your code accordingly.
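For example, assuming a hypothetical orders table and bind variable, you can review the optimizer's plan from SQL*Plus or SQLcl like this:

```sql
-- Ask the optimizer for its plan (table, column, and bind names are hypothetical).
EXPLAIN PLAN FOR
  SELECT order_id, order_total
  FROM   orders
  WHERE  customer_id = :customer_id;

-- Display the plan that was just generated.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Watch for full table scans on large tables and confirm the access paths match what you expected.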
Use SQL if You Can
If you are building a GET web service and can generate the response using a SQL statement, then do so. ORDS generates the JSON response for you and even handles pagination. You can also be sure it generates the JSON using the most efficient method. Generating a response using a PL/SQL package/procedure can be powerful, but it incurs overheads. These include your code having to build and format the JSON response and context switching between PL/SQL and SQL.
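As a minimal sketch (the module, path, and table names are hypothetical), a collection-feed handler backed by a plain SELECT can be defined with the ORDS PL/SQL API:

```sql
BEGIN
  -- GET handler backed by a SELECT statement; ORDS generates the
  -- JSON response and handles pagination for you.
  ORDS.DEFINE_SERVICE(
    p_module_name    => 'demo.orders',
    p_base_path      => 'orders/',
    p_pattern        => '.',
    p_method         => 'GET',
    p_source_type    => ORDS.SOURCE_TYPE_COLLECTION_FEED,
    p_source         => 'SELECT order_id, customer_id, order_total FROM orders',
    p_items_per_page => 25);
  COMMIT;
END;
/
```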
ORDS Configuration
JDBC Connection Pool
ORDS utilizes a JDBC connection pool to service inbound requests. This is what makes ORDS (and APEX) so scalable. ORDS passes out connections, reclaims them when they are freed up, and then passes them out to subsequent requests as they come in.
Many articles have been written on tuning the JDBC connection pool, so I won't go into detail here. Suffice it to say that most articles focus on fine-tuning the maximum number of connections in the pool (jdbc.MaxLimit). Although it may be tempting to set this very high, you should not. The load on the database just to maintain these connections can become significant. If a very high number of connections are all doing work, the database can start to struggle: each request is processed more slowly, connections are held longer, and the connection count climbs even higher as new requests keep arriving. This can send the database into a death spiral. Getting the connection pool size right gives the database room to service requests quickly so their connections can be released back to the pool. Additional requests may get turned away, but at least the database will remain stable.
You should also pay attention to the minimum number of connections (jdbc.MinLimit). This is the number of connections that will be kept open even if there are no recent requests. You may think this is unimportant because ORDS will spin up connections when needed. The problem is that there is an overhead (albeit a small one) in creating a database connection. Keeping the right number of connections open at all times reduces this overhead. For example, if you expect a load of 10 requests per second with spikes up to 50, you may want to set a minimum of 10. This way, there will always be ten open connections ready to go.
The below configuration instructs ORDS to start up with 10 connections (jdbc.InitialLimit), keep 10 connections open at all times (jdbc.MinLimit) and cycle up to 50 connections (jdbc.MaxLimit) if necessary.
<entry key="jdbc.InitialLimit">10</entry>
<entry key="jdbc.MinLimit">10</entry>
<entry key="jdbc.MaxLimit">50</entry>
You should tune connection pools based on the number of concurrent users/consumers, not the total number of users. With the above configuration, you would need 51 requests in flight at the same instant to exceed the MaxLimit. If your web services are tuned well, you could be processing requests in 200 milliseconds (or five requests per second per connection).
This gives you a theoretical limit of processing five requests / second * 50 connections = 250 requests per second with a connection pool MaxLimit of just 50.
When dealing with connection pools, you also need to be aware of the average duration of your REST services. In the above example, I said we could handle 250 requests per second if each request took 200ms and we had 50 available connections. What if each request took 2 seconds instead of 200ms? Using these metrics, we go from servicing 250 requests a second down to just 25 (0.5 requests/second * 50 connections).
The same principle goes for tuning the connection pool for APEX. If you have an APEX page that runs 10,000 times a day and takes half a second to load, getting the page to run in a quarter of a second can significantly impact throughput.
REST Service Definition Caching
As of ORDS version 21.1, you can cache ORDS REST service definitions on the ORDS server. ORDS REST service definitions are maintained in the ORDS_METADATA schema of the database in the form of modules, templates, and handlers. Typically, when you make a request, ORDS first has to look up the code for the REST service in ORDS_METADATA. Only then can it execute the code and return the response. Caching web service definitions on the ORDS server saves precious milliseconds by avoiding the lookup of the web service definition and related privileges. Kris Rice has a write-up on how to set this up, along with some benchmarks, in this post; the ORDS documentation covers the settings as well.
In Kris's post, he experienced a 5x improvement in throughput when caching was set to 1 minute (vs. no caching).
This is a big deal, especially if you have a high volume of short-running calls to your web services. You won't see this kind of improvement if you have long-running web services. Even so, performance is a game of inches: every ten milliseconds ORDS does not spend on one request is ten milliseconds it can spend on another. In short, you should switch caching on immediately and fine-tune the cache duration based on your needs.
Note: Remember, when you deploy an update to a web service definition, consumers won't pick up the changes until the cache is refreshed. Because of this, I would not set the cache duration any longer than 1 minute.
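Assuming the same XML configuration style shown for the connection pool, the cache is controlled by two settings (verify the names and value formats against the documentation for your ORDS version; the timeout takes an ISO-8601 duration):

```xml
<entry key="cache.metadata.enabled">true</entry>
<entry key="cache.metadata.timeout">PT1M</entry>
```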
ORDS Server Sizing
ORDS is a pretty lightweight solution. I have had customers running production systems with hundreds of thousands of APEX and REST service requests per day on two load-balanced Linux VMs with 2 CPUs and 4 GB of RAM each. Ensure you understand your expected load and adjust your hardware appropriately before problems occur.
Load Test your Environment
All too often, performance is a problem that you find out about because someone is shouting at you. Instead of letting performance issues happen to you, I advise that you proactively load test your environment. This allows you to discover areas for improvement, adjust server sizing, connection pools, etc., and see how your environment will behave under certain conditions. Make load testing a formal project and dedicate enough resources to do a thorough job. Also, don't just do it once and assume you are done. The number of users changes over time, as does the number of records in tables, which can change your environment's overall balance. Make load testing a regular event.
Finally, make sure you load test on hardware similar to your production environment. That way, you can be confident your results will translate to your production environment.
Load Testing Tools
If I am load testing ORDS REST services, I use Siege. Although Siege has a few dependencies, it is straightforward to use. I recommend installing it on a server close to your ORDS server. Doing this removes the network from the equation. It also allows you to configure a VM that can open sufficient threads to create a representative load.
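As a sketch (the URL and numbers are hypothetical), a one-minute benchmark run with 25 concurrent users against a GET endpoint looks like this:

```shell
# 25 concurrent users, run for one minute, benchmark mode (no think time)
siege -c 25 -t 1M -b "https://myserver.example.com/ords/demo/orders/"
```

Start with a modest concurrency and step it up between runs, watching the transaction rate and response times Siege reports at the end of each run.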
You can also use Siege to do basic load testing of APEX applications. To do this, you will need to temporarily make some key pages public. If you want to do more complex load testing with APEX that involves logging in to your application, etc., I suggest using Apache JMeter.
Keep ORDS Up to Date
As we saw with REST service definition caching, new features are always being added to ORDS. My advice is never to fall more than one year behind the latest version of ORDS (the same goes for APEX). If it were my company, I would plan annual upgrades for ORDS (and APEX). More frequent upgrades offer the following advantages:
- Uptake security fixes, new features, and performance improvements sooner
- They are generally less risky because the software has not changed as much
- They allow you to get good at upgrades and make them routine. This includes having well-scripted regression tests and automated upgrade scripts
Composite Services
I have read many opinions on adhering to REST principles (services must be stateless, use nouns for resource names, etc.). One of these principles is to organize your REST services around resources. For example, an order capture application may have resources for items, item availability, shopping cart, customers, orders, etc. These principles are great, and I generally advise you to follow them. Having said that, if the consumer of your web service has to wait 20 seconds for a response just because of these principles, then the consumer will go elsewhere. Occasionally, you need to combine data from multiple resources into one 'composite service'. Composite services save the consumer from making multiple requests (with multiple network round trips) to get a related data set.
Let's use a shopping cart as an example. If you adhere strictly to REST principles, the consumer would need to call a cart, customer, item, item price, and item on hand services to get all the data required to display the cart. If you instead created one composite service that combined all of this data in a single request, you would eliminate all but one network round trip.
You can still apply DRY (don't repeat yourself) principles on the service's back end, i.e., use common views or PL/SQL APIs to collate the item, item on hand, item price... data. Think of a composite service as a wrapper around code you already have.
Payload Considerations
There are two payloads to consider. The first is the request payload from the REST service consumer, e.g., when calling a web service to place an order, the consumer has to send you the details of their shopping cart. The second is the response payload you send back to the consumer.
Most POST and PUT requests involve the consumer sending you data in the request body. When considering performance, one of the most significant overheads for an ORDS REST service is parsing the payload so you can use it in your PL/SQL logic. As I mentioned in this post, parsing JSON using the native database JSON parser (i.e., JSON_TABLE and JSON_OBJECT_T) is orders of magnitude faster than parsing using APEX_JSON.
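For example, a POST handler can parse a hypothetical order payload with the native JSON_OBJECT_T API (:body_text is the implicit ORDS bind that exposes the request body as a CLOB; the table and attribute names are made up):

```sql
DECLARE
  l_payload JSON_OBJECT_T;
BEGIN
  -- Parse the request body with the native parser rather than APEX_JSON.
  l_payload := JSON_OBJECT_T.parse(:body_text);

  INSERT INTO orders (customer_id, order_total)
  VALUES (l_payload.get_Number('customer_id'),
          l_payload.get_Number('order_total'));
END;
```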
On the other side, GET services typically involve receiving some parameters and generating a JSON response. If your ORDS service is based on a SELECT statement, then there is nothing to do here as ORDS will generate the JSON for you and handle pagination, etc. If you need to use PL/SQL to build the JSON response (e.g., a composite service), then you should utilize native database functionality (e.g., JSON_OBJECT) to generate the JSON.
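Sticking with the shopping cart example (table and column names are hypothetical), a composite response can be assembled in a single SELECT with the native JSON functions:

```sql
-- Build the whole cart payload in one round trip to the data.
SELECT JSON_OBJECT(
         'cart_id'  VALUE c.cart_id,
         'customer' VALUE JSON_OBJECT(
                            'id'   VALUE cu.customer_id,
                            'name' VALUE cu.customer_name),
         'lines'    VALUE (SELECT JSON_ARRAYAGG(
                                    JSON_OBJECT(
                                      'item_id' VALUE l.item_id,
                                      'qty'     VALUE l.quantity))
                           FROM   cart_lines l
                           WHERE  l.cart_id = c.cart_id)) AS cart_json
FROM   carts c
JOIN   customers cu ON cu.customer_id = c.customer_id
WHERE  c.cart_id = :cart_id;
```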
The other influence you have concerning the response payload is its size. The larger your payload, the longer it takes to get back to the caller and the slower your web service appears. There are two things you can do to influence payload size in GET services:
You should offer parameters to allow the consumer to query just the data they need, thus slimming down the payload. If your service is based on a SELECT statement, then ORDS has you covered as it has its own query filtering syntax that includes advanced filtering conditions and even sorting. Remember, just because it is there doesn't mean people know it is there, so educate consumers of your web service on how they can filter and sort on your web service using this syntax.
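For example (the endpoint and column names are hypothetical), a consumer can filter and sort in one request with the q parameter; the JSON value should be URL-encoded in practice:

```
GET /ords/demo/orders/?q={"order_total":{"$gt":100},"$orderby":{"order_date":"DESC"}}
```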
If your web service is based on a PL/SQL procedure that generates a JSON response, then you will need to code the filtering and sorting logic yourself.
'Narrow' Data Sets
Sometimes, your consumers need a few key columns from the response instead of all of them. Returning five columns of data instead of 20 can significantly cut the payload's size. This ability for the consumer to select just the columns they want is one of the significant advantages of GraphQL. For ORDS, the only way to get close to this is to create multiple Handlers with different sets of columns, e.g., a light, medium, or heavy version.
Enter REST-enabled SQL. REST-enabled SQL allows you to send just about any SQL statement to the Oracle database via REST. If the SQL is a SELECT statement, you get a JSON response containing the result of the statement. REST-enabled SQL allows the consumer to select just the columns they want (just like GraphQL). It sounds like the perfect solution, but there are two problems:
- Consumers now need to know SQL and your data model. ORDS REST services insulate consumers from this complexity.
- You are opening up the entire REST-enabled schema to any SQL statement, including DELETE, UPDATE, etc. This may be OK if the schema is limited to a set of views you want consumers to access, but you need to be sure of this. ORDS REST services, on the other hand, allow you to apply credentials and roles on a service-by-service basis if necessary.
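For reference, a REST-enabled SQL call is just an authenticated POST of a SQL statement to the schema's /_/sql endpoint (the server name, schema, and credentials below are hypothetical):

```shell
# Send a SELECT to the REST-enabled SQL endpoint; the result comes back as JSON.
curl -i -X POST --user demo:secret \
  -H "Content-Type: application/sql" \
  -d "SELECT order_id, order_total FROM orders" \
  https://myserver.example.com/ords/demo/_/sql
```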
There is hope on the horizon. According to a tweet from Kris Rice in June 2022, GraphQL support is on the ORDS to-do list, which is pretty exciting. It goes without saying that this is not a guarantee.
Authentication
Most REST services should have some form of authentication. With ORDS, you can use Basic Authentication (username and password), OAuth2, or roll your own. I generally advise against Basic Authentication as it is less secure than OAuth2. Another reason to choose OAuth2 is that it is much faster than Basic Authentication; based on results from Jeff Smith's post, up to 3X faster. The other positive side effects of choosing OAuth2 are that your consumers will likely be familiar with it and be comforted by the fact that you are using an industry standard.
ORDS makes building REST services on your Oracle database easy, but that doesn't mean you can ignore performance. Spending a little extra effort when designing and building your REST services will pay dividends as more and more consumers use your web services.