<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Innovative Insights into AI, Oracle APEX, ORDS, Database and OCI]]></title><description><![CDATA[Posts about AI and delivering innovative enterprise-grade solutions with Cloud and Oracle Database technologies.]]></description><link>https://blog.cloudnueva.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1674092061429/gQLmQrS4z.png</url><title>Innovative Insights into AI, Oracle APEX, ORDS, Database and OCI</title><link>https://blog.cloudnueva.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 14:05:35 GMT</lastBuildDate><atom:link href="https://blog.cloudnueva.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[If English is the New Programming Language, then Markdown is the New Format]]></title><description><![CDATA[Introduction
AI is changing how we build software. We are moving from a world where developers primarily describe systems in code to one where we increasingly describe intent in natural language. Prom]]></description><link>https://blog.cloudnueva.com/markdown-is-the-new-format-for-ai</link><guid isPermaLink="true">https://blog.cloudnueva.com/markdown-is-the-new-format-for-ai</guid><category><![CDATA[orclapex]]></category><category><![CDATA[apex_lang]]></category><category><![CDATA[markdown]]></category><category><![CDATA[ai agents]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 16 Apr 2026 11:57:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/a942bbdd-d5da-4392-a8b5-01f0d0c0c4fe.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Introduction</h1>
<p>AI is changing how we build software. We are moving from a world where developers primarily describe systems in code to one where we increasingly describe intent in natural language. Prompts, instructions, specifications, and structured text are becoming part of the development process itself. In that sense, English is becoming a specification interface for software generation.</p>
<blockquote>
<p>But English alone is not enough.</p>
</blockquote>
<p>Natural language is flexible, expressive, and easy for humans. It is also messy. It drifts. It is inconsistent. It leaves room for interpretation. That is fine for conversation. It is less fine when you want an AI system to reliably generate an application, a page definition, a requirements document, or a presentation.</p>
<blockquote>
<p>That is why Markdown matters.</p>
</blockquote>
<p>If English is becoming the language of AI-driven development, Markdown is becoming one of the most practical formats for making that language usable. It gives natural language just enough structure to be repeatable, parsable, lightweight, and machine-readable without turning it back into code.</p>
<p>For APEX developers, this matters more than most people realize. APEX has always been about metadata, declarative development, and reducing friction between business intent and working software. APEXlang appears to push that same idea further. Instead of hand-building every artifact, we will increasingly define applications, pages, workflows, and requirements in structured natural language and let AI turn those definitions into implementation.</p>
<p>That is why Markdown is such a strong fit for the AI era, and especially for where APEX appears to be heading.</p>
<h1>Markdown hits the sweet spot</h1>
<blockquote>
<p>Markdown is powerful because it is simple.</p>
</blockquote>
<p>A heading is a heading. A list is a list. A table is a table. A code block is a code block. You can read it as plain text, write it quickly, version it easily, and transform it into other formats without carrying the overhead of a heavyweight document format.</p>
<blockquote>
<p>That makes Markdown ideal for AI.</p>
</blockquote>
<p>Large language models work best when the input is mostly meaning rather than formatting noise. Markdown preserves structure, but it does not bury the meaning inside layers of layout instructions, visual positioning, embedded objects, theme metadata, and export artifacts. The model sees the content clearly.</p>
<p>This is where Markdown has a big advantage over Word documents, slide decks, and PDFs. Those formats were designed primarily for human consumption and visual rendering. Markdown is much closer to an authoring format for both humans and machines.</p>
<p>For APEX, this is especially interesting because so much of what we build already begins as semi-structured intent: application descriptions, page definitions, data requirements, business rules, acceptance criteria, UX notes, and workflow descriptions.</p>
<p>Traditionally, those things are scattered across Word docs, slides, emails, tickets, and whiteboards. In an AI-driven workflow, that fragmentation becomes a real problem. AI works better when the source material is clean, consistent, and structured.</p>
<p>Markdown gives you that structure without forcing you into a rigid syntax that business users or developers will resist.</p>
<h2>Structure</h2>
<p>Large language models do not benefit from Markdown just because it removes formatting noise. They also benefit from the predictable hierarchy Markdown provides. Headings define topic boundaries, sections group related ideas, nested lists show parent-child relationships, tables make structured comparisons explicit, and code fences clearly separate executable or literal content from prose. That consistent structure makes the content easier for a model to parse, chunk, and reason over. In practice, Markdown works well because it preserves meaning in a form that is both human-readable and machine-readable.</p>
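<p>A tiny, made-up fragment illustrates the structural cues described above; the headings, table values, and fenced query are invented purely for illustration:</p>
<pre><code class="language-markdown"># Expense Tracking

## Business Rules

- Mileage is reimbursed per kilometre
  - Rates vary by jurisdiction
- Receipts are required above the approval threshold

## Statuses

| Status    | Meaning                      |
|-----------|------------------------------|
| DRAFT     | Being edited by the employee |
| SUBMITTED | Waiting for manager approval |

```sql
-- Fenced blocks keep literal content separate from the prose
SELECT status, COUNT(*) FROM expense_reports GROUP BY status;
```
</code></pre>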
<h1>APEXlang &amp; Blueprints</h1>
<p>APEXlang (coming in APEX 26.1) will be the new syntax for APEX. At APEX World this year (and also mentioned in the APEX <a href="https://apex.oracle.com/en/learn/resources/roadmap/">statement of direction</a>), we learned a little about another aspect of APEXlang called Blueprints.</p>
<p>Based on what was shown and what Oracle has signaled publicly, Blueprints are a move toward more structured, specification-driven app generation. Blueprints will likely depend on a defined Markdown structure or syntax. The fact that APEX Blueprints can be created in Markdown should mean they are both human and machine-readable.</p>
<h2>Getting off to a fast start</h2>
<p>Based on the information available, I assume Blueprints will help you accelerate version 1 of your app, and then you can iterate from there. Iteration on top of the initial build would then happen in APEX Builder, VS Code, or APEXlang.</p>
<blockquote>
<p>A business analyst writes a Blueprint spec in Markdown. This is converted to a first draft of an APEX app, SQL objects, validations, and test cases. The developer reviews and refines. The Markdown spec remains the source of intent.</p>
</blockquote>
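<p>Oracle has not published the Blueprint syntax yet, so treat the following as a purely speculative sketch of what a Markdown spec in that spirit might look like. The section names and fields are my own invention, not APEXlang or Blueprint syntax:</p>
<pre><code class="language-markdown"># Blueprint: Expense Reports

## Entities

- Expense Report: employee, status, submission date, total amount
- Expense Line: category, amount, tax, receipt attached (Y/N)

## Rules

- A report must contain at least one line before it can be submitted
- Mileage lines are calculated from distance and the jurisdiction rate

## Pages

- Report list, filtered to the signed-in employee
- Report entry form with an editable grid of expense lines

## Acceptance Criteria

- Submitting a report routes it to the employee's manager for approval
</code></pre>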
<h2>Benefits</h2>
<p>This approach has several benefits:</p>
<ul>
<li><p>A blueprint-driven approach has the potential to be more deterministic than unconstrained AI generation.</p>
</li>
<li><p>It could shift more early-stage specification work toward analysts and product owners.</p>
</li>
<li><p>You can use version control and diffs on Blueprints as you evolve the first version of your app.</p>
</li>
</ul>
<h1>A Practical Example: Marp</h1>
<p>A tangible example of this approach (which I recently started using) is the Markdown Presentation Ecosystem, or <a href="https://marp.app/">Marp</a>. Marp is an open-source ecosystem that lets you write slide decks in Markdown and turn them into presentation-ready output. It includes authoring tools and a CLI that can export decks to HTML, PDF, and PowerPoint (PPTX). As with APEX Blueprints, you write the content, and the tooling handles the conversion.</p>
<blockquote>
<p>Building presentations in Markdown allows you to focus completely on the content of your presentation rather than the format.</p>
</blockquote>
<h2>Using Marp</h2>
<p>Using Marp is straightforward. You write a normal Markdown file, and each slide is separated by a horizontal rule (<code>---</code>). That means a deck is just a sequence of Markdown sections. You can then add Marp front matter and directives for things like theme selection, pagination, background images, layout tweaks, and presenter-friendly formatting. The official ecosystem includes the Marp CLI for converting Markdown files from the command line.</p>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/4c1dd8bd-6c73-414b-92b7-e71e42c532b4.png" alt="How Marp Works" style="display:block;margin:0 auto" />

<div>
<div>🚀</div>
<div>The fact that Marp has a CLI means you can integrate it with your LLM via <a target="_blank" rel="noopener noreferrer" href="https://agentskills.io/home">agent skills</a>. This allows you to generate professional presentations from a prompt!</div>
</div>

<h3>Simple Example</h3>
<pre><code class="language-markdown">---
marp: true
theme: default
paginate: true
---

# My Presentation

A slide written in Markdown.

---

## Second Slide

- Bullet one
- Bullet two
- Bullet three
</code></pre>
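<p>Directives extend the simple example above without leaving Markdown. The snippet below is a small sketch based on Marp's documented features (per-slide directives, split background images, and HTML comments as presenter notes); the footer text and image path are placeholders I made up for illustration:</p>
<pre><code class="language-markdown">---
marp: true
theme: default
paginate: true
footer: "Cloud Nueva | Q2 Review"
---

&lt;!-- _paginate: false --&gt;

# Title Slide

![bg right:40%](images/cover.png)

---

## Architecture

&lt;!-- Non-directive comments like this one become presenter notes. --&gt;

- Point one
- Point two
</code></pre>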
<h3>Generate Output</h3>
<p>Once you have your Markdown, you can convert it to PDF, HTML, or PPTX from the command line.</p>
<pre><code class="language-shell"># Generate an HTML presentation using a custom CSS style
marp --theme nueva.css AI_Functions_Presentation.md -o AI_Functions_Presentation.html

# Generate a PDF presentation using a custom CSS style
marp --theme nueva.css AI_Functions_Presentation.md -o AI_Functions_Presentation.pdf

# Generate a PPTX presentation using a custom CSS style
marp --theme nueva.css AI_Functions_Presentation.md -o AI_Functions_Presentation.pptx
</code></pre>
<h1>Markdown Saves on Tokens</h1>
<p>The unit of measure of AI is tokens. Tokens are the small chunks of text that an AI model reads and generates, such as words, parts of words, punctuation, or symbols. They are the basic units of input and output, so token count affects cost, speed, and the amount of context the model can handle at once.</p>
<blockquote>
<p>The fewer tokens you use, the less your AI costs and the faster it runs.</p>
</blockquote>
<p>To test this theory, I used the Codex CLI to build a deck in Markdown and another in PPTX format. I ran both scenarios using the <code>gpt-5.4-mini</code> model with <code>high</code> reasoning.</p>
<details>
<summary>Markdown/Marp Prompt</summary>
<p>Create a 10-slide presentation in valid Marp markdown.</p><p>Topic: Quarterly AI Product Strategy Review Audience: senior leadership Style: concise, analytical, executive-ready</p><p>Requirements:</p><ul><li><p>Output markdown only</p></li><li><p>Use <code>---</code> between slides</p></li><li><p>Include: title, agenda, 3 market slides, 2 product slides, 1 architecture slide, 1 roadmap slide, 1 risks slide, 1 summary slide</p></li><li><p>Use short bullets, not paragraphs</p></li><li><p>Use markdown tables where helpful</p></li><li><p>Add speaker notes for the architecture and roadmap slides</p></li><li><p>Include footer text: Cloud Nueva | Q2 Review</p></li></ul>
</details><details>
<summary>PPTX Prompt</summary>
<p>Create a 10-slide presentation as an actual PPTX file, not markdown, not HTML, and not JSON.</p><p>Topic: Quarterly AI Product Strategy Review Audience: senior leadership Style: concise, analytical, executive-ready</p><p>Requirements:</p><ul><li><p>Generate the presentation in .pptx format as part of the process</p></li><li><p>Include exactly 10 slides:</p><ol><li><p>Title</p></li><li><p>Agenda</p></li><li><p>Market Trends</p></li><li><p>Competitive Landscape</p></li><li><p>Customer Demand Signals</p></li><li><p>Product Priorities</p></li><li><p>Product Gaps and Risks</p></li><li><p>Architecture Overview</p></li><li><p>Roadmap</p></li><li><p>Summary</p></li></ol></li><li><p>Use short bullets, not paragraphs</p></li><li><p>Use a professional business theme</p></li><li><p>Add footer text on each slide: Cloud Nueva | Q2 Review</p></li><li><p>Add speaker notes for the Architecture Overview and Roadmap slides</p></li><li><p>Include at least one comparison table where appropriate</p></li><li><p>Keep wording consistent across slides</p></li><li><p>Return only the content needed to produce the PPTX file and complete the PPTX generation workflow</p></li></ul>
</details>

<h3>Results / Token Usage</h3>
<table>
<thead>
<tr>
<th>Format</th>
<th>Input Tokens</th>
<th>Output Tokens</th>
</tr>
</thead>
<tbody><tr>
<td>Markdown</td>
<td>35.4K</td>
<td>1.73K</td>
</tr>
<tr>
<td>PPTX</td>
<td>293K</td>
<td>13.8K</td>
</tr>
</tbody></table>
<blockquote>
<p>The token savings are significant.</p>
</blockquote>
<p>Now, let's see what happens to token usage when we summarize the outputs from the above...</p>
<details>
<summary>Marp Prompt</summary>
<p>Summarize this Marp markdown presentation deck.</p><p>Requirements:</p><ul><li><p>Read the full deck, slide by slide</p></li><li><p>Produce a concise executive summary</p></li><li><p>Include:</p><ul><li><p>the main thesis of the deck</p></li><li><p>the key business priorities</p></li><li><p>the major risks or constraints</p></li><li><p>the roadmap or next-step themes</p></li></ul></li><li><p>Then provide a slide-by-slide summary with 1 to 2 sentences per slide</p></li><li><p>Preserve the terminology used in the deck</p></li><li><p>Do not rewrite the deck</p></li><li><p>Do not comment on formatting unless it affects meaning</p></li></ul>
</details><details>
<summary>PPTX Prompt</summary>
<p>Summarize this PowerPoint presentation deck.</p><p>Requirements:</p><ul><li><p>Read the full deck, slide by slide, including titles, bullets, tables, and speaker notes if present</p></li><li><p>Produce a concise executive summary</p></li><li><p>Include:</p><ul><li><p>the main thesis of the deck</p></li><li><p>the key business priorities</p></li><li><p>the major risks or constraints</p></li><li><p>the roadmap or next-step themes</p></li></ul></li><li><p>Then provide a slide-by-slide summary with 1 to 2 sentences per slide</p></li><li><p>Preserve the terminology used in the deck</p></li><li><p>Do not rewrite the deck</p></li><li><p>Do not comment on visual design unless it affects meaning</p></li></ul>
</details>

<h3>Results / Token Usage</h3>
<table>
<thead>
<tr>
<th>Format</th>
<th>Input Tokens</th>
<th>Output Tokens</th>
</tr>
</thead>
<tbody><tr>
<td>Markdown</td>
<td>110K</td>
<td>1.99K</td>
</tr>
<tr>
<td>PPTX</td>
<td>232K</td>
<td>4.87K</td>
</tr>
</tbody></table>
<blockquote>
<p>Again, this test showed a dramatic reduction in token usage.</p>
</blockquote>
<p><strong>Note</strong>: Much of the additional token usage likely comes from the extra processing needed to extract usable structure and text from the binary PPTX format.</p>
<h1>Markdown for Specifications</h1>
<p>A useful Markdown specification does more than describe an idea at a high level. It should define the feature's purpose, business context, data involved, required behavior, constraints, and acceptance criteria. In practice, that means clearly naming entities, inputs, outputs, rules, edge cases, assumptions, and non-functional requirements where they matter. The goal is to remove ambiguity without making the document heavy or unreadable. A good Markdown spec gives both humans and AI a structured source of intent that can be reviewed, versioned, and turned into implementation with less guesswork.</p>
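<p>As a minimal illustration, a feature spec might follow a skeleton like the one below. The section names are just one possible convention, not a standard, and the rules shown are invented:</p>
<pre><code class="language-markdown"># Feature: Expense Approval

## Purpose

Route submitted expense reports to the correct approver.

## Data

- EXPENSE_REPORTS: status, employee_id, total_amount
- APPROVAL_HISTORY: action, actor, timestamp

## Behavior

- Reports over the approval threshold require a second-level approval
- Rejected reports return to DRAFT with a required comment

## Assumptions

- The approver hierarchy comes from the HR system

## Acceptance Criteria

- An approver can approve or reject from their task list
- Every decision is recorded in APPROVAL_HISTORY
</code></pre>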
<p>I have <a href="https://blog.cloudnueva.com/avoiding-the-vibe-coding-rabbit-hole">written before</a> about providing AI with detailed specifications to improve AI outcomes. These specifications should be written in Markdown to allow the AI to focus on intent rather than formatting.</p>
<div>
<div>💡</div>
<div>I believe humans can also benefit from focusing on intent and not formatting!</div>
</div>

<h1>Markdown for Agents</h1>
<p>Markdown is also emerging as a practical format for presenting content to AI agents. Although HTML is well-structured, it is bulky and includes tags and formatting that add noise for agent workflows. Markdown offers a cleaner interchange format when the goal is to expose content rather than presentation.</p>
<p>Cloudflare is at the forefront of this transition. You can read more in their <a href="https://blog.cloudflare.com/markdown-for-agents/">blog post</a> on the subject.</p>
<h1>Caution</h1>
<h2>Markdown is not enough on its own</h2>
<p>Markdown is useful because it adds structure without adding much friction. But on its own, it is still just text. If you want reliable AI output, Markdown usually requires conventions.</p>
<p>That may include standard section headings, front matter, templates, naming rules, required fields, examples, and acceptance criteria. Without that extra discipline, two Markdown documents about the same thing can still vary wildly in quality and completeness.</p>
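<p>Front matter is one lightweight way to impose that discipline. As a hedged example, a team might require a header like this at the top of every spec; the field names are illustrative, not a standard:</p>
<pre><code class="language-markdown">---
title: Expense Approval
status: draft            # draft | reviewed | approved
owner: jane.doe
application: XX_EXPENSES
required_sections: [Purpose, Data, Behavior, Acceptance Criteria]
---
</code></pre>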
<p>In other words, Markdown is not the full solution. It is the foundation. The real value comes when teams combine Markdown with consistent patterns that make intent easier for both humans and AI to interpret.</p>
<h2>Where Markdown breaks down</h2>
<p>Markdown works best when the goal is to capture meaning, structure, and intent. It works less well when the output depends heavily on precise visual layout or rich interaction.</p>
<p>For example, Markdown is not a great fit for pixel-perfect UI design, complex diagrams, drag-and-drop experiences, or documents that rely on detailed formatting and review features such as tracked changes. It can describe those things, but it cannot fully replace the tools built for them.</p>
<p>That is the tradeoff. Markdown is an excellent lightweight source format, but not every artifact should remain in Markdown forever. In many cases, it is most valuable at the intent stage, before being transformed into something more specialized.</p>
<h1>Conclusion</h1>
<p>Markdown matters because it separates intent from presentation. It gives natural language enough structure to be reused, versioned, reviewed, and processed reliably by AI.</p>
<p>That makes it useful well beyond note-taking. It is a strong format for specifications, blueprints, prompts, presentations, and agent-facing content. Not because it is perfect, but because it is simple, structured, and efficient.</p>
<p>For APEX developers, that matters. APEX has always been about turning metadata and intent into working software. As AI and APEXlang push that model further, Markdown looks like a practical way to define what we want before tools generate what we build.</p>
]]></content:encoded></item><item><title><![CDATA[From Spreadsheet to Enterprise APEX System with AI]]></title><description><![CDATA[Introduction
This is not another post about creating an APEX app from a spreadsheet using the Create app Wizard. This post is about using AI to design and accelerate the build of an enterprise APEX sy]]></description><link>https://blog.cloudnueva.com/from-spreadsheet-to-enterprise-apex-system-with-ai</link><guid isPermaLink="true">https://blog.cloudnueva.com/from-spreadsheet-to-enterprise-apex-system-with-ai</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Sat, 11 Apr 2026 02:41:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/6512a378-9d6e-45cd-b133-3a60e2c40ace.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Introduction</h1>
<p>This is not another post about creating an APEX app from a spreadsheet using the Create app Wizard. This post is about using AI to design and accelerate the build of an enterprise APEX system from a spreadsheet.</p>
<h1>Background</h1>
<p>A client recently asked me to build an APEX system to replace their existing spreadsheet-based expenses system. The goals were clear:</p>
<ul>
<li><p>Reduce the time from expense submission to payment</p>
</li>
<li><p>Improve the accuracy of taxes calculated for expenses</p>
</li>
<li><p>Allow management to track expenses</p>
</li>
<li><p>Reduce errors caused by manually entering AP invoices into Oracle E-Business Suite</p>
</li>
<li><p>Improve anomaly detection</p>
</li>
<li><p>Improve user experience</p>
</li>
</ul>
<p>The company’s expense rules were already embedded in the Excel template. The template included formulas for mileage reimbursement, Canadian tax calculations, and finance summaries used to enter AP invoices into Oracle EBS.</p>
<h1>An AI First Approach</h1>
<p>One of my goals for 2026 is to adopt an AI-first approach. It will not always work, but using it as the default starting point is helping me understand its limitations and get much more out of it.</p>
<blockquote>
<p>One word of caution. Just because AI cannot do something well today doesn't mean it won't be able to when the next frontier model is released. It is important that we regularly re-evaluate our perceptions of what AI is capable of.</p>
</blockquote>
<h1>Architecture</h1>
<p>At this stage, it is worth describing the client's APEX environment. They have an on-premises APEX instance running on the Oracle E-Business Suite (EBS) instance. This sits behind a firewall, accessible only via VPN. They also have an OCI APEX Service instance running externally facing APEX apps. The plan was to have expense report entry and approval run in the OCI instance, and then pull approved expense reports into the on-premises EBS instance for review and payment in Accounts Payable by finance.</p>
<ul>
<li><p>OCI owns the approval state of the expense reports</p>
</li>
<li><p>EBS owns the payment state of the expense reports</p>
</li>
<li><p>SharePoint owns receipt attachments</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/a12784d7-d4d1-428b-835a-23c3e23e9c70.png" alt="Oracle APEX Expenses System built with AI" style="display:block;margin:0 auto" />

<h1>Design</h1>
<h2>Business Rule Extraction</h2>
<p>The legacy expense report Excel template's tabs, tables, fields, and formulas essentially contained all the business rules. I started the design phase by asking Codex to analyze the Excel template and draft a Product Requirements Document (PRD) based on it. Codex produced a four-page Markdown PRD covering the business rules, entities, fields, data types, and lists of values.</p>
<h2>Verification</h2>
<p>I took the PRD and asked Codex to review it against business best practices and current Canadian tax rules. It augmented the design with rules not already in their spreadsheet. For example, Codex suggested additional mileage-rule considerations that were not explicit in the spreadsheet, which we then validated against current Canadian guidance and the client’s reimbursement policy.</p>
<h2>User Interface</h2>
<p>I then asked Codex to create graphical wireframes of the APEX pages needed for the solution. I attached several screenshots from existing APEX apps so it could match the corporate look and feel. Codex created the wireframes using real data from an old expense report Excel file. I incorporated the wireframes into the design document.</p>
<h2>Business Review</h2>
<p>I then reviewed the PRD with business users to get their feedback. Being able to present the business rules in a clearly laid-out document (instead of embedded in Excel formulas), and to show them what the new app was going to look like, made a significant difference. We made some minor updates to the design based on feedback from this review.</p>
<blockquote>
<p>At this stage, we had an approved design document and a clear path forward after only 10 hours of effort.</p>
</blockquote>
<h1>Build</h1>
<p>Using Codex, I attached the approved design document and provided an extensive prompt detailing what I wanted it to do. I split this into two prompts: one for the expense entry side of the app (running on OCI) and a separate one for the on-premises side. Each prompt requested:</p>
<ul>
<li><p>A comprehensive data model.</p>
</li>
<li><p>Secure views and views that abstract table-join complexity from APEX.</p>
</li>
<li><p>PL/SQL utility packages with APIs to manage email generation, REST integrations, workflow functions, and SharePoint attachments.</p>
</li>
<li><p>ORDS APIs to allow the on-premises EBS environment to fetch approved expenses for payment, and to post back to let employees know when their expenses have been paid.</p>
</li>
</ul>
<p>Along with the prompt, I included:</p>
<ul>
<li><p>Sample tables from previous apps to teach the model the table creation standards.</p>
</li>
<li><p>PL/SQL code from previous apps that had APEX workflow approvals and that used our common SharePoint attachments package.</p>
</li>
<li><p>An <code>AGENTS.md</code> file to provide product versions, coding standards, formatting standards, etc. (a minimal sketch of this file follows this list).</p>
</li>
</ul>
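<p>For reference, a minimal <code>AGENTS.md</code> along these lines might look like the sketch below. The versions, prefixes, and standards shown are placeholders, not the client's actual values:</p>
<pre><code class="language-markdown"># AGENTS.md

## Product Versions

- Database: Oracle 19c (on-premises), Autonomous Database (OCI)
- APEX: 24.2
- ORDS: 24.x

## Coding Standards

- Tables follow the naming and audit-column pattern in the attached samples
- Business logic lives in PL/SQL packages, not in page processes

## Formatting

- Uppercase SQL keywords, lowercase identifiers, two-space indentation
</code></pre>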
<p>The outcome was:</p>
<ul>
<li><p>A roughly 90% complete data model with foreign keys, constraints, appropriate data types (and sizes), and comments.</p>
</li>
<li><p>Scripts to load the current tax and mileage rates from the Excel template into the new tables.</p>
</li>
<li><p>Abstraction of Canadian Provinces and Territories into Jurisdictions applicable to mileage and tax rates, which is something I would not have thought of.</p>
</li>
<li><p>Cloud and On-Premises PL/SQL packages with the helper procedures and functions that I requested.</p>
</li>
<li><p>A SQL script to create an ORDS OAuth2 Credential, ORDS module, privilege, templates and handlers.</p>
</li>
<li><p>Twenty unit test scripts to test both sides of the app.</p>
</li>
</ul>
<p>There were a few issues which were resolved with another hour or so of follow-up prompts and clarifications.</p>
<h2>APEX</h2>
<p>All that was left was to build the APEX app. This part was less fun because, at the time of writing in March 2026, <code>APEXlang</code> was not yet available. Frankly, it took longer to build the APEX app than to create all of the other artifacts up to this point.</p>
<p>Overall, I would estimate that what would normally have been an 80-hour project was reduced to about 40 developer hours with the help of AI. We will have to see how much lower this can go when <code>APEXlang</code> comes along.</p>
<h1>Keys to Success</h1>
<p>The following points were key to the success of this project:</p>
<ul>
<li><p>90% of the business rules were explicitly baked into the Expense Report Excel. This made it easy for the AI to extract the rules and for us to verify it had done so accurately.</p>
</li>
<li><p>Presenting the business with a PRD within a few days of starting the project (with realistic wireframes) inspired confidence that we were heading down the right path with a minimal investment of time.</p>
</li>
<li><p>Splitting the build phase between on-premises and OCI allowed the model to focus on each build separately and reduced the risk of confusion between the different database versions, APEX versions, and EBS-specific coding standards.</p>
</li>
<li><p>Models respond really well to examples. Pointing AI to a package and saying "create a procedure to do X that follows the same pattern as procedure Y from another package" works very well.</p>
</li>
<li><p>Being specific about the output you expect is also key. For example, you need to specifically request test scripts, and request that they include edge cases as well as happy path tests. Combine this with SQLcl and its MCP server, and you can prompt the AI to run the test suite after every change.</p>
</li>
</ul>
<h1>Conclusion</h1>
<p>AI did not build this system on its own, but it removed a large amount of the slow, repetitive work at the start of the project. The spreadsheet already contained most of the business rules. AI helped extract those rules, turn them into a usable design, and generate much of the database and PL/SQL foundation.</p>
<p>The real value was speed and clarity. We were able to review a proper design with the business early, make corrections before build started, and cut the overall development effort significantly. The parts that still needed the most hands-on work were the APEX application itself, validation of the generated output, and the final handling of edge cases.</p>
<p>For this type of project, AI worked best as a force multiplier, not as a replacement for experience. It was useful because the source material was structured, the prompts were specific, and every output was reviewed before being used.</p>
]]></content:encoded></item><item><title><![CDATA[AI SKILLS as a Thin Layer Over MCP Tools]]></title><description><![CDATA[Introduction
I have been experimenting with using AI Skills as a thin layer on top of MCP-backed tools, and I think this pattern is more useful than it first appears.
At a technical level, MCP gives t]]></description><link>https://blog.cloudnueva.com/ai-skills-as-a-thin-layer-over-mcp-tools</link><guid isPermaLink="true">https://blog.cloudnueva.com/ai-skills-as-a-thin-layer-over-mcp-tools</guid><category><![CDATA[ords]]></category><category><![CDATA[skills]]></category><category><![CDATA[mcp]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 02 Apr 2026 11:49:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/4f7227df-25db-4377-8300-df1a1a1e7f48.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Introduction</h1>
<p>I have been experimenting with using AI Skills as a thin layer on top of MCP-backed tools, and I think this pattern is more useful than it first appears.</p>
<p>At a technical level, MCP gives the model standardized access to external tools and context. That is valuable, but raw tool access is not always enough. A model may know that a tool exists, but still needs guidance on when to use it, how to use it, what the tool is for, and what “good usage” looks like in the context of a specific prompt.</p>
<p>That is where I am finding Skills useful.</p>
<p>Rather than thinking of a Skill as replacing an MCP server, I think of it as a focused instructional layer on top of one or more MCP tools. The Skill captures intent, usage conventions, and domain-specific behavior. In practice, that makes tool use more reliable and reduces the amount of prompting I need to do each time.</p>
<div>
<div>💡</div>
<div>In Codex, you can invoke a Skill explicitly using <code>$skill_name</code>. MCP servers do not provide that same kind of direct user-facing invocation.</div>
</div>

<h2><strong>An example with Oracle ORDS</strong></h2>
<p>To make this more practical, I built a small STDIO MCP server that exposes an Oracle ORDS REST web service on a table called <code>JD_SB_ENTRIES</code>. This table stores records in a second brain app. By “second brain,” I mean the usual personal knowledge tasks: capturing notes, storing ideas, tracking follow-ups, organizing knowledge, and retrieving things later in a structured way.</p>
<p>The ORDS side is straightforward. I registered a template for the table with handlers for <code>GET</code>, <code>POST</code>, <code>PUT</code>, and <code>DELETE</code> that map to CRUD database operations. I secured the ORDS module containing the template using an OAuth 2.0 client.</p>
<div>
<div>💡</div>
<div>You could also use <code>ORDS.ENABLE_OBJECT</code> to <a target="_blank" rel="noopener noreferrer" href="https://www.thatjeffsmith.com/archive/2017/03/auto-rest-with-ords-an-overview-and-whats-next/">Auto-REST</a> enable the <code>JD_SB_ENTRIES</code> table. This generates the entire CRUD API instantly, allowing you to focus entirely on the MCP/Skill interaction layer rather than writing PL/SQL backend handlers.</div>
</div>

<p>I built an STDIO MCP Server in Python using the <a href="https://chatgpt.com/codex">Codex desktop app</a>. STDIO MCP servers run locally on your machine. The MCP server then exposes the REST APIs to the model through a tool interface.</p>
<p>The model can use the MCP tool to create, read, update, and delete rows through ORDS, without needing to know the low-level details of the HTTP call each time.</p>
<p>It works.</p>
<p>But I soon found that getting the MCP server to act on my prompts was erratic (at best). I also have Office 365 linked to my Codex desktop app setup, so the model would often choose Microsoft Planner over my tool. It would also conflict with Office 365 Calendar. It's understandable, really.</p>
<blockquote>
<p>A request like "add a task for tomorrow to clean the car" would make sense for MS Outlook just as much as for my second brain.</p>
</blockquote>
<p>This confusion from the LLM occurs despite clear instructions on how to use the tool in the MCP server's services definition.</p>
<details>
<summary>YAML for the MCP Server Services</summary>
<pre class="not-prose"><code class="language-yaml">oauth:
  token_url: https://example.adb.us-chicago-1.oraclecloudapps.com/ords/demo/oauth/token
  scopes: ""

<p>services:</p>
<ul>
<li>id: jd_sb_entries
name: JD Second Brain Tasks, Notes, and Reminders
base_url: <a href="https://example.adb.us-chicago-1.oraclecloudapps.com/ords/demo/mcp/">https://example.adb.us-chicago-1.oraclecloudapps.com/ords/demo/mcp/</a>
description: "Manage second-brain entries from natural user requests. Use this service when the user wants to add, create, save, list, review, update, or delete notes, tasks, ideas, knowledge entries, or reminder-style entries with a due date. Create new entries at jd_sb_entries and update or delete existing entries at jd_sb_entries/{entry_id}. This stores reminders as second-brain tasks or notes; it does not create real Planner or calendar reminders. Infer the correct action from conversational requests whenever possible."
default_headers:
  Accept: application/json
timeout_seconds: 30
pagination:
  default_page_size: 100
  max_page_size: 250
  max_pages_per_call: 10
  max_items_per_call: 500
examples:<ul>
<li>"Use rest_mcp_server to add a todo for tomorrow: clean car."</li>
<li>Add a new task reminding me to review the ORDS spec tomorrow.</li>
<li>Save a reminder for tomorrow to review the ORDS spec.</li>
<li>Create a note about MCP server pagination and save the full details.</li>
<li>"Add this to my second brain: review the ORDS spec tomorrow."</li>
<li>Show me my second-brain entries.</li>
<li>Update entry 123 to mark it high urgency.</li>
<li>Delete entry 456.
columns:</li>
<li>name: entry_id
data_type: NUMBER
nullable: false
writable: false
description: Primary key identity column.</li>
<li>name: subject
data_type: VARCHAR2(255)
nullable: false
writable: true
description: Short subject line.</li>
<li>name: entry_type
data_type: VARCHAR2(30)
nullable: false
writable: true
description: Entry classification.
enum_values:<ul>
<li>IDEA</li>
<li>TASK</li>
<li>NOTE</li>
<li>KNOWLEDGE</li>
</ul>
</li>
<li>name: ai_summary
data_type: VARCHAR2(32767)
nullable: false
writable: true
description: AI-generated summary.</li>
<li>name: user_content
data_type: CLOB
nullable: false
writable: true
description: Full entry body.</li>
<li>name: urgency
data_type: VARCHAR2(30)
nullable: true
writable: true
description: Optional urgency.
enum_values:<ul>
<li>LOW</li>
<li>MEDIUM</li>
<li>HIGH</li>
</ul>
</li>
<li>name: action_required
data_type: VARCHAR2(1)
nullable: false
writable: true
description: Whether action is required.
enum_values:<ul>
<li>Y</li>
<li>N</li>
</ul>
</li>
<li>name: due_date
data_type: DATE
nullable: true
writable: true
description: Optional due date in YYYY-MM-DD format.</li></ul></li></ul></code></pre>



</details>

<p>The model still needs to understand what the API represents in business terms, how it should behave when used, and which requests should trigger a call. A generic CRUD interface is flexible, but also vague.</p>
<div>
<div>💡</div>
<div>One enhancement I thought of was to reference the OpenAPI/Swagger endpoint that ORDS makes available in the <code>SKILL.md</code> file. This makes the skill more resilient to changes in the API.</div>
</div>

<h2><strong>Using a Skill for a second brain workflow</strong></h2>
<p>To counter this vagueness, I decided to create a <a href="https://agentskills.io/home">skill</a> focused specifically on the second brain ORDS API.</p>
<blockquote>
<p>Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently.</p>
</blockquote>
<p>This turned out to be more useful than I expected.</p>
<p>The Skill did three important things.</p>
<h3><strong>1. It guided the use of the REST API</strong></h3>
<p>The MCP tool exposed the API's mechanics. The Skill explained how to use it.</p>
<p>That distinction matters.</p>
<p>The tool knew how to call the endpoint. The Skill told the model when to create a note, when to update an existing item rather than insert a new one, which fields mattered, and how to interpret user requests in the context of a second brain.</p>
<p>Without that layer, the model has to infer too much from the tool signature and endpoint description. Sometimes that works. Sometimes it does not. The more domain-specific the workflow becomes, the more that gap shows up.</p>
<p>In practice, the Skill reduced a lot of that ambiguity.</p>
<h3><strong>2. It documented second brain functionality</strong></h3>
<p>The Skill also became a compact form of documentation.</p>
<p>Instead of only documenting the REST API as a technical interface, the Skill documented the behavior around the API. It explained what the second brain supports, the kinds of operations it is intended for, and the conventions the model should follow.</p>
<p>That is useful for the model and for me.</p>
<p>It gave me a single place to describe the intended workflow in practical terms rather than just API terms. In other words, it documented capability, not just transport.</p>
<p>I think this is an underrated part of Skills. They are not only prompt helpers. They can also serve as executable documentation for an AI-facing workflow.</p>
<h3><strong>3. It allowed explicit invocation with $skill</strong></h3>
<p>This was the third benefit, and in some ways, the most practical.</p>
<p>Because the behavior was packaged as a Skill, I could explicitly invoke it with $skill_name.</p>
<p>That gave me a clean way to direct the model toward a very specific behavior package. I was not just hoping the model would choose the right MCP tool based on a vague request. I could point it at the exact Skill that I knew would work with that second brain API.</p>
<p>That explicit invocation made the interaction more predictable.</p>
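<p>In practice, invocation is as simple as prefixing the request with the Skill name. The prompts below are illustrative examples of the pattern rather than a fixed syntax:</p>
<pre><code class="language-markdown">$second-brain add a task for tomorrow: clean the car
$second-brain show me my active tasks, sorted by due date
$second-brain save a note about MCP server pagination
</code></pre>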
<details>
<summary>SKILL.md</summary>
<pre class="not-prose"><code class="language-markdown">---
name: "second-brain"
description: "Use when the user wants to add, update, list, or delete second-brain notes, tasks, ideas, knowledge items, or reminder-style entries through the local rest-mcp server. Prefer this skill when the user explicitly says $second-brain."
---

<h1>Second Brain</h1>
<p>Use this skill for second-brain CRUD work through the local <code>rest-mcp</code> MCP server.</p>
<h2>Core Rules</h2>
<ul>
<li>Use <code>service_id: "jd_sb_entries"</code>.</li>
<li>Use <code>path: "jd_sb_entries"</code> for create and list. Use <code>path: "jd_sb_entries/{entry_id}"</code> for a specific row.</li>
<li>For filtered <code>GET</code> requests, use ORDS <code>q</code> filter syntax, not ad hoc column query params.</li>
<li>Preferred form: pass <code>query</code> as a native object and pass <code>query.q</code> as a native object. The MCP server will JSON-encode <code>q</code>.</li>
<li>Accepted alternate form: pass <code>query</code> as a raw query string such as <code>q={"entry_type":{"$eq":"TASK"}}&amp;amp;limit=25</code>.</li>
<li>Do not send second-brain filters as top-level keys like <code>"entry_type": "TASK"</code> unless the service explicitly documents that parameter.</li>
<li>For structured arguments, verify <code>body</code> and <code>headers</code> are native objects before calling the tool. For <code>query</code>, prefer a native object unless a raw query string is more direct.</li>
<li>Use <code>page_limit</code> and <code>item_limit</code> for pagination.</li>
<li>If fields, enum values, or filter keys are unclear, call <code>rest-mcp.describe_service</code> once. Do not retry blindly with alternate query formats.</li>
<li>Use ORDS operators inside <code>q</code> as needed: <code>\(eq</code>, <code>\)ne</code>, <code>\(instr</code>, <code>\)like</code>, <code>\(gte</code>, <code>\)lte</code>, <code>\(or</code>, <code>\)and</code>.</li>
<li>Keep list results compact and results-focused.</li>
<li>Never paste raw MCP response JSON into the user-facing reply. Extract the needed fields and summarize.</li>
</ul>
<h2>Fixed Playbooks</h2>
<ul>
<li>If the user asks to show, list, pull, or review active todos/tasks/reminders, make exactly one <code>GET</code> call with:</li>
</ul>
<pre><code class="language-json">{
  "service_id": "jd_sb_entries",
  "method": "GET",
  "path": "jd_sb_entries",
  "query": {
    "q": {
      "entry_type": {
        "$eq": "TASK"
      },
      "action_required": {
        "$eq": "Y"
      }
    }
  },
  "page_limit": "1",
  "item_limit": "25"
}
</code></pre>
<ul>
<li>For that active-task flow, do not probe with alternative query formats, do not call <code>describe_service</code>, and do not say "retrying" unless an unexpected runtime error actually occurred.</li>
<li>After fetching active tasks, sort by <code>due_date</code> ascending before replying unless the user asks for a different order.</li>
<li>Reply with only the compact task list: <code>#entry_id subject — due YYYY-MM-DD</code>.</li>
</ul>
<h2>Batch Rules</h2>
<ul>
<li>Default to single-item mode.</li>
<li>Enter batch mode only when the user clearly asks for multiple items or refers to a concrete earlier list.</li>
<li>For prior-thread items, restate a compact working list in the current turn before writing.</li>
<li>If the earlier items are missing or ambiguous, ask the user to narrow the scope or restate them.</li>
<li>Process at most 5 items per turn unless the user explicitly asks for more.</li>
<li>Create or update sequentially, one <code>request_resource</code> call per item.</li>
<li>If a batch partially succeeds, report completed items and the first failure clearly.</li>
</ul>
<h2>Field Mapping</h2>
<ul>
<li><code>todo</code>, <code>task</code>, <code>reminder</code> -&gt; <code>entry_type: "TASK"</code></li>
<li><code>note</code> -&gt; <code>entry_type: "NOTE"</code></li>
<li><code>idea</code> -&gt; <code>entry_type: "IDEA"</code></li>
<li><code>knowledge</code> -&gt; <code>entry_type: "KNOWLEDGE"</code></li>
<li>For todos/reminders, default <code>action_required</code> to <code>"Y"</code>.</li>
<li>Default <code>urgency</code> to <code>"LOW"</code> unless the user says otherwise.</li>
<li>Use title case for <code>subject</code> unless the user specifies exact casing.</li>
<li>Use the raw user text or a slightly cleaned version for <code>user_content</code>.</li>
<li>Create a short <code>ai_summary</code> from the request.</li>
<li>Convert relative dates like <code>tomorrow</code> into an absolute <code>YYYY-MM-DD</code> date using the user's locale timezone.</li>
<li>Treat returned <code>due_date</code> values as ISO timestamps and present them back to the user as dates when only the date matters.</li>
</ul>
<h2>Request Patterns</h2>
<p>Create:</p>
<pre><code class="language-json">{
  "service_id": "jd_sb_entries",
  "method": "POST",
  "path": "jd_sb_entries",
  "body": {
    "subject": "Clean car",
    "entry_type": "TASK",
    "ai_summary": "Reminder to clean the car tomorrow.",
    "user_content": "clean car",
    "urgency": "LOW",
    "action_required": "Y",
    "due_date": "2026-03-15"
  }
}
</code></pre>
<p>List active tasks:</p>
<pre><code class="language-json">{
  "service_id": "jd_sb_entries",
  "method": "GET",
  "path": "jd_sb_entries",
  "query": {
    "q": {
      "entry_type": {
        "$eq": "TASK"
      },
      "action_required": {
        "$eq": "Y"
      }
    }
  },
  "page_limit": "1",
  "item_limit": "25"
}
</code></pre>
<p>Search for entries containing a phrase:</p>
<pre><code class="language-json">{
  "service_id": "jd_sb_entries",
  "method": "GET",
  "path": "jd_sb_entries",
  "query": {
    "q": {
      "$or": [
        {
          "subject": {
            "$instr": "ORDS"
          }
        },
        {
          "user_content": {
            "$instr": "ORDS"
          }
        }
      ]
    }
  },
  "page_limit": "1",
  "item_limit": "25"
}
</code></pre>
<p>Read one row:</p>
<pre><code class="language-json">{
  "service_id": "jd_sb_entries",
  "method": "GET",
  "path": "jd_sb_entries/32"
}
</code></pre>
<p>Bad <code>query</code> examples:</p>
<pre><code class="language-json">"query": {
  "entry_type": "TASK",
  "action_required": "Y"
}
</code></pre>
<pre><code class="language-json">"query": "{\"entry_type\":{\"$eq\":\"TASK\"}}"
</code></pre>
<p>Good raw query-string example:</p>
<pre><code class="language-json">"query": "q={\"entry_type\":{\"\(eq\":\"TASK\"},\"action_required\":{\"\)eq\":\"Y\"}}&amp;amp;limit=25"
</code></pre>
<h2>Response Style</h2>
<ul>
<li>For simple creates, reply with the created <code>entry_id</code>, subject, and due date.</li>
<li>For list/read requests, return the concise result only. Do not echo tool payloads, headers, links, or pagination blobs.</li>
<li>Keep the response short.</li>
<li>If the request is ambiguous, ask one concise clarifying question.
</li></ul></code></pre></details>

<h1>Demo</h1>
<p>This recording shows a brief interaction with my second brain after introducing the skill.</p>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/9ee8b60f-d79b-40cb-9536-ba1035f8a6c8.gif" alt="Demo showing use of 2nd brain from the Codex app" style="display:block;margin:0 auto" />

<h1><strong>Why this pattern matters</strong></h1>
<p>The broader point is that MCP and Skills solve different problems.</p>
<div>
<div>💡</div>
<div>MCP is about tool access. Skills are about tool usage.</div>
</div>

<p>I like this analogy from Anthropic's "<a href="https://resources.anthropic.com/hubfs/The-Complete-Guide-to-Building-Skill-for-Claude.pdf">The Complete Guide to Building Skills for Claude</a>".</p>
<blockquote>
<p><strong>The kitchen analogy.</strong></p>
<p><strong>MCP provides the professional kitchen</strong>: access to tools, ingredients, and equipment. <strong>Skills provide the recipes</strong>: step-by-step instructions on how to create something valuable.</p>
</blockquote>
<p>If you only expose a tool, you are giving the model capability. If you add a Skill, you are giving it operating guidance. For simple tools, that extra layer may not matter much. For anything with workflow, conventions, or domain context, it matters a lot.</p>
<p>That is why I think Skills work well as a thin layer on top of MCP-backed tools.</p>
<ul>
<li><p>They do not replace the server.</p>
</li>
<li><p>They do not replace the API.</p>
</li>
<li><p>They do not replace good tool design.</p>
</li>
</ul>
<p>What they do is close the gap between “the model can call this” and “the model knows how this should be used here.”</p>
<h1><strong>Conclusion</strong></h1>
<p>A lot of MCP discussions focus on exposing tools, which makes sense. But once you start building real workflows, raw tool exposure is only the starting point. You also need a way to shape behavior around those tools.</p>
<p>For me, Skills are proving to be a good way to do that.</p>
<p>In this case, a simple STDIO MCP server exposed ORDS REST APIs for CRUD operations on a table. The Skill sitting on top of one of those APIs made the setup much more usable by guiding the workflow, documenting the behavior, and providing an explicit invocation surface via $skill.</p>
<p>That is a small design choice, but it has made the overall system feel much more intentional.</p>
]]></content:encoded></item><item><title><![CDATA[Will AI Agents Replace UI, or Redefine It?]]></title><description><![CDATA[Introduction
In a previous post, Adding an AI Agent to an Existing APEX App, I described how I added an AI agent to an existing APEX app. The goal was to simplify the user interface by providing an ag]]></description><link>https://blog.cloudnueva.com/will-ai-agents-replace-ui-or-redefine-it</link><guid isPermaLink="true">https://blog.cloudnueva.com/will-ai-agents-replace-ui-or-redefine-it</guid><category><![CDATA[orclapex]]></category><category><![CDATA[AI]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 19 Mar 2026 11:11:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/df1eca2c-d5ac-456f-bca9-66a2463c3b70.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Introduction</h1>
<p>In a previous post, <a href="https://blog.cloudnueva.com/adding-ai-agent-to-apex-app"><strong>Adding an AI Agent to an Existing APEX App</strong></a>, I described how I added an AI agent to an existing APEX app. The goal was to simplify the user interface by providing an agent driven by a simple text interface.</p>
<p>As an APEX developer, this got me thinking: Are we heading towards a future where there is less focus on building APEX pages and more focus on building AI agents and the controls they require? Could agents completely replace UI?</p>
<p>I do not think agents will replace the user interface. But I do think they will redefine it.</p>
<h1>Why enterprise apps look the way they do today</h1>
<p>For years, we have built APEX apps around a simple assumption: a user operates the app. They open a page. They find the right menu. They enter data into a form. They click save. Then they move to the next screen and repeat.</p>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/7a83ee5e-4304-424e-b9fe-62dcd47435a2.png" alt="APEX Page Illustrating a Traditional Enterprise App" style="display:block;margin:0 auto" />

<p>That model is so familiar that it feels permanent. But it is not. It is mostly a workaround for the fact that traditional software has needed humans to drive every step.</p>
<div>
<div>💡</div>
<div>AI agents call that assumption into question.</div>
</div>

<p>If an agent can understand an instruction, gather context, decide what steps are required, and carry them out across one or more systems, what exactly is left for the user interface to do?</p>
<p>That is no longer a theoretical question. It is becoming a practical one.</p>
<p>A lot of enterprise software still revolves around transaction entry, status updates, approvals, routing, and repetitive record management. In many cases, the interface is not valuable because it is a great experience. It is valuable because it is the mechanism the system uses to make the user do the work.</p>
<p>That is where agents become disruptive.</p>
<p>Instead of forcing a user to navigate five screens and populate twelve fields, the interaction could start with something much closer to natural intent:</p>
<blockquote>
<p>Create a new customer for Acme (details for Acme can be found in the CRM system), generate a sales order for 1,000 Aztec 100's using standard new customer pricing, send it for approval, and remind me next Tuesday if it has not been signed.</p>
</blockquote>
<p>We don't need many APEX pages to implement this!</p>
<h1><strong>From manual operation to delegated execution</strong></h1>
<p>The most important change is not that software becomes conversational. The real change is that software no longer requires the user to translate business intent into system steps.</p>
<p>That translation has defined enterprise UX for decades. Users have had to know where to go, what fields matter, what sequence to follow, what validations apply, and which screen comes next. The interface has been the place where human intention gets broken down into machine-friendly actions.</p>
<div>
<div>💡</div>
<div>Agents can absorb a lot of that burden.</div>
</div>

<p>That means the APEX app no longer has to be organized primarily around pages. It can be organized around goals, actions, and outcomes.</p>
<p>That is a major shift.</p>
<h1>Where the “single text box” idea becomes useful</h1>
<p>Once you accept that agents can handle more of the operational work, the obvious next question is whether the app can be reduced to a simple prompt box.</p>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/f1158f29-a5dd-4767-8651-f021449ae635.png" alt="APEX Page Showing Simple Text Box Agent" style="display:block;margin:0 auto" />

<p>Probably not, but for some tasks, that actually makes sense.</p>
<p>Routine work is a strong candidate:</p>
<ul>
<li><p>Create a new supplier using the attached Invoice.</p>
</li>
<li><p>Open a support request for this issue...</p>
</li>
<li><p>Summarize sales by cost center for last month and compare it to this time last year</p>
</li>
<li><p>Put a credit hold on Acme Corp</p>
</li>
<li><p>Inactivate Item ABC</p>
</li>
</ul>
<p>In those cases, the old interface often exists only because the system required structured, manual interaction. If the agent can reliably handle that structure, the screen becomes optional.</p>
<p>That is why this topic is not far-fetched. It points to a real weakness in much current software: too much of the interface exists because the software is rigid, not because the user actually benefits from the interaction.</p>
<h1><strong>The future is probably not just a text box</strong></h1>
<p>A text box is excellent for expressing intent. It is weak for verification, comparison, supervision, and control. That matters.</p>
<p>It is easy to say:</p>
<blockquote>
<p>Reconcile these transactions and close the period.</p>
</blockquote>
<p>It is much harder to trust that outcome without seeing:</p>
<ul>
<li><p>What exceptions were found</p>
</li>
<li><p>Which records were changed</p>
</li>
<li><p>What assumptions were made where confidence was low</p>
</li>
<li><p>What could not be completed cleanly</p>
</li>
<li><p>What still needs human approval</p>
</li>
</ul>
<p>That is why I do not buy the lazy version of the argument that “UI is dead” or that “everything becomes chat.”</p>
<p>The better argument is that AI agents may eliminate a large percentage of the UI that exists purely for manual execution, while making a different kind of UI more important than ever.</p>
<h1><strong>The UI does not disappear. Its job changes.</strong></h1>
<p>I think that is the real story. The interface of the future is less about entering data and more about supervising action.</p>
<p>That means the valuable parts of the UI become things like:</p>
<ul>
<li><p>previewing what the agent is about to do</p>
</li>
<li><p>approving consequential actions</p>
</li>
<li><p>inspecting reasoning or decision traces</p>
</li>
<li><p>handling exceptions</p>
</li>
<li><p>reviewing changes across systems</p>
</li>
<li><p>enforcing policy and permissions</p>
</li>
<li><p>reversing or correcting bad outcomes</p>
</li>
<li><p>understanding what happened and why</p>
</li>
</ul>
<p>That is still UI. It is just no longer centered on the idea that the user must manually drive every step of the workflow.</p>
<p>In fact, once agents take over more of the mechanical burden, the remaining interface becomes more strategic. It becomes the place where trust is earned.</p>
<h1><strong>Enterprise systems will change unevenly</strong></h1>
<p>Some interfaces are much more vulnerable than others.</p>
<p>Low-risk, repetitive, high-volume workflows are the easiest targets. Administrative tasks, routine service requests, report generation, standard approvals, record creation, and straightforward updates are all likely to be heavily compressed by agentic interaction.</p>
<div>
<div>💡</div>
<div>But high-stakes systems are different.</div>
</div>

<p>Finance, healthcare, procurement, compliance, and regulated workflows require more than just correct execution. They need visibility, auditability, traceability, and control.</p>
<p>In those environments, the agent may do more of the work, but the interface is not going away. It is becoming the control surface.</p>
<p>That is a very different design challenge from building page flows and forms, and it is more interesting.</p>
<h1><strong>What does this mean for us?</strong></h1>
<p>For a long time, the default design question has been: What pages do we need? That question is starting to look outdated.</p>
<p>A better set of questions is:</p>
<ul>
<li><p>Which parts of this workflow truly require human judgment?</p>
</li>
<li><p>Which inputs are genuinely necessary?</p>
</li>
<li><p>Which fields exist only because the system cannot infer context?</p>
</li>
<li><p>Where can intent replace navigation?</p>
</li>
<li><p>Where can the agent act safely on the user’s behalf?</p>
</li>
<li><p>What needs to be visible before a human will trust the result?</p>
</li>
<li><p>How do we design for intervention, not just execution?</p>
</li>
</ul>
<p>That changes how we think about app design.</p>
<p>It pushes us away from page-centric systems and toward systems built around delegation, observability, and recovery.</p>
<p>For enterprise platforms in particular, that is a serious shift. The future is not just better forms. It is designing the boundary between autonomous action and human control.</p>
<h1>What happens to APEX?</h1>
<p>If the future of enterprise software relies on agents executing tasks and humans supervising them, APEX is still well positioned, provided we change how we build.</p>
<p>We need to stop thinking of APEX primarily as a rapid CRUD builder and start treating it as an <strong>Agent Control Plane</strong>. The infrastructure to build this supervisory UI already exists within the APEX ecosystem; it simply needs to be repurposed.</p>
<p>Here is how APEX architecture must adapt to an agent-driven model:</p>
<ul>
<li><p><strong>From Page Processes to Agent-Ready APIs:</strong> An agent cannot click a button to fire an APEX Page Process. Business logic must be rigorously decoupled from the UI. We need to expose strict, deterministic Oracle REST Data Services (ORDS) endpoints or self-contained PL/SQL packages. These become the literal "tools" the agent invokes to interact with the database (a minimal sketch follows this list).</p>
</li>
<li><p><strong>Human-in-the-Loop via the Approvals Component:</strong> When an agent attempts a high-stakes action or encounters ambiguity, it should not fail silently. Instead, the agent's backend process can start an APEX workflow instance. The Unified Task List becomes the "Supervisory UI," where humans review the agent's proposed action, inspect its reasoning, and approve or reject the action.</p>
</li>
<li><p><strong>Handling Asynchronous Agent State:</strong> Many AI agents operate asynchronously, often taking seconds or minutes to multi-step through a problem. Traditional APEX pages are synchronous. To bridge this gap, we can use APEX Background processes and APEX Automations to run agents in the background and use push notifications to send status updates to the client.</p>
</li>
<li><p><strong>Auditability:</strong> In regulated environments, auditability requires more than a record of what changed. Future APEX apps will need dedicated agent log tables to capture the task, supporting evidence, tool invocations, confidence signals, performed validations, and a concise decision summary. That trace should surface alongside the business record in the APEX UI to establish trust and traceability.</p>
</li>
</ul>
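<p>To make the first point concrete, here is a minimal sketch of what an agent-ready tool could look like as a self-contained PL/SQL function. The table and function names are hypothetical; the point is that the logic is deterministic, validates its own input, and returns structured JSON instead of relying on page items or session state.</p>
<pre><code class="language-sql">-- Hypothetical agent "tool": create a supplier from structured input.
-- All validation and business rules live here, never in the LLM or the UI.
CREATE OR REPLACE FUNCTION agent_tool_create_supplier (
  p_payload IN CLOB   -- JSON arguments proposed by the agent
) RETURN CLOB
IS
  l_name        suppliers.supplier_name%TYPE;
  l_supplier_id suppliers.supplier_id%TYPE;
  l_result      JSON_OBJECT_T := JSON_OBJECT_T();
BEGIN
  l_name := JSON_VALUE(p_payload, '$.supplier_name');

  -- Deterministic validation: reject bad input with a structured error.
  IF l_name IS NULL THEN
    l_result.put('status', 'error');
    l_result.put('message', 'supplier_name is required');
    RETURN l_result.to_clob();
  END IF;

  INSERT INTO suppliers (supplier_name, created_by)
  VALUES (l_name, 'AI_AGENT')
  RETURNING supplier_id INTO l_supplier_id;

  l_result.put('status', 'ok');
  l_result.put('supplier_id', l_supplier_id);
  RETURN l_result.to_clob();
END agent_tool_create_supplier;
/
</code></pre>
<p>The same function can be published through ORDS or called directly by an orchestrator. Either way, the agent only ever proposes the call; this code decides what actually happens.</p>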
<h1>Conclusion</h1>
<div>
<div>❓</div>
<div>So is this the end of user interfaces as we know them?</div>
</div>

<p>If by “user interface” we mean page-heavy, form-heavy, navigation-heavy systems built around manual data entry and procedural interaction, then AI agents probably do mark the beginning of the end for that model in many cases.</p>
<p>But if by “user interface” we mean the layer where humans express intent, review actions, manage risk, resolve ambiguity, and stay in control, then no. The UI is not ending. It is being redefined.</p>
]]></content:encoded></item><item><title><![CDATA[APEX + OCI Email Logs: Track Bounces, Complaints, Suppression]]></title><description><![CDATA[Introduction
I am sure many of you are already using the OCI Email Delivery Service to send emails from your APEX Applications. It offers a convenient and inexpensive way to handle emails that integra]]></description><link>https://blog.cloudnueva.com/oci-email-service-next-level</link><guid isPermaLink="true">https://blog.cloudnueva.com/oci-email-service-next-level</guid><category><![CDATA[orclapex]]></category><category><![CDATA[OCI]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 12 Mar 2026 13:06:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/f3466926-c917-4cb9-a985-a60329f41a4b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Introduction</h1>
<p>I am sure many of you are already using the OCI Email Delivery Service to send emails from your APEX Applications. It offers a convenient and inexpensive way to handle emails that integrates easily with APEX. <a href="https://hashnode.com/@lufcmattylad" class="user-mention" data-type="mention" title="Matt Mulvaney">Matt Mulvaney</a> wrote a step-by-step guide to setting it up <a href="https://mattmulvaney.hashnode.dev/page/about">here</a>.</p>
<p>If you are using this service and have asked yourself these questions, then this post is for you:</p>
<ul>
<li><p>How do I know if my email was bounced?</p>
</li>
<li><p>How do I know if my emails are getting marked as spam?</p>
</li>
<li><p>Basically, did the recipient receive the email?</p>
</li>
</ul>
<p>To answer these questions, you must enable logging for your OCI Email Service. In this post, we will:</p>
<ul>
<li><p>Enable Email Delivery logs (OutboundAccepted/OutboundRelayed)</p>
</li>
<li><p>Query logs via Logging Search API</p>
</li>
<li><p>Surface results in APEX (Interactive Report) and/or sync to a table for history</p>
</li>
</ul>
<h1>Suppression Vs Bounce</h1>
<p>Before we start, it is important to understand what the two types of email delivery logs reveal.</p>
<p>A bounce is a downstream delivery failure reported by the recipient’s mail system after an attempt is made (typically seen in the <strong>OutboundRelayed</strong> log) and usually points to issues such as an invalid mailbox, a missing domain, or temporary recipient-side problems.</p>
<p>Suppression often happens before any delivery attempt; your send can look “fine” from APEX’s perspective, but Email Delivery may block or drop the message due to policy, reputation, or suppression-list conditions (often visible in <strong>OutboundAccepted</strong> and sometimes reflected in log messages indicating a suppressed recipient). Practically, this is the difference between “the destination rejected it” and “we never really tried,” and it changes your remediation: bounces drive address hygiene and retry rules, while suppression drives sender/domain configuration and suppression-list/deliverability review.</p>
<h1>What Can I Learn</h1>
<p>I use these logs for <a href="https://apps.cloudnueva.com/apexblogs">APEX Developer Blogs,</a> which has over 300 subscribers. Here are some examples of errors from these logs:</p>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/bf2c012d-abcd-4009-9ff8-f319b190f969.png" alt="APEX Page Showing Email LOgs" style="display:block;margin:0 auto" />

<h1>Setup Logging</h1>
<p>Let's start by setting up logging from the OCI Console.</p>
<p>Navigation: Developer Services &gt; Email Delivery &gt; Click on Your Domain</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/f7a87b43-fa1d-4016-9441-a49746a38bf9.png" alt="OCI Email Delivery Setup for Domain" style="display:block;margin:0 auto" />

<p>Then click on the 'Monitoring' tab and scroll down to the 'Logs' section, click the ellipses for the 'Outbound Relayed' log, and click 'Enable Log'.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/2c769c0c-abc6-4c8e-b818-77257a3d363f.png" alt="OCI Email Delivery Monitoring Logging" style="display:block;margin:0 auto" />

<div>
<div>💡</div>
<div>Enable both <strong>OutboundAccepted</strong> and <strong>OutboundRelayed</strong> to detect both suppression and delivery outcomes.</div>
</div>

<p>If you don’t already have a log group set up, click 'Create new group':</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/ea69e33f-e16c-413a-81a0-1263897ae79a.png" alt="OCI Email Delivery - Enable Resource Log 1" style="display:block;margin:0 auto" />

<p>Enter a log group name and description, and click 'Create':</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/18281bdb-ea27-4b0e-aa1d-0ccfe5283fb6.png" alt="OCI Email Delivery - Enable Resource Log 2" style="display:block;margin:0 auto" />

<p>Once back on the Enable resource log page, click 'Enable log':</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/46c3026b-198e-4a2d-a12d-b97e0d6d2db9.png" alt="OCI Email Delivery - Enable Resource Log 3" style="display:block;margin:0 auto" />

<p>After a few seconds, your log should be active:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/5c8030e9-282b-4931-9a81-b6be5a288346.png" alt="OCI Email Delivery - Log Group and Log Active" style="display:block;margin:0 auto" />

<p>Adjust the retention period to match your audit needs/cost constraints.</p>
<div>
<div>💡</div>
<div>Make a note of the OCIDs for the log group and the log. We will use these later.</div>
</div>

<h2>Test the Logs</h2>
<p>Send a test email from your instance to make sure it shows up in the logs:</p>
<pre><code class="language-sql">DECLARE
  l_body  CLOB;
BEGIN
  l_body := '&lt;h1&gt;Testing APEX Mail&lt;/h1&gt;';
  apex_mail.send
   (p_to        =&gt; 'test@example.com',
    p_from      =&gt; 'info@example.com',
    p_body      =&gt; l_body,
    p_body_html =&gt; l_body,
    p_subj      =&gt; 'Testing APEX Mail');
  apex_mail.push_queue;
END;
</code></pre>
<div>
<div>💡</div>
<div>Remember to set <code>p_from</code> to an email address that is on your OCI Email Delivery Approved Sender List.</div>
</div>

<p>After a few seconds, you should see the message in the logs:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/d5bc4be6-b205-4da6-83cf-d4bb12c8a406.png" alt="OCI Email Delivery -Explore Log" style="display:block;margin:0 auto" />

<p>If we send an email to an invalid email address, we can see the bounce from the destination email server. <strong>Note</strong>: I have changed the OCIDs in the sample JSON below to 'AAA' and the domains to <a href="http://example.com">example.com</a>.</p>
<pre><code class="language-json">{
  "datetime": 1771705254189,
  "logContent": {
    "data": {
      "action": "bounce",
      "bounceCategory": "bad-mailbox",
      "bounceCode": "5.1.10",
      "errorType": "hard",
      "message": "Suppressed recipient sam@example.com for email from info@example.com: bad-mailbox hard bounce",
      "messageId": "4B5C41B678F782A0E063E815000AC99A@apps.example.com",
      "originalMessageAcceptedTime": "2026-02-21T20:20:39.614Z",
      "receivingDomain": "example.com",
      "recipient": "sam@example.com",
      "reportGeneratedTime": "2026-02-21T20:20:41Z",
      "sender": "info@example.com",
      "senderCompartmentId": "AAA",
      "senderId": "AAA",
      "smtpStatus": "550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient sam@example.com not found by SMTP address lookup"
    },
    "id": "4e81ce60-b8c7-40be-8a19-79448a3f4f2d",
    "oracle": {
      "compartmentid": "AAA",
      "ingestedtime": "2026-02-21T20:20:56.815Z",
      "loggroupid": "AAA",
      "logid": "AAA",
      "tenantid": "AAA"
    },
    "source": "example.com",
    "specversion": "1.0",
    "time": "2026-02-21T20:20:54.189Z",
    "type": "com.oraclecloud.emaildelivery.emaildomain.outboundrelayed"
  },
  "regionId": "us-phoenix-1"
}
</code></pre>
<p>In the above example, we received a hard bounce, indicating that the email address was invalid.</p>
<div>
<div>💡</div>
<div>Knowing that an email was not delivered can be critical to your workflow. Knowing why it was not delivered allows you to address the issue.</div>
</div>

<h2>Documentation</h2>
<ul>
<li><p><a href="https://docs.oracle.com/en-us/iaas/Content/Logging/Reference/details_for_emaildelivery.htm">Details for Email Delivery Logging</a> - JSON Examples and field descriptions.</p>
</li>
<li><p><a href="https://docs.oracle.com/en-us/iaas/Content/Identity/policyreference/emailpolicyreference.htm">Email Delivery Policies</a> - Setting up access to view the logs.</p>
</li>
<li><p><a href="https://docs.oracle.com/en-us/iaas/Content/Email/Reference/log-guide.htm">Email Log Searching</a> - Syntax for searching the email logs.</p>
</li>
<li><p><a href="https://docs.oracle.com/en-us/iaas/api/#/en/logging-search/20190909/SearchResult/SearchLogs">Using the Logging Search API</a>.</p>
</li>
<li><p><a href="https://docs.oracle.com/en-us/iaas/Content/Logging/Reference/query_language_specification.htm">Logging Query Language Specification</a>.</p>
</li>
</ul>
<h1>Access the Logs from a REST API</h1>
<p>Even though the OCI console includes a deliverability dashboard and a UI to access logs, it would be much easier if we could get these logs into the database so we can view them from an APEX page. In this section, I will cover how to set up an OCI service account to access the OCI Logging REST API.</p>
<h2>Create an OCI User</h2>
<p>Navigation: Identity and Security &gt; Domains &gt; Select your domain &gt; Click Create</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/9a02aceb-edd5-4081-8fdc-46804aee2068.png" alt="Create OCI User - Step 1" style="display:block;margin:0 auto" />

<p>Enter a username and click 'Create':</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/9ad2e0ca-fb0a-4dbc-9a79-87627dbeb97a.png" alt="Create OCI User - Step 2" style="display:block;margin:0 auto" />

<p>Click Actions &gt; Edit User Capabilities:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/58ebe2ca-24ec-41ba-bdb0-b791f61b1e8d.png" alt="Create OCI User - Step 3" style="display:block;margin:0 auto" />

<p>Uncheck all options except 'API Keys' and click 'Save Changes':</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/abaff83b-eeaf-4f81-8d6c-2cd5b4c9d683.png" alt="Create OCI User - Step 4" style="display:block;margin:0 auto" />

<div>
<div>💡</div>
<div>Keep both key files safe. You will use the content of the private file in the APEX Web Credential below.</div>
</div>

<p>On the user page, select the 'API keys' tab, then click Actions &gt; Add API key</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/9e8a4a54-2a12-461c-8e22-746f714e34e2.png" alt="Create OCI User - Step 5" style="display:block;margin:0 auto" />

<p>Download the public and private key, then click 'Add':</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/aac90bc0-db27-44da-80f6-3707432ce20e.png" alt="Create OCI User - Step 6" style="display:block;margin:0 auto" />

<div>
<div>💡</div>
<div>Copy the resulting '<strong>Configuration file preview' </strong>details and keep them safe. You will use these values in the APEX Web Credential below.</div>
</div>

<h2>Create an OCI Group</h2>
<p>Back under the User Management tab, scroll down to Groups:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/a6c83c52-7370-4b7f-8a6c-17518477ffae.png" alt="Create OCI Group - Step 1" style="display:block;margin:0 auto" />

<p>Create a new Group:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/1c73010f-76a8-4a34-8663-f4bbe8c6a184.png" alt="Create OCI Group - Step 2" style="display:block;margin:0 auto" />

<p>Add the new user to the group:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/96a8dd97-b34a-44b1-a9ab-2dcc224a3179.png" alt="Create OCI Group - Step 3" style="display:block;margin:0 auto" />

<h2>Create an OCI Policy</h2>
<p>Navigate to: Identity &amp; Security &gt; Policies &gt; Click 'Create Policy':</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/0319416b-91ad-41e3-ac35-b37ce6aa5412.png" alt="Create OCI Policy - Step 1" style="display:block;margin:0 auto" />

<p>Complete the Policy details and click the 'Create' button:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/79899530-e479-4018-be0b-84db47d9bbad.png" alt="Create OCI Policy - Step 2" style="display:block;margin:0 auto" />

<p>Policy Statements:</p>
<pre><code class="language-plaintext">allow group apex_rest_api_access_grp to read log-groups in tenancy
allow group apex_rest_api_access_grp to read log-content in tenancy
</code></pre>
<div>
<div>💡</div>
<div>If you prefer least privilege, scope the policy to the compartment that contains the log group (instead of the tenancy-wide scope).</div>
</div>

<h1>The Logging REST API</h1>
<p>The OCI logging service offers a REST API you can use to consume any logs. My instance is in the Phoenix region, so my endpoint is:</p>
<p><a href="https://logging.us-phoenix-1.oci.oraclecloud.com/20190909/search">https://logging.us-phoenix-1.oci.oraclecloud.com/20190909/search</a></p>
<p>You can see a full list of Logging Endpoints <a href="https://docs.oracle.com/en-us/iaas/api/#/en/logging-search/20190909/">here</a>, and details on using the search API <a href="https://docs.oracle.com/en-us/iaas/api/#/en/logging-search/20190909/SearchResult/SearchLogs">here</a>. You will also need to understand the <a href="https://docs.oracle.com/en-us/iaas/Content/Logging/Reference/query_language_specification.htm">logging query language</a>. This query language allows you to filter results to see only bounces if that is what you are interested in.</p>
<p>The endpoint requires that you send a POST request with a payload like this:</p>
<pre><code class="language-json">{
  "timeStart": "2026-01-19T01:02:29.600Z",
  "timeEnd":   "2026-01-19T02:02:29.600Z",
  "searchQuery": "search \"&lt;tenancy_ocid&gt;/&lt;log_group_ocid&gt;/&lt;log_ocid&gt;\" | sort by datetime desc",
  "isReturnFieldInfo": false
}
</code></pre>
<div>
<div>💡</div>
<div>You must replace the &lt;value&gt; placeholders with the actual OCIDs for your Tenancy, Log Group, and Log, respectively.</div>
</div>

<h2>API Limits</h2>
<p>The Logging Search API returns up to 1000 entries per call and supports paging via a next-page token/header (client-managed). Searches/exports are limited to a maximum 14-day time window per request.</p>
<h1>Consuming the Logging API from APEX</h1>
<h2>Create an APEX Web Credential</h2>
<p>You will need an APEX Web Credential of type 'OCI Native Authentication' to access the REST API from APEX.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/c7c38c70-3f14-42a3-b800-6114fc33c332.png" alt="Create APEX Web Credential - Step 1" style="display:block;margin:0 auto" />

<p>Enter the details from your OCI user's 'Configuration file preview' above. For the OCI Private Key, copy and paste the value from the private key file downloaded above. Click Create to complete the creation of the APEX Web Credential.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/2fbbbf68-3909-491c-9e5c-795ba3554846.png" alt="Create APEX Web Credential - Step 2" style="display:block;margin:0 auto" />

<h2>Consumption Options</h2>
<h3>APEX REST Data Source</h3>
<p>The obvious first choice for consuming a REST API is to use a REST Data Source. I won't get into the step-by-step, but here are a few things that tripped me up when I created one.</p>
<p>In the 'POST' Operation, set the 'Database Operation' field to 'Fetch rows':</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/5673bb12-4ebc-4b0e-91c5-3feca31cacd8.png" alt="REST Data Source Setup - Step 1" style="display:block;margin:0 auto" />

<div>
<div>💡</div>
<div>You will want to replace hardcoded timeStart and timeEnd with variables such as #START_TS# and #END_TS#.</div>
</div>
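<p>For reference, once the hardcoded timestamps are replaced, the operation's request body template ends up looking something like this (the substitution names are whatever you define as parameters; replace the &lt;...&gt; placeholders with your OCIDs as before):</p>
<pre><code class="language-json">{
  "timeStart": "#START_TS#",
  "timeEnd":   "#END_TS#",
  "searchQuery": "search \"&lt;tenancy_ocid&gt;/&lt;log_group_ocid&gt;/&lt;log_ocid&gt;\" | sort by datetime desc",
  "isReturnFieldInfo": false
}
</code></pre>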

<p>Delete the GET, PUT, and DELETE Operations; we do not need them.</p>
<p>Set up the following parameters:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/e9f72f3e-8fda-4f43-8342-1e995d9645e5.png" alt="REST Data Source Setup - Step 1" style="display:block;margin:0 auto" />

<p>In the Data Profile, set the 'Row Selector' field to 'results'.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/4e1d6e4b-8265-4dde-b7b1-0a3a180e5613.png" alt="REST Data Source Setup - Step 2" style="display:block;margin:0 auto" />

<p>Once all the above is in place, click 'Rediscover Data Profile', then click 'Replace Data Profile'.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/719fa155-5e52-43b0-9b49-9e9fd613d653.png" alt="REST Data Source Setup - Step 3" style="display:block;margin:0 auto" />

<p>Unfortunately, REST Source Types for 'Oracle Cloud Infrastructure (OCI) REST Service' do not automatically walk OCI Logging Search paging tokens. If you are only expecting a few hundred emails a week, that should not be an issue, as you can fetch up to 1,000 log entries at a time. You could create an Interactive Report based on the REST Source, add some Start and End Date Time parameters, and you have everything you need.</p>
<p>If you expect higher email volumes, you could use <a href="https://blog.cloudnueva.com/apexwebservice-the-definitive-guide">APEX_WEB_SERVICE</a> to retrieve the data and loop through the pages yourself, or build a <a href="https://blog.cloudnueva.com/apex-rest-source-connector-plug-ins">REST Source Connector plug-in</a>.</p>
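<p>As a rough sketch of the APEX_WEB_SERVICE route, the loop below assumes a Web Credential with static ID <code>OCI_LOGGING_CRED</code>, the Phoenix endpoint shown earlier, and the standard OCI paging behavior (an <code>opc-next-page</code> response header passed back as the <code>page</code> query parameter). Parsing the <code>results</code> array and the staging table are left out; treat this as a starting point, not a finished implementation.</p>
<pre><code class="language-sql">DECLARE
  l_base_url  CONSTANT VARCHAR2(200) :=
    'https://logging.us-phoenix-1.oci.oraclecloud.com/20190909/search';
  l_body      CLOB;
  l_response  CLOB;
  l_next_page VARCHAR2(4000);
  l_url       VARCHAR2(4000);
BEGIN
  -- Same payload structure as shown above (placeholders must be real OCIDs).
  l_body := '{"timeStart":"2026-01-19T01:02:29.600Z",'
         || '"timeEnd":"2026-01-19T02:02:29.600Z",'
         || '"searchQuery":"search \"&lt;tenancy_ocid&gt;/&lt;log_group_ocid&gt;/&lt;log_ocid&gt;\" | sort by datetime desc",'
         || '"isReturnFieldInfo":false}';

  LOOP
    apex_web_service.g_request_headers.delete;
    apex_web_service.g_request_headers(1).name  := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/json';

    l_url := l_base_url || '?limit=1000'
          || CASE WHEN l_next_page IS NOT NULL
                  THEN '&amp;page=' || apex_util.url_encode(l_next_page) END;

    l_response := apex_web_service.make_rest_request(
                    p_url                  =&gt; l_url,
                    p_http_method          =&gt; 'POST',
                    p_body                 =&gt; l_body,
                    p_credential_static_id =&gt; 'OCI_LOGGING_CRED');

    -- TODO: parse the "results" array in l_response and insert into a staging table.

    -- OCI sends an opc-next-page header while more results are available.
    l_next_page := NULL;
    FOR i IN 1 .. apex_web_service.g_headers.COUNT LOOP
      IF lower(apex_web_service.g_headers(i).name) = 'opc-next-page' THEN
        l_next_page := apex_web_service.g_headers(i).value;
      END IF;
    END LOOP;

    EXIT WHEN l_next_page IS NULL;
  END LOOP;
END;
</code></pre>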
<h2>Syncing to a Table</h2>
<p>If you expect to send fewer than 1,000 emails per hour, it may be easier to use a <a href="https://blog.cloudnueva.com/dynamic-parameters-in-oracle-apex-rest-data-source-synchronizations">REST Source Sync</a> to sync the last hour's logs to a local table. This has the advantage of circumventing the 14-day window limit of the logging API.</p>
<p>You will need to use the REST Source Sync 'Steps' feature to pass the limit, START_TS, and END_TS parameters. In the example below, I am looking back 1 day. You may need to adjust this based on the number of emails you expect to receive.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/626b62127d5d27b992e4cf90/5af3c39d-ff8a-401d-aff5-407ac0bd55fa.png" alt="REST Source Sync Steps" style="display:block;margin:0 auto" />

<div>
<div>💡</div>
<div>Remember to add a purge routine to periodically clear old records from the sync table.</div>
</div>
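<p>Once the logs are in a local table, reporting becomes plain SQL. The table and column names below are hypothetical and will depend on your data profile, but a simple bounce summary might look like this:</p>
<pre><code class="language-sql">-- Hypothetical synced table; column names depend on your data profile.
SELECT recipient,
       bounce_category,
       error_type,
       smtp_status,
       COUNT(*) AS bounce_count
FROM   email_delivery_logs
WHERE  action = 'bounce'
GROUP  BY recipient, bounce_category, error_type, smtp_status
ORDER  BY bounce_count DESC;
</code></pre>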

<h1>Conclusion</h1>
<p>Enabling OutboundAccepted and OutboundRelayed logs gives you a reliable way to determine whether an email was delivered, bounced, complained about, or suppressed.</p>
<p>For low-volume use, a REST Data Source plus an Interactive Report is usually sufficient, as long as you stay within the 1,000 records-per-call limit. For higher volume or longer retention, use a REST Source Sync (or PL/SQL paging) to persist results to a table and work around the 14-day query window. Once the data is local, you can join it to your tables and build deliverability views and alerts that meet your requirements.</p>
]]></content:encoded></item><item><title><![CDATA[Adding an AI Agent to an Existing APEX App]]></title><description><![CDATA[Introduction
Modern frontier LLMs are now reliable enough to support practical agent workflows when paired with strong orchestration and guardrails. Adding an agent to an existing APEX app allows you ]]></description><link>https://blog.cloudnueva.com/adding-ai-agent-to-apex-app</link><guid isPermaLink="true">https://blog.cloudnueva.com/adding-ai-agent-to-apex-app</guid><category><![CDATA[orclapex]]></category><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 05 Mar 2026 13:11:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/1211a45c-084c-4de8-841e-78fa4821f686.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Introduction</h1>
<p>Modern frontier LLMs are now reliable enough to support practical agent workflows when paired with strong orchestration and guardrails. Adding an agent to an existing APEX app allows you to:</p>
<ul>
<li><p>Simplify user workflows</p>
</li>
<li><p>Simplify the user interface</p>
</li>
<li><p>Automate repetitive tasks</p>
</li>
<li><p>Leverage your existing data model, views, APIs, etc.</p>
</li>
<li><p>Get the experience of building agents with minimal investment</p>
</li>
</ul>
<p>In this post, I will use an example of an APEX-based project management application I have been building for a client off and on over the past few years. Over time, it has grown to more than fifty pages and thousands of lines of PL/SQL. A month ago, we started on a project to introduce an AI agent to simplify the app.</p>
<h1>Introducing AI Agents</h1>
<p>By combining a frontier model with tools (PL/SQL APIs, Web Services) and strong governance (permissions, auditing, guardrails), you can build an AI agent that executes multi-step business tasks, not just chats, safely within defined constraints.</p>
<p>Agents built for APEX use a PL/SQL framework to manage the 'Agentic Loop'. That's right; when you boil it down, an agent is a loop. Within this loop, the LLM makes suggestions as to which tools it wants to run; your code decides whether to run them. <strong>Your code is in control</strong>.</p>
<p>The diagram below illustrates the agentic loop.</p>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/1c3ea020-3c94-456f-ba76-1e8ef8fdd3ae.png" alt="Agentic Loop in APEX" style="display:block;margin:0 auto" />

<p>The <strong>orchestrator</strong> controls the agentic loop, maintains state (in a database table) between tool calls, and decides when to hand control back to the user and when the loop should finish. The <strong>dispatcher</strong> receives tool requests, checks what data the user is allowed to see, performs schema and business validations, calls the tool, and returns the response to the orchestrator to feed back to the LLM during the next iteration.</p>
<p>This table illustrates the differences between a standard APEX approach and an Agentic approach:</p>
<table style="min-width:75px"><colgroup><col style="min-width:25px"></col><col style="min-width:25px"></col><col style="min-width:25px"></col></colgroup><tbody><tr><td><p><strong>Feature</strong></p></td><td><p><strong>Standard APEX Integration</strong></p></td><td><p><strong>Agentic Framework</strong></p></td></tr><tr><td><p><strong>Logic Location</strong></p></td><td><p>Hardcoded in Page Processes</p></td><td><p>Dynamic in Orchestrator Loop</p></td></tr><tr><td><p><strong>User Input</strong></p></td><td><p>Structured (Forms/Pickers)</p></td><td><p>Unstructured (Natural Language)</p></td></tr><tr><td><p><strong>Validation</strong></p></td><td><p>On-Submit / Client-side</p></td><td><p>Dispatcher-level / Pre-execution</p></td></tr><tr><td><p><strong>Flexibility</strong></p></td><td><p>Rigid workflow</p></td><td><p>Multi-step "Reasoning" capability</p></td></tr><tr><td><p><strong>Security</strong></p></td><td><p>Session/ACL based</p></td><td><p>ACL + Intent Validation</p></td></tr></tbody></table>

<h1>Simplifying Project Management</h1>
<p>So, let's get back to the project management app use case. The app handles everything related to project management, including questions, risks, issues, requirements, design documents, emails, and meeting notes. As the app grew and we introduced new pages, fields, and buttons, users started to get frustrated by how long it takes to navigate to where they need to go. I am sure Jira users can relate!</p>
<div>
<div>💡</div>
<div>The AI agent reduces this complexity by providing a chat-style interface that makes it easier to find information, automate repetitive actions, and surface project insights that were previously buried behind layers of menus.</div>
</div>

<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/6069dca1-1b3a-4043-811a-5186c5eaedff.png" alt="Screenshot of the APEX AI Agent Chat Interface" style="display:block;margin:0 auto" />

<h2>Agent Scope</h2>
<p>For phase one of this project, we decided to limit the scope to allowing users to inquire about, create, and update project questions, risks, and issues.</p>
<div>
<div>💡</div>
<div>Reducing the scope to this limited (<strong>but still useful</strong>) set of activities was critical to its success.</div>
</div>

<p>Too often, we try to cover every use case and end up shipping nothing. That risk is higher with emerging technologies, where outcomes are uncertain. Narrowing scope doesn’t mean lowering the bar; it means delivering a genuinely useful slice that also proves whether the technology will scale to the entire app.</p>
<h2>Tools</h2>
<p>We gave the Agent the following tools:</p>
<ul>
<li><p>Show the project structure. This tool provides the information about the project, such as the customer, the project's structure, the project lead, etc.</p>
</li>
<li><p>List project team members. This tool provides details of all the people associated with the project. This is used by the LLM to assign tasks, return tasks for specific people, etc.</p>
</li>
<li><p>Search questions, risks, and issues. This tool allows users to perform searches by assignee, status, or question/risk/issue text (via Vector Search).</p>
</li>
<li><p>Create questions, risks, and issues</p>
</li>
<li><p>Update questions, risks, and issues</p>
</li>
</ul>
<p>Each tool is a PL/SQL function or procedure that either returns some JSON or performs an action. Because we started with a fully functional App, we were able to leverage existing tables, views, and PL/SQL APIs.</p>
<div>
<div>💡</div>
<div>The AI tools were essentially wrappers around the code we already had.</div>
</div>

<p>One of the most underrated parts of designing tools is the descriptive tool metadata that you send to the LLM with your prompt. The tool call metadata must clearly describe each tool along with its parameters. You should also resist the urge to include too many tools with each request to the LLM. Only send the tools that are relevant to the activity you are trying to perform. This prevents the LLM from having to look through and 'understand' tools it will never need.</p>
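<p>To make that concrete, a tool definition in the OpenAI-style function-calling format looks roughly like the example below. This is illustrative only; the exact shape depends on your LLM provider, and the parameter names and enum values here are made up.</p>
<pre><code class="language-json">{
  "type": "function",
  "function": {
    "name": "qri_search",
    "description": "Search project questions, risks, and issues. Use this before updating or answering an existing record.",
    "parameters": {
      "type": "object",
      "properties": {
        "search_text": {
          "type": "string",
          "description": "Free text to match against question, risk, and issue content (semantic search)."
        },
        "assigned_to": {
          "type": "string",
          "description": "Optional. Filter by the team member the record is assigned to."
        },
        "status": {
          "type": "string",
          "enum": ["OPEN", "CLOSED"],
          "description": "Optional status filter."
        }
      },
      "required": ["search_text"]
    }
  }
}
</code></pre>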
<div>
<div>⚠</div>
<div>Tools should be deterministic and side-effect controlled. The LLM should never be responsible for enforcing business rules or data integrity. That logic must live inside the tool implementation.</div>
</div>

<h2>The Orchestrator</h2>
<p>The orchestrator is a PL/SQL procedure that controls the agentic loop. Essentially, the process involves looping, calling tools, and providing results back to the LLM until the user's request is completed (i.e., no further tool requests from the LLM).</p>
<p>We use a database table to maintain state between tool calls. This table allows us to re-construct the chat history and pass it to the LLM with each API call.</p>
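<p>Stripped to its essentials, the loop looks something like the sketch below. The helper packages (<code>agent_llm</code>, <code>agent_dispatch</code>) and the page item are hypothetical stand-ins for our own code; the shape of the loop is the point.</p>
<pre><code class="language-sql">-- Simplified agentic loop; helper packages and bind names are placeholders.
DECLARE
  l_response   JSON_OBJECT_T;
  l_tool_calls JSON_ARRAY_T;
  l_max_turns  CONSTANT PLS_INTEGER := 10;  -- hard stop so the loop cannot run forever
BEGIN
  FOR l_turn IN 1 .. l_max_turns LOOP
    -- 1. Call the LLM with the system prompt, conversation history, and tool metadata.
    l_response := agent_llm.chat(p_conversation_id =&gt; :P1_CONVERSATION_ID);

    -- 2. No tool requests means the model has answered; end the turn.
    l_tool_calls := l_response.get_array('tool_calls');
    EXIT WHEN l_tool_calls IS NULL OR l_tool_calls.get_size = 0;

    -- 3. Hand each requested tool to the dispatcher, which validates, executes
    --    (or pauses for human approval), and records the result in the state table.
    FOR i IN 0 .. l_tool_calls.get_size - 1 LOOP
      agent_dispatch.run_tool(
        p_conversation_id =&gt; :P1_CONVERSATION_ID,
        p_tool_request    =&gt; TREAT(l_tool_calls.get(i) AS JSON_OBJECT_T));
    END LOOP;
    -- The stored results are replayed to the LLM on the next iteration.
  END LOOP;
END;
</code></pre>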
<h2>The Brain</h2>
<p>The orchestrator passes a system prompt to the LLM during each request. This acts as the agent's brain. The system prompt provides background about the Application, the agent's objectives, rules for tool use, required behaviors, and how responses should be formatted.</p>
<div>
<div>💡</div>
<div>Expect to iterate on the system prompt throughout the build of your agent. A well-formed system prompt is vital to the agent's performance.</div>
</div>

<h2>Dispatcher</h2>
<p>When the LLM requests a tool, the dispatcher does the following:</p>
<ul>
<li><p>Verifies the tool exists</p>
</li>
<li><p>Verifies the validity of the parameters the LLM requested</p>
</li>
<li><p>Verifies the user has access to the action and/or data being requested</p>
</li>
<li><p>Executes the PL/SQL Function or Procedure</p>
</li>
<li><p>Shapes the JSON response and hands it back to the Orchestrator to pass back to the LLM</p>
</li>
</ul>
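<p>In code, that flow condenses to something like the sketch below. Every name here (the tool registry table, the security and state packages) is a hypothetical placeholder; what matters is that each check happens before the tool runs.</p>
<pre><code class="language-sql">-- Condensed dispatcher sketch; table and package names are placeholders.
PROCEDURE run_tool (
  p_conversation_id IN NUMBER,
  p_tool_name       IN VARCHAR2,
  p_arguments       IN CLOB)
IS
  l_tool   agent_tool_defs%ROWTYPE;
  l_result CLOB;
BEGIN
  -- 1. The tool must be registered; unknown tools are rejected outright.
  SELECT * INTO l_tool
  FROM   agent_tool_defs
  WHERE  tool_name = p_tool_name;

  -- 2. Validate the arguments and confirm the current user may perform
  --    this action and see this data.
  agent_security.assert_allowed(
    p_user_name =&gt; sys_context('APEX$SESSION', 'APP_USER'),
    p_tool_name =&gt; p_tool_name,
    p_arguments =&gt; p_arguments);

  -- 3. High-risk tools stop here and wait for human confirmation.
  IF l_tool.requires_approval = 'Y' THEN
    agent_state.save_pending_approval(p_conversation_id, p_tool_name, p_arguments);
    RETURN;
  END IF;

  -- 4. Execute the PL/SQL implementation and shape the JSON for the LLM.
  l_result := agent_tools.execute_tool(p_tool_name, p_arguments);
  agent_state.save_tool_result(p_conversation_id, p_tool_name, l_result);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    agent_state.save_tool_result(
      p_conversation_id, p_tool_name,
      '{"status":"error","message":"unknown tool"}');
END run_tool;
</code></pre>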
<h2>Creates and Updates</h2>
<p>With an emerging technology like this, you may be nervous about allowing the AI to request tool calls that create or update records in your database. Sure, you write the tool so you can ensure whatever record is created is valid, but what if the AI decides it wants to create 100 valid records when it should have been just one? To allay those fears, we set up the agent and the tools with a flag indicating if a particular tool call requires human approval before it can run. Initially, we set all the create/update tools to require human approval. Down the road, we expect that we may want to turn this confirmation off for some write tools.</p>
<h2>Vector Search</h2>
<p>I briefly mentioned that the Search tool uses Vector search so users can perform semantic searches on the text from questions, risks, issues (and the associated answers/responses). This allows users to perform powerful searches from a prompt. e.g., 'Find Open questions assigned to Jon related to California'.</p>
<p>We established a queueing mechanism that queues new and updated content. An APEX Automation picks up the queued content every 15 minutes. The automation chunks the content using the SQL function <code>VECTOR_CHUNKS</code>. The chunks are then vectorized using the SQL function <code>VECTOR_EMBEDDING</code>. We use the ONNX model <code>ALL_MINILM_L12_V2</code> (available <a href="https://blogs.oracle.com/machinelearning/use-our-prebuilt-onnx-model-now-available-for-embedding-generation-in-oracle-database-23ai">here</a>) in the database to create the embeddings (vectorize the chunks).</p>
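<p>For reference, the core of that automation boils down to SQL along these lines. The queue and chunk table names are made up, and the exact <code>VECTOR_CHUNKS</code> options (chunk size, overlap, split mode) should be tuned for your content:</p>
<pre><code class="language-sql">-- Illustrative only: chunk queued content and store one embedding per chunk.
-- QRI_EMBED_QUEUE and QRI_CHUNKS are hypothetical names; the ONNX model is
-- assumed to be loaded in the database as ALL_MINILM_L12_V2.
INSERT INTO qri_chunks (qri_id, chunk_seq, chunk_text, embedding)
SELECT q.qri_id,
       c.chunk_offset,
       c.chunk_text,
       VECTOR_EMBEDDING(ALL_MINILM_L12_V2 USING c.chunk_text AS data)
FROM   qri_embed_queue q,
       VECTOR_CHUNKS(q.content_text
                     BY words
                     MAX 200
                     OVERLAP 20
                     SPLIT BY sentence) c
WHERE  q.status = 'QUEUED';
</code></pre>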
<h2>Instrumentation</h2>
<p>One of the most useful things we included at the beginning is comprehensive logging and diagnostics. Every LLM API call, every tool request, and every tool response is logged for each conversation. This allows us to replay conversations for audit and troubleshooting purposes.</p>
<p>It even allows us to troubleshoot strange behaviors using AI. For example, using the Oracle SQLcl MCP tool connected to the Codex App, I can say something like, "Review conversation ID 123 and find out why only 1 of the 3 provided issues was created by the agent." Codex can then use SQLcl to query the conversations table and iterate until it determines whether the issue is code-related or system-prompt-related.</p>
<h1>Lessons Learned</h1>
<ul>
<li><p>Never trust the model to handle security or data integrity. Your code (orchestrator and dispatcher) and your data model should handle them. Always!</p>
</li>
<li><p>Log everything and make conversations replayable for audit and troubleshooting.</p>
</li>
<li><p>Make tools configurable to easily toggle human-in-the-loop confirmations.</p>
</li>
<li><p>When writing CRUD PL/SQL APIs, don't assume the consumer is APEX. Your PL/SQL APIs must be hardened to handle calls from unexpected future sources, such as agents.</p>
</li>
<li><p>Watch the context. Each call to the LLM passes the complete conversation history. As a conversation builds (especially if you have multiple tool calls returning large amounts of JSON), the LLM has to wade through more and more context to figure out what the latest request is. Consider capping the number of turns or preventing further turns after the context reaches a certain size.</p>
</li>
<li><p>Enable parallel tool calls when calling the LLM API to reduce turns (switching between the user and the model in the Agent Loop). For example, if you want to copy-paste 10 questions to add, enabling parallel tool calls allows the agent to request that the create tool be called 10 times in one turn rather than one at a time. This allows the user to confirm creation of the items once, not 10 times, and reduces token usage. Parallel tool calls also reduce the amount of time the user must wait for their request to complete.</p>
</li>
<li><p>When something fails (e.g., you get a PL/SQL exception during a tool call), do not pass the Oracle error message back to the LLM. Instead, pass something meaningful like "the project team tool is not responding". This allows the LLM to fail gracefully and inform the end user.</p>
</li>
<li><p>Do not allow your agentic loop to run forever. Set a maximum number of iterations where you end the loop no matter what. Make this configurable so you can adjust it during testing.</p>
</li>
<li><p>Each LLM API call takes between 2 and 10 seconds to run. If the agent has to call the LLM several times during a request, the overall duration can add up quickly. You can influence this by playing with the model and the reasoning level (the higher the reasoning level, the more the model thinks and the longer it takes). Use the fastest model with the lowest reasoning level that still gives good results for your use case. You can also soften the impact by improving the user experience while users wait. As you will see in the demo below, we took the time to build a custom blocking spinner that is displayed while the agent is working.</p>
</li>
</ul>
<h1>Demo</h1>
<p>A picture is worth a thousand words, as they say. This short video shows a typical session with the Agent.</p>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/00728044-633f-4078-8c58-73080a0a4d8c.gif" alt="Demo of Agent for Project Management App" style="display:block;margin:0 auto" />

<ul>
<li><p>Projects are structured by sections and sub-sections.</p>
</li>
<li><p>The context area provides context for the user's prompt.</p>
</li>
<li><p>I did not show it in the demo, but the response includes links that let users open Questions, Risks, and Issues directly from the agent. This makes use of existing APEX pages.</p>
</li>
</ul>
<p>As you can see, the user can take a question through its full lifecycle without leaving the page. This demo only shows you a fraction of what is possible with just five tools. Some other sample prompts:</p>
<ul>
<li><p>Find questions related to VAT Tax</p>
<ul>
<li>The search tool uses vector search to find questions, risks, and issues related to VAT.</li>
</ul>
</li>
<li><p>Review the attached meeting transcript (copy-paste it into the Context field), extract all questions, risks, and issues, and organize them into subsection 20.</p>
<ul>
<li>This is pretty powerful. We used the LLM to analyze the meeting transcript and extract all questions, risks, and issues raised during the meeting. The LLM extracted them and then invoked the create tool multiple times to populate the database with questions, risks, and issues.</li>
</ul>
</li>
</ul>
<h2>Behind the Scenes</h2>
<p>Privileged users can enable diagnostics. Diagnostics show all of the records tracked during the conversation, including requests for tool calls and responses from tool calls. In the screenshot below, you can see the diagnostics for one turn from the demo video. The diagnostic records are identified with the brown '...' avatar.</p>
<img src="https://cdn.hashnode.com/uploads/covers/626b62127d5d27b992e4cf90/3a0c654e-be11-4b52-8da4-358e10940f20.png" alt="Screenshot showing diagnostics from the Agent" style="display:block;margin:0 auto" />

<ul>
<li><p>We submitted a request, "answer question 110662 with yes"</p>
</li>
<li><p>The model took the question along with the system prompt and broke out the answer "yes" from the request. It then looked at the provided tools and requested that we call the <code>qri_search</code> tool to find the question</p>
</li>
<li><p>We ran the tool (after checking the user had access), and returned the JSON result containing details of the questions we found</p>
</li>
<li><p>The LLM interpreted this tool response JSON, confirmed there is just one question, then requested we call the <code>update_qri</code> tool</p>
</li>
<li><p>The <code>update_qri</code> tool requires a human confirmation, so the Orchestrator saved the tool request from the model and stopped to allow the user to click the Confirm/Reject button.</p>
</li>
<li><p>After clicking Confirm, the Orchestrator called the LLM one last time with the result from calling the <code>update_qri</code> tool.</p>
</li>
<li><p>The LLM decided it didn't require any more tool calls, which ended the turn.</p>
</li>
</ul>
<h1>Conclusion</h1>
<p>Adding an AI agent to an existing APEX application is a practical way to introduce AI capabilities without rewriting your system. Most applications already have the hard parts in place: a data model, business APIs, validation logic, and security rules. An agent simply becomes another consumer of those APIs.</p>
<p>The key is to keep the architecture straightforward. Let the LLM interpret user intent and suggest actions, but keep control in your code. The orchestrator manages the loop, the dispatcher validates and executes tools, and your existing PL/SQL APIs enforce business rules and data integrity.</p>
<p>Start with a limited scope, a small set of well-defined tools, and strong instrumentation. Once the architecture is in place, you can expand the agent’s capabilities incrementally.</p>
<p>In our case, just five tools were enough to let users search, create, and update project questions, risks, and issues directly from a chat interface. The result was a simpler workflow for users and a new way to interact with the application without changing the underlying system.</p>
<p>For teams already building applications with Oracle APEX, agents are a natural extension of the platform. The important part is not the model, it is the architecture around it.</p>
]]></content:encoded></item><item><title><![CDATA[Avoiding the Vibe Coding Rabbit Hole]]></title><description><![CDATA[Introduction
A few weeks ago, I started building an APEX 2nd brain to practice Agentic AI in APEX and PL/SQL, and hopefully create a useful tool to supplement my aging brain.
I am writing this post while Codex is rebuilding my APEX 2nd brain Applicat...]]></description><link>https://blog.cloudnueva.com/avoiding-the-vibe-coding-rabbit-hole</link><guid isPermaLink="true">https://blog.cloudnueva.com/avoiding-the-vibe-coding-rabbit-hole</guid><category><![CDATA[orclapex]]></category><category><![CDATA[AI]]></category><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Sun, 15 Feb 2026 16:29:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771172772261/44195aa6-40e3-46b0-bd24-0478932a7f01.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>A few weeks ago, I started building an APEX 2nd brain to practice Agentic AI in APEX and PL/SQL, and hopefully create a useful tool to supplement my aging brain.</p>
<p>I am writing this post while Codex is rebuilding my APEX 2nd brain Application from scratch. This post is a cautionary tale about what happens when you go down the vibe coding rabbit hole.</p>
<h1 id="heading-the-rabbit-hole">The Rabbit Hole</h1>
<p>The first version of my 2nd brain APEX App included a simple text box on a single APEX page. When the user clicks the submit button, I pass a predefined prompt that provides instructions for filing the entry, along with the entry itself, to an LLM for classification. There was also an APEX Automation, which ingested my personal and work emails and calendar entries from Gmail and MS Office.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The goal here was to create an automated filing system, along with daily and weekly digests, to surface to-do items and ideas.</div>
</div>

<p>It worked OK, but I soon found the features limiting (especially with all the news about what people are achieving using <a target="_blank" href="https://openclaw.ai/">Open Claw</a>). For the record, I do not use Open Claw.</p>
<p>With the new <a target="_blank" href="https://openai.com/index/introducing-the-codex-app/">Codex App</a> and a prompt first strategy, I started making enhancements. It was going great… I would ask for feature after feature, and they would get built and work 90% of the time. After a couple of hours, I stepped back and looked at the actual code that had been written:</p>
<ul>
<li><p>I ended up with 800 lines of JavaScript in the main APEX page (that’s more JavaScript than I write in a year).</p>
</li>
<li><p>Instead of using my AI config tables, which store system prompts, tool calls, etc., the AI wrote the system prompts and tool calls JSON and hard-coded them into the code.</p>
</li>
<li><p>The quality of the data model had degraded over time as fields were added, removed, and repurposed, and tables were abandoned. Each new table or set of tables was very well thought out, but there was no consideration for tidying up the old tables.</p>
</li>
<li><p>The AI was overly cautious about dropping old code. By the end, I was left with several views and packages that were no longer used. This amounted to more than 2,000 lines of unused code.</p>
</li>
</ul>
<p>The other side effect was that the overall architecture had drifted and become overly complex and bloated. The issue isn’t the AI; it’s unbounded iteration without thought.</p>
<p>Essentially, it was the work of a competent and overly eager junior programmer. There were no egregious issues (other than not cleaning up old code), but it was not the way I would have done it.</p>
<h1 id="heading-stepping-back-amp-resetting">Stepping Back &amp; Resetting</h1>
<h2 id="heading-write-a-specification">Write a Specification</h2>
<p>I decided to take a step back and reassess what I actually wanted, and spent an hour writing a detailed specification. You can read the specification <a target="_blank" href="https://gist.github.com/jon-dixon/5c7b35b23e6ef17c0d698359293c40ba">here</a>.</p>
<p>This produced two benefits:</p>
<ol>
<li><p>It forced me to think about what features were important to me.</p>
</li>
<li><p>It provided the AI with much clearer guidance on what it was supposed to do. Instead of 10 disjointed prompts with feature requests, it had a single specification on which to build a solid architecture.</p>
</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">High-quality specs for AIs are the area I think we, as developers, can improve (and keep our jobs for a little longer). More on that <a target="_self" href="https://blog.cloudnueva.com/an-ai-shift-for-apex-developers">here</a>.</div>
</div>

<h2 id="heading-agentsmd">AGENTS.md</h2>
<p>I also updated my <a target="_blank" href="https://agents.md/">AGENTS.md</a> to include some additional instructions:</p>
<ul>
<li><p>Prefer PL/SQL over JavaScript. When JavaScript is necessary, prefer Dynamic Actions over Ajax Callbacks.</p>
</li>
<li><p>Utilize AI Configuration tables gen_ai* to specify new prompts and tools.</p>
</li>
<li><p>Tell me if I suggest changes that contravene APEX and PL/SQL best practices.</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Although it’s painful having <code>AGENTS.md</code> files littering your file system, it is important that you actively keep them up to date with the latest constraints and guardrails.</div>
</div>

<h1 id="heading-lessons-learned">Lessons Learned</h1>
<ul>
<li><p>Always write a spec first. However simple it is, writing it out first helps you organize your thoughts and provides valuable guidance and structure for the AI.</p>
</li>
<li><p>LLMs ❤️ JavaScript more than PL/SQL. Unless you instruct them otherwise, they will generate far too much unnecessary JavaScript. They also love Ajax Callbacks; it’s not that they don’t know about Dynamic Actions, they just prefer Ajax Callbacks.</p>
</li>
<li><p>AGENTS.md is a live document; every time the LLM does something you don’t like, tell it by updating AGENTS.md.</p>
</li>
<li><p>After implementing a new feature, always follow up with a prompt to have the LLM check for unused code. More importantly, ask it to follow the entry points to your app and suggest entire branches of unused code. Also, run a check for data model drift. <strong>Codex is very good at doing this; it just needs to be told to do it.</strong></p>
</li>
<li><p>When using plan mode in Codex or Claude (which I highly recommend), read the plan! This may sound obvious, but when I started out, I would just skim the plan and hit Go. Providing input after the plan is produced is often the last chance to direct the LLM once implementation begins. Adjustments made at this stage can save you hours later on.</p>
</li>
<li><p>If you start a thread with an LLM and get to around 5 turns, press pause ⏸️ to think. Ask yourself whether you are on the edge of the AI 🐇 🕳️❓ Will the next prompt really get me there, or should I start again with a better spec? The answer is usually the latter, but it is hard to step back!</p>
</li>
</ul>
<h1 id="heading-time-for-controversy">Time for Controversy</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">❓</div>
<div data-node-type="callout-text">Do I really need to ‘know’ the code I write?</div>
</div>

<p>With the major improvements made to coding in Claude Opus 4.5/4.6 and Codex 5.2/5.3, I have been asking myself whether I really need to ‘<strong>know’</strong> all the code I create. If AI generated it, then surely AI will be better at maintaining it than I am?</p>
<p>My answer (at least for now) is that I do need to understand the code I / the AI creates.</p>
<ul>
<li><p>I am responsible for the code, not the AI. It will be a dark day indeed when developers stop being responsible for the code they produce.</p>
</li>
<li><p>I still feel that my taste/instincts/intuition are better than the AI's. This is the main advantage humans have over AI (at least right now); we should make the most of it.</p>
</li>
<li><p>We are not close to finding all of the edge cases. Even for this personal project, I added three edge cases to my AGENTS.md and went back to my Spec a few times to guide the AI. We are still a long way off from a fire-and-forget approach to AI development.</p>
</li>
</ul>
<h1 id="heading-conclusion">Conclusion</h1>
<p>Vibe coding is a superpower, right up until it quietly turns into your architecture.</p>
<p>The problem wasn’t Codex. It was me letting a long thread become the design process. The AI will happily keep shipping “reasonable” changes forever, but it has no instinct for simplicity, no taste, and no discomfort about leaving dead code and abandoned tables behind.</p>
<p>The fix also wasn’t “use less AI.” It was <strong>put the AI back inside guardrails</strong>: a written spec, a clear APEX-first strategy (Dynamic Actions over callbacks, PL/SQL over page-level JavaScript), and an <a target="_blank" href="http://AGENTS.md">AGENTS.md</a> that I actually maintain. Once those constraints are in place, AI is great, not just at building features but at tracing entry points, finding dead branches, and calling out drift. It just needs to be told to do it.</p>
<p>And on the “do I need to know the code?” question: for now, yes. Maybe I don’t need to know every generated line, but I absolutely need to own the architecture, the data model, and the edge cases, because the day something breaks (or leaks), it’s my name on it, not the model’s.</p>
]]></content:encoded></item><item><title><![CDATA[Dynamic Post-Logout URLs in APEX]]></title><description><![CDATA[Introduction
When using single sign-on type authentication schemes like Social Sign-In, you need to define a Post-Logout URL in APEX so that the Authentication provider redirects your APEX App to a public page after it completes the logout. When you ...]]></description><link>https://blog.cloudnueva.com/dynamic-post-logout-urls-in-apex</link><guid isPermaLink="true">https://blog.cloudnueva.com/dynamic-post-logout-urls-in-apex</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 12 Feb 2026 12:52:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765763034810/b9995646-2c45-4cc0-9fdb-4fc73a7ebef3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>When using single sign-on type authentication schemes like Social Sign-In, you need to define a Post-Logout URL in APEX so that the Authentication provider redirects your APEX App to a public page after it completes the logout. When you deploy your App to TEST and PROD, the Post-Logout URL (Public Page) changes with your instance URL. This introduces a challenge: the Post-Logout URL setup in APEX does not easily support dynamic values, so you must manually update it after deploying your App.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">In this post, I will show you how to make the Post-Logout URL dynamic so you can change it as part of your CI/CD pipeline, or change it once per instance and never have to change it again.</div>
</div>

<h1 id="heading-background">Background</h1>
<p>The diagram below shows a typical logout flow for a Social Sign-In type Authentication Scheme. In my use case, I want the Authentication provider to redirect to a public page in my APEX App after the logout completes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765731692645/8710db64-4e38-4fd1-b788-c80fef8bb179.png" alt="Diagram showing the typical APEX Social Sign-On Logout Flow" class="image--center mx-auto" /></p>
<p>In the APEX Authentication Scheme, we can specify where we want APEX to go after logout is complete:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765731843985/32aa0b53-62ec-4a60-9f9b-2f9b24fc2261.png" alt="Authentication Scheme setup for Post-Logout URL" class="image--center mx-auto" /></p>
<p>You can specify either:</p>
<ul>
<li><p><strong>Home Page</strong> - Attempts to go to the home page after logout; because the session is invalid, it then redirects to the login page. This is not suitable for Social Sign-In because it will just trigger another login with the Authentication Provider.</p>
</li>
<li><p><strong>URL</strong> - You can specify a URL APEX should go to after the logout. Unfortunately, the Post-Logout URL field does not support APEX-style runtime substitution such as <code>f?p=&amp;APP_ID.:9999</code>. On the surface, the best you can do is enter a hard-coded URL, e.g., <code>https://example.com/ords/dev/logout-page</code>. When deploying to TEST or PROD, we must change this URL manually (there is no API).</p>
</li>
</ul>
<h1 id="heading-the-solution">The Solution</h1>
<p>The best workaround I have come up with is as follows.</p>
<h2 id="heading-1-create-an-application-item">1 - Create an Application Item</h2>
<p>Create an Application Item to store the Post-Logout URL. Here is a screenshot of the Application Item, which, for my example, I have called <code>AI_POST_LOGOUT_URL</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765732318966/7a915a63-5a5d-4fe8-8033-f850ce4ebb2d.png" alt="AI_POST_LOGOUT_URL Application Item" class="image--center mx-auto" /></p>
<h2 id="heading-2-create-an-application-setting">2 - Create an Application Setting</h2>
<p>Create an Application Setting to store the Post-Logout URL. Here is a screenshot of an Application Setting named <code>POST_LOGOUT_URL</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765732418165/8c9b88ec-9270-4ccd-9ad1-7d07c88ae9e7.png" alt="APEX Application Setting to store the Post-Logout URL" class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Be sure to set the ‘On Upgrade Keep Value’ option to Yes. This will ensure that when you deploy your App from DEV &gt; TEST &gt; PROD, the current value will not get overridden during the deployment.</div>
</div>

<p>This means the first time you deploy your App to a new instance, you will need to change the URL to the appropriate URL for the target instance. Moving forward (as long as you have ‘On Upgrade Keep Value’ set to Yes), you will no longer have to change it.</p>
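<p>If you want to set the value as part of a CI/CD pipeline instead of editing it by hand, you can script the update. The snippet below is only a sketch of the idea: the app ID, page number, username, and URL are placeholder values, and it assumes the script runs as the application’s parsing schema with an APEX session created via <code>apex_session.create_session</code>.</p>
<pre><code class="lang-sql">BEGIN
  -- Create an APEX session so apex_app_setting resolves against the right application.
  -- 100, 1, and 'DEPLOY_BOT' are placeholders for your app, page, and deployment user.
  apex_session.create_session
   (p_app_id   =&gt; 100,
    p_page_id  =&gt; 1,
    p_username =&gt; 'DEPLOY_BOT');

  -- Point the setting at the target instance's public logout page.
  apex_app_setting.set_value
   (p_name  =&gt; 'POST_LOGOUT_URL',
    p_value =&gt; 'https://test.example.com/ords/r/my_workspace/my_app/logout-page');

  COMMIT;
END;
/
</code></pre>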
<h2 id="heading-3-populate-the-application-item-for-new-sessions">3 - Populate the Application Item for New Sessions</h2>
<p>You will need to set the application item <code>AI_POST_LOGOUT_URL</code> to the value of the Application setting when creating a new session. The easiest place to do this is in an ‘After Authentication’ Application Process. In the screenshot below, I am calling apex_session_state.set_value directly from the ‘After Authentication’ Process of my App.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765732791460/b4b5f71c-fe29-48e5-81f7-cd2cf3a1f219.png" alt="APEX After Authentication Application Process to set Application Item" class="image--center mx-auto" /></p>
<pre><code class="lang-sql"><span class="hljs-keyword">BEGIN</span>
  <span class="hljs-comment">-- Copy the environment-specific setting into session state once per login</span>
  apex_session_state.set_value 
    (p_item  =&gt; <span class="hljs-string">'AI_POST_LOGOUT_URL'</span>, 
     p_value =&gt; apex_app_setting.get_value(<span class="hljs-string">'POST_LOGOUT_URL'</span>));
<span class="hljs-keyword">END</span>;
</code></pre>
<h2 id="heading-4-set-the-post-logout-url-to-the-value-of-the-application-item">4 - Set the Post-Logout URL to the Value of the Application Item</h2>
<p>Finally, we must set the Post-Logout URL to the value of the Application Item <code>AI_POST_LOGOUT_URL</code>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765732929336/a734c6a4-03ed-4df5-b659-b8af966f4615.png" alt="Set the Post-Logout URL to the value of the Application Item AI_POST_LOGOUT_URL" class="image--center mx-auto" /></p>
<h2 id="heading-alternatives">Alternatives</h2>
<p>Of course, you do not have to use an APEX Application Setting to store the URL. You could store the URLs in your own table keyed on the instance SID/Service Name, but I think storing them in an APEX Application Setting is more compact and more in keeping with standard APEX.</p>
<p>Because the Post-Logout URL ultimately controls the redirect, you should ensure it is fully trusted and not user-modifiable. Application Settings are ideal here because they are developer-controlled and not influenced by runtime user input.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>This pattern has proven reliable and eliminates a common manual deployment step when using Social Sign-In in APEX.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💬</div>
<div data-node-type="callout-text">I would love to hear if you have a different way to do this.</div>
</div>]]></content:encoded></item><item><title><![CDATA[An AI Shift for APEX & PL/SQL Developers]]></title><description><![CDATA[Introduction
I have been using AI to help me build APEX Apps for well over a year now, and I’ve shared a lot about the tools and workflows I use. But something happened this week that felt like a genuine shift.
Usually, I use AI for things like autoc...]]></description><link>https://blog.cloudnueva.com/an-ai-shift-for-apex-developers</link><guid isPermaLink="true">https://blog.cloudnueva.com/an-ai-shift-for-apex-developers</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Sat, 31 Jan 2026 14:19:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769869089674/b0d74de9-bf3f-470d-ba39-5369f6eed11e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>I have been using AI to help me build APEX Apps for well over a year now, and I’ve shared a lot about the tools and workflows I use. But something happened this week that felt like a genuine shift.</p>
<p>Usually, I use AI for things like autocomplete, understanding a codebase, and code reviews. I even use it for creating new procedures and functions, but the results vary in quality. This week, on two separate occasions, I had the AI generate hundreds of lines of PL/SQL, resulting in production-ready code (after review and testing). It included validations, handled edge cases I hadn't explicitly listed, and followed my approach perfectly. In this post, I want to break down the specific factors that made these efforts successful, even as others failed.</p>
<h1 id="heading-metadata-the-ais-rosetta-stone">Metadata: The AI’s Rosetta Stone</h1>
<p>The first key to this success wasn't the prompt; it was the database <strong>schema</strong>.</p>
<ul>
<li><p>Clear, unambiguous <strong>column names</strong>.</p>
</li>
<li><p>Tables and columns have clear, <strong>plain English comments</strong>.</p>
</li>
<li><p><strong>Foreign Key Constraints</strong> that clearly define the table relationships.</p>
</li>
<li><p><strong>Check Constraints</strong> to define valid values for columns, where possible.</p>
</li>
<li><p><strong>NOT NULL Constraints</strong> to identify which columns must be populated.</p>
</li>
<li><p><strong>Unique constraints / natural keys.</strong></p>
</li>
</ul>
<p>Because I provided the AI with the DDL (including this extra metadata), it didn't just see a table; it inferred far more of the business rules with fewer guesses. It knew that a column named <code>STATUS_CODE</code> wasn't just a string, but a state-machine driver. When the metadata is clean, the model makes fewer incorrect assumptions.</p>
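<p>As a purely illustrative example (the table and column names below are invented, and <code>demo_customers</code> is assumed to already exist), DDL that carries this kind of metadata might look like this:</p>
<pre><code class="lang-sql">-- Illustrative only: a table whose DDL encodes the business rules described above.
CREATE TABLE demo_orders (
  order_id     NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  order_number VARCHAR2(30) NOT NULL
               CONSTRAINT demo_orders_number_uk UNIQUE,                    -- natural key
  customer_id  NUMBER NOT NULL
               CONSTRAINT demo_orders_customer_fk REFERENCES demo_customers (customer_id),
  status_code  VARCHAR2(10) DEFAULT 'DRAFT' NOT NULL
               CONSTRAINT demo_orders_status_ck
               CHECK (status_code IN ('DRAFT','SUBMITTED','APPROVED','CANCELLED')),
  order_total  NUMBER(12,2) NOT NULL
);

COMMENT ON TABLE  demo_orders             IS 'Customer orders raised from the ordering app.';
COMMENT ON COLUMN demo_orders.status_code IS 'Order lifecycle state: DRAFT, SUBMITTED, APPROVED, or CANCELLED.';
</code></pre>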
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Let's face it, as APEX developers, this is something we should be doing anyway.</div>
</div>

<h1 id="heading-the-spec-is-the-work">The Spec is the Work</h1>
<blockquote>
<p><em>If I spend all this time writing a detailed spec for the AI, I could have just written the code myself.</em></p>
</blockquote>
<p>I used to think this too, until I realized I was wrong on two counts:</p>
<ol>
<li><p>In most cases, I have to write a spec anyway, so the client can review it and I can be sure I am on the right path.</p>
</li>
<li><p>A well-thought-out spec can make the difference between average and near-perfect results when an LLM is generating code.</p>
</li>
</ol>
<p>I did change the way I write my specs. I now write them in Markdown (using <a target="_blank" href="https://obsidian.md/">Obsidian</a>) and export to Word using an Obsidian plugin that runs Pandoc if the client needs a Word copy. I also annotate the spec with hints for the AI, such as table names and references to procedures that perform similar logic. I make these annotations using HTML comments, which Pandoc excludes when exporting to Word.</p>
<p>Here is an example excerpt using HTML comments for annotations:</p>
<pre><code class="lang-markdown">The App should then create a child RFQ and RFQ lines for each supplier.
<span class="xml"><span class="hljs-comment">&lt;!-- AI &gt;</span></span> Use tables: SPTL<span class="hljs-emphasis">_RFQ_</span>SPLR<span class="hljs-emphasis">_HEADER and SPTL_</span>RFQ<span class="hljs-emphasis">_SPLR_</span>LINE --&gt;
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Writing specs isn’t extra work; it is the work you should have been doing anyway. It’s just that the audience has changed, and you need to adapt to it.</div>
</div>

<h1 id="heading-leveraging-existing-patterns">Leveraging Existing Patterns</h1>
<p>In my case, AI wasn't starting from a blank slate. I was adding code to an existing package, and I also had other packages in the repository that the AI could reference for context.</p>
<p>In one instance, I had a specific pattern established for processing file uploads:</p>
<ol>
<li><p>Upload records from Excel into an <code>APEX_COLLECTION</code> using <code>APEX_DATA_PARSER</code> (a rough sketch of this step follows the list).</p>
</li>
<li><p>Run validations to check the uploaded records for errors.</p>
</li>
<li><p>Allow the user to review the validated records before final processing.</p>
</li>
<li><p>Perform the final import into the base tables.</p>
</li>
</ol>
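<p>For context, here is a rough sketch of step 1 only (not the actual project code). The page item <code>P10_FILE</code>, the collection name, and the three columns are placeholders for whatever your upload page uses.</p>
<pre><code class="lang-sql">DECLARE
  l_file apex_application_temp_files%ROWTYPE;
BEGIN
  -- Fetch the uploaded file from the APEX temporary files view.
  SELECT *
  INTO   l_file
  FROM   apex_application_temp_files
  WHERE  name = :P10_FILE;

  apex_collection.create_or_truncate_collection (p_collection_name =&gt; 'UPLOAD_ROWS');

  -- Parse the spreadsheet and stage each row in the collection for validation.
  FOR rec IN (SELECT col001, col002, col003
              FROM   TABLE(apex_data_parser.parse
                            (p_content   =&gt; l_file.blob_content,
                             p_file_name =&gt; l_file.filename,
                             p_skip_rows =&gt; 1))) LOOP
    apex_collection.add_member
     (p_collection_name =&gt; 'UPLOAD_ROWS',
      p_c001            =&gt; rec.col001,
      p_c002            =&gt; rec.col002,
      p_c003            =&gt; rec.col003);
  END LOOP;
END;
</code></pre>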
<p>I pointed the AI to two existing procedures that followed this pattern and said, "Follow the pattern in procedures X and Y, but apply the logic from the specification below…".</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Because it had a "template" of my coding style, the AI-generated code felt like I had written it myself.</div>
</div>

<h1 id="heading-the-power-of-agentsmd">The Power of AGENTS.md</h1>
<p>The final piece of the puzzle was the use of <a target="_blank" href="http://agents.md"><code>AGENTS.md</code></a>. <code>AGENTS.md</code> is a file containing instructions that many coding agents, such as Codex, Cursor, and Claude, pass to the LLM along with your prompt. My <code>AGENTS.md</code> files are constantly evolving, but they typically include instructions like:</p>
<ul>
<li><p>Use APEX PL/SQL APIs (e.g., <code>APEX_STRING</code>, <code>APEX_JSON</code>, and <code>APEX_DEBUG</code>) over custom logic where possible.</p>
</li>
<li><p>Use set-based logic where possible instead of FOR loops.</p>
</li>
<li><p>Avoid Dynamic SQL wherever possible; if unavoidable, always use bind variables and validate identifiers (e.g., DBMS_ASSERT) to reduce SQL injection risk.</p>
</li>
<li><p>The folder structure of the codebase.</p>
</li>
<li><p>Prefer <code>%TYPE</code> and <code>%ROWTYPE</code></p>
</li>
<li><p>No hard-coded schema names.</p>
</li>
<li><p>Always include <code>APEX_DEBUG</code> calls in exception handlers and major logic branches.</p>
</li>
<li><p>Code formatting rules.</p>
</li>
<li><p>etc.</p>
</li>
</ul>
<p>Without this file, the AI defaults to "generic" PL/SQL. With it, the AI becomes an expert in my specific preferences and standards.</p>
<h1 id="heading-warning">Warning!</h1>
<p>As I’ve said before, AI is a tool, not a crutch. The code you build is your responsibility (not the AI’s). For now, at least!</p>
<ul>
<li><p><strong>Understand the Output:</strong> Before committing the code, you should understand what it does and that it is doing what it is supposed to do.</p>
</li>
<li><p><strong>Security is Your Job:</strong> I still manually check security settings at the end of every project. AI can find vulnerabilities, but it shouldn't be the only one looking.</p>
</li>
<li><p><strong>Test, test, and test again</strong>: AI does not replace testing, though it can help with it.</p>
</li>
</ul>
<h1 id="heading-the-ai-generated-code-checklist">The AI-Generated Code Checklist</h1>
<ul>
<li><p>Clean DDL + constraints + comments included</p>
</li>
<li><p>Markdown spec with rules + edge cases</p>
</li>
<li><p>Reference 1–2 existing “golden” procedures</p>
</li>
<li><p>Repo instructions (AGENTS.md)</p>
</li>
<li><p>Run tests + security review + performance sanity check</p>
</li>
</ul>
<h1 id="heading-conclusion">Conclusion</h1>
<p>This week proved that we are moving toward a world where the APEX Developer acts more like a conductor than a member of the orchestra. I think I am OK with this, but it does take some getting used to.</p>
<p>If your database design is solid, your patterns are consistent, and your requirements are clear, the actual coding becomes a commodity. The AI didn't just save me time; it allowed me to stay in the "flow state" of designing the solution rather than getting bogged down in the syntax of a 300-line package body.</p>
<p>If you haven't reached this inflection point yet, stop focusing solely on the "prompt" and start considering the <strong>context</strong> you provide to the AI.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🚀</div>
<div data-node-type="callout-text">When <strong>APEXlang</strong> lands, the context story will matter even more, because the unit of generation will shift from PL/SQL functions and procedures to larger app-level artifacts. Either way, the lesson holds: invest in metadata, patterns, and specs, and the AI stops guessing.</div>
</div>

<p>Exciting times ahead!</p>
]]></content:encoded></item><item><title><![CDATA[Understanding APEX Open Door Credentials]]></title><description><![CDATA[Introduction
If you, like me, have wondered why you would ever use Open Door Credentials, then this post is for you.
Open Door Credentials are a powerful tool when used for the right reasons and in the correct instance. They make testing and troubles...]]></description><link>https://blog.cloudnueva.com/understanding-apex-open-door-credentials</link><guid isPermaLink="true">https://blog.cloudnueva.com/understanding-apex-open-door-credentials</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 08 Jan 2026 15:45:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765687779151/ba31d2a3-aa72-4e2f-9f97-b05468c10cc7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>If you, like me, have wondered why you would ever use Open Door Credentials, then this post is for you.</p>
<p>Open Door Credentials are a powerful tool when used for the right reasons and in the correct instance. They make testing and troubleshooting easier, especially for applications using Authentication Schemes like Social Sign-On, but they require discipline and restraint.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">⚠</div>
<div data-node-type="callout-text">Open Door Credentials should only be used as a temporary tool for testing and troubleshooting. Once you have completed your testing, they should be removed and should never (ever) be deployed to production.</div>
</div>

<h1 id="heading-what-are-open-door-credentials">What are Open Door Credentials?</h1>
<p>Open Door Credentials allow you to sign on to an APEX application using a username and a password. Nothing unusual there. The difference is that you can sign in using any username and password, even if they do not exist in your application. You can enter Username: ABC and Password: XYZ, and APEX will still let you in. This applies only to Authentication. Authorization, application logic, and security checks still apply and should be used to control what the authenticated user can actually do.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">😱</div>
<div data-node-type="callout-text">Although this sounds super scary, it does have its uses in certain limited situations.</div>
</div>

<p>Once the user is signed in with Open Door Credentials, as far as your APEX App is concerned, there is nothing different:</p>
<ul>
<li><p>Once authenticated, the username behaves like any other value in <code>:APP_USER</code>. This means all authorization schemes, row-level security logic, VPD policies, and application-specific checks will execute exactly as they would for a real user, assuming you validate the username correctly.</p>
</li>
<li><p>Any code you have in the Login Processing section of the Authentication Scheme (Pre-Authentication Procedure Name, Post-Authentication Procedure Name) is still executed.</p>
</li>
</ul>
<h1 id="heading-why-are-open-door-credentials-useful">Why are Open Door Credentials Useful?</h1>
<p>Open Door Credentials are particularly useful during the testing phase of a project and for troubleshooting issues reported by a specific user. They allow you to sign in as that user and experience exactly what they are experiencing.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">I find Open Door Credentials most helpful for Applications that rely on Social Sign-on (e.g., Okta, Active Directory, Google) for authentication because these apps do not have a login page, and you cannot spoof other users.</div>
</div>

<p>An example of where Open Door Credentials are useful is when testing APEX Workflow. With APEX Workflow, you often need to log in as multiple users to test the process end-to-end. Getting all the participants on a call to test is impractical, and having their passwords is insecure. Open Door Credentials allow you to test the end-to-end process yourself.</p>
<h1 id="heading-words-of-caution">Words of Caution</h1>
<ul>
<li><p>It may seem obvious, but I will say it anyway. <strong>Never use Open Door Credentials in Production.</strong></p>
</li>
<li><p>Open Door Credentials should not be used (even in DEV) for applications that store personally identifiable information or any other data that requires security. That is, unless you mask this data when you clone from PROD to DEV/TEST.</p>
</li>
<li><p>If your DEV APEX App is open to the internet, then you should either add an IP restriction to your load balancer or restrict access to the APEX App you are testing by IP Address while testing with Open Door Credentials.</p>
</li>
<li><p>Validate the username. Open Door Credentials will accept any username and password, so you should validate that the username is a valid user of your application in an Application-Level Authorization Scheme (hopefully, you were already doing this). A minimal sketch follows this list.</p>
</li>
</ul>
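<p>To make the last point concrete, here is a minimal sketch of a function you could reference from an application-level Authorization Scheme (for example, a scheme of type PL/SQL Function Returning Boolean with a body of <code>RETURN is_valid_app_user;</code>). The <code>my_app_users</code> table and its columns are assumptions; substitute your own user store.</p>
<pre><code class="lang-sql">-- Assumption: MY_APP_USERS is your own table of users allowed into the application.
CREATE OR REPLACE FUNCTION is_valid_app_user RETURN BOOLEAN IS
  l_count PLS_INTEGER;
BEGIN
  SELECT COUNT(*)
  INTO   l_count
  FROM   my_app_users
  WHERE  username    = v('APP_USER')
  AND    active_flag = 'Y';

  RETURN l_count = 1;
END is_valid_app_user;
/
</code></pre>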
<h1 id="heading-how-to-create-open-door-credentials">How to Create Open Door Credentials</h1>
<p>Navigate to Shared Components &gt; Authentication Schemes</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765666731888/6a5e0067-68f3-4c1a-878a-b08767ee4862.png" alt="Navigate to Shared Components &gt; Authentication Schemes" class="image--center mx-auto" /></p>
<p>Click Create and select ‘Based on a pre-configured scheme from the gallery’.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765666767137/6a0063c0-c546-4410-86bb-25680e2ab586.png" alt="Select what the new Authentication Scheme is based on." class="image--center mx-auto" /></p>
<p>Select ‘Scheme Type’ as ‘Open Door Credentials’, and click ‘Create Authentication Scheme’.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765666991250/cf3f6e74-c555-44c5-adcf-a7240a4c60c4.png" alt="Select Open Door Credentials  as the Scheme Type" class="image--center mx-auto" /></p>
<p>Add your usual Pre and Post-Authentication logic:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765667051754/260d14fc-3edf-41b2-bb84-21f139847ca4.png" alt="Add Pre and Post-Authentication logic." class="image--center mx-auto" /></p>
<h2 id="heading-logging-in">Logging In</h2>
<p>When you make the new Authentication Scheme the current scheme and attempt to log in, you will be presented with an APEX-generated login page like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765686383706/714810ee-850b-489e-b09a-9326c0c4f97f.png" alt="APEX Generated Login Page" class="image--center mx-auto" /></p>
<p><strong>Note</strong>: When using Open Door Credentials, the password value is ignored by the Authentication Scheme. Any password will be accepted.</p>
<p>Interestingly, some environments, e.g., <a target="_blank" href="https://oracleapex.com/">https://oracleapex.com/</a>, only show the username:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765686519950/b55c4b92-f693-4dcd-b02e-bf13e9a8e69b.png" alt="Alternate APEX Generated Login Page" class="image--center mx-auto" /></p>
<h2 id="heading-deploying-the-app-from-dev-to-prod">Deploying the App from DEV to PROD</h2>
<p>Unfortunately, Authentication Schemes do not currently have Build Options. With a build option, you could set it so that APEX automatically excludes the Authentication Scheme whenever you export it for deployment to another instance.</p>
<p>Not having Build Options means you must manually delete the Open Door Credential Authentication Scheme before you deploy to PROD. There is an <a target="_blank" href="https://apex.oracle.com/ideas/FR-2152">idea in the APEX Ideas App</a> to add Build Options to Authentication Schemes. Even though it is flagged as on the Roadmap, it is over 4 years old, so I encourage you to vote for it.</p>
<h1 id="heading-alternatives-to-open-door-credentials">Alternatives to Open Door Credentials</h1>
<p>Another option is to use the <strong>Oracle APEX Accounts</strong> Authentication Scheme and create dedicated test users. This approach uses real credentials, but it requires you to build a login page and take on ongoing user management, which may be impractical for short-lived testing scenarios.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>Open Door Credentials are a powerful tool when used for the right reasons. They make testing and troubleshooting simpler, especially for applications using external authentication, but they require discipline and restraint. Use them responsibly, lock them down properly, and remove them before deploying to production.</p>
]]></content:encoded></item><item><title><![CDATA[Build Dynamic Excel Upload Templates with APEX_DATA_EXPORT]]></title><description><![CDATA[Introduction
I have lost count of how many times I have developed an Excel Upload using APEX_DATA_PARSER. It provides users with a convenient way to mass-upload or update data using a tool they are familiar with. I usually include a link to a static ...]]></description><link>https://blog.cloudnueva.com/dynamic-excel-upload-templates</link><guid isPermaLink="true">https://blog.cloudnueva.com/dynamic-excel-upload-templates</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 18 Dec 2025 13:18:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763306636657/45d2007d-9c87-4d11-971e-f3b26498be83.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>I have lost count of how many times I have developed an Excel Upload using <a target="_blank" href="https://blog.cloudnueva.com/apexdataparser"><code>APEX_DATA_PARSER</code></a>. It provides users with a convenient way to mass-upload or update data using a tool they are familiar with. I usually include a link to a static Excel Template File that users can download to understand the structure they need to include in their upload.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">In this post, I will show you how to dynamically generate Excel Upload Template Files, including sample data and custom instructions.</div>
</div>

<h1 id="heading-use-case">Use Case</h1>
<p>In a recent project, I was tasked with building a Hierarchy Management Application. This involved loading nodes (and their attributes), then loading hierarchies composed of those nodes. I’ll be focusing on the node (and attribute) upload.</p>
<p>The Nodes APEX UI looks like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763217245839/82576296-016b-43b5-9796-d82f478a90ed.png" alt="APEX Page Showing Nodes Page" class="image--center mx-auto" /></p>
<ul>
<li><p>A Segment Type is a grouping of like nodes.</p>
</li>
<li><p>The Bonus Eligible column is a custom node attribute. Users can specify up to 20 custom attributes for each Segment Type.</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Given that the attributes differ across Segment Types, it is not realistic to have separate Static Excel templates for each Segment Type.</div>
</div>

<p>👉 Enter <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_DATA_EXPORT.html">apex_data_export</a>.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💁</div>
<div data-node-type="callout-text">The apex_data_export package allows you to export data from Oracle APEX. It supports several file types, including PDF, XLSX, HTML, CSV, XML, and JSON.</div>
</div>

<h1 id="heading-goal">Goal</h1>
<p>The goal is to download an Excel file containing the existing rows and columns for a Segment Type, make updates, and re-import it to apply those updates.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763236621617/92247494-0c08-40d3-b85c-1ed24a536a96.png" alt="Example Excel Template Generated by APEX_DATA_EXPORT" class="image--center mx-auto" /></p>
<h1 id="heading-approach">Approach</h1>
<h2 id="heading-build-sql">Build SQL</h2>
<p>My approach was first to build a function to generate dynamic SQL for both the APEX page shown above and for <code>apex_data_export</code> to use. The function below is an approximation intended to illustrate the approach (it is not the actual code).</p>
<pre><code class="lang-sql">FUNCTION get_node_sql 
  (p_segment_type_id IN apx_rdm_segment_type.segment_type_id%TYPE) RETURN CLOB IS

  CURSOR cr_segment_attributes 
   (cp_segment_type_id IN apx_rdm_segment_type.segment_type_id%TYPE) IS
    <span class="hljs-keyword">SELECT</span> attr.attribute_code
    ,      attr.data_type_code
    ,      attr.attribute_name <span class="hljs-keyword">AS</span> display_value
    <span class="hljs-keyword">FROM</span>   apx_rdm_segment_type_attr sta
    ,      apx_rdm_attribute         <span class="hljs-keyword">attr</span>
    <span class="hljs-keyword">WHERE</span>  attr.attribute_id   = sta.attribute_id
    <span class="hljs-keyword">AND</span>    sta.segment_type_id = cp_segment_type_id
    <span class="hljs-keyword">ORDER</span>  <span class="hljs-keyword">BY</span> sta.sort_order;

  l_sql               CLOB;
  l_col_count         PLS_INTEGER := 0;
  lc_lf               CONSTANT VARCHAR2(1) := CHR(10);
<span class="hljs-keyword">BEGIN</span>

  <span class="hljs-comment">-- Build the base columns for the hierarchy node.</span>
  l_sql := <span class="hljs-string">'SELECT hnode.node_code, hnode.node_name, hnode.system_name, hnode.postable_flag, hnode.active_flag'</span> || lc_lf;

  <span class="hljs-comment">-- Loop through custom attributes and Append a MAX(...) expression per </span>
  <span class="hljs-comment">--   attribute assigned to the segment type.</span>
  FOR rec IN cr_segment_attributes (cp_segment_type_id =&gt; p_segment_type_id) LOOP
    l_col_count := l_col_count + 1;
    EXIT WHEN l_col_count &gt; 20;
      l_sql := l_sql
        || ',      MAX(CASE WHEN attrval.attribute_code = '''
        || <span class="hljs-keyword">REPLACE</span>(rec.attribute_code, <span class="hljs-string">''''</span>, <span class="hljs-string">''''''</span>)
        || <span class="hljs-string">''' THEN attrval.attr_value_varchar END) AS "'</span>
        || <span class="hljs-keyword">REPLACE</span>(rec.attribute_code, <span class="hljs-string">'"'</span>, <span class="hljs-string">'""'</span>)
        || <span class="hljs-string">'"'</span> || lc_lf;
  <span class="hljs-keyword">END</span> <span class="hljs-keyword">LOOP</span>;

  <span class="hljs-comment">-- Add FROM and WHERE clauses.</span>
  l_sql := l_sql ||
  'FROM   apx_rdm_hierarchy_node_svw hnode
  LEFT JOIN apx_rdm_node_attribute_value_vw attrval 
    ON     attrval.node_id = hnode.node_id
    WHERE hnode.segment_type_id = :SEGMENT_TYPE_ID' || lc_lf;

  <span class="hljs-comment">-- Add Group By Clause.</span>
  <span class="hljs-comment">-- Attribute values are aggregated with MAX so they do not belong in the GROUP BY.</span>
  l_sql := l_sql || 'GROUP BY hnode.node_code, hnode.node_name, hnode.system_name, hnode.postable_flag, hnode.active_flag';

  RETURN l_sql;

<span class="hljs-keyword">END</span> get_node_sql;
</code></pre>
<h2 id="heading-fetch-sql-format-output-generate-amp-store-excel">Fetch SQL, Format Output, Generate &amp; Store Excel</h2>
<p>Next, write a procedure to generate the Excel Upload Template File. Again, the code below is an extract of the highlights from the actual code and is not complete.</p>
<pre><code class="lang-sql">PROCEDURE generate_node_upload_template
 (p_segment_type_id   IN apx_rdm_segment_type.segment_type_id%TYPE,
  p_segment_type_code IN apx_rdm_segment_type.segment_type_code%TYPE) IS

  <span class="hljs-comment">-- apex_exec and apex_data_export types.</span>
  l_xlsx_context        apex_exec.t_context;
  l_sql_params          apex_exec.t_parameters;
  l_xlsx_export         apex_data_export.t_export;
  lt_columns            apex_data_export.t_columns;
  l_print_config        apex_data_export.t_print_config;
  l_sql                 CLOB;

<span class="hljs-keyword">BEGIN</span>

  <span class="hljs-comment">-- Generate the SQL to retrieve the nodes and their attributes.</span>
  l_sql := get_node_sql (p_segment_type_id =&gt; p_segment_type_id);

  <span class="hljs-comment">-- Add Static Colummn Definitions and Headings.</span>
  <span class="hljs-comment">-- Note the use of p_is_frozen to the first three columns. This will freeze </span>
  <span class="hljs-comment">--  these three columns in the Excel output. </span>
  <span class="hljs-comment">-- The frozen columns must be the first contiguous columns.</span>
  <span class="hljs-comment">-- p_heading is the value used in the output Excel.</span>
  apex_data_export.add_column
   (p_columns =&gt; lt_columns,
    p_name    =&gt; 'NODE_CODE',
    p_heading =&gt; 'Node Code',
    p_is_frozen =&gt; TRUE);
  apex_data_export.add_column
   (p_columns =&gt; lt_columns,
    p_name    =&gt; 'NODE_NAME',
    p_heading =&gt; 'Node Name',
    p_is_frozen =&gt; TRUE);
  apex_data_export.add_column
   (p_columns =&gt; lt_columns,
    p_name    =&gt; 'SYSTEM_NAME',
    p_heading =&gt; 'Source System Name',
    p_is_frozen =&gt; TRUE);
  apex_data_export.add_column
   (p_columns =&gt; lt_columns,
    p_name    =&gt; 'POSTABLE_FLAG',
    p_heading =&gt; 'Postable Flag');
  apex_data_export.add_column
   (p_columns =&gt; lt_columns,
    p_name    =&gt; 'ACTIVE_FLAG',
    p_heading =&gt; 'Active Flag');

  <span class="hljs-comment">-- Loop through attributes for the Segment Type adding dynamic attribute columns.</span>
  FOR lr_segment_attrs IN cr_segment_attributes (cp_segment_type_id =&gt; p_segment_type_id) LOOP
    apex_data_export.add_column
     (p_columns =&gt; lt_columns,
      p_name    =&gt; lr_segment_attrs.attribute_code,
      p_heading =&gt; lr_segment_attrs.display_value);
  <span class="hljs-keyword">END</span> <span class="hljs-keyword">LOOP</span>;

  <span class="hljs-comment">-- Add SQL Parameters / Bind Variable Values</span>
  apex_exec.add_parameter(l_sql_params, 'SEGMENT_TYPE_ID', p_segment_type_id);

  <span class="hljs-comment">-- Add Branding Colors to the Heading and make Heading Font Size Larger.</span>
  l_print_config := apex_data_export.get_print_config
                     (p_header_bg_color   =&gt; '<span class="hljs-comment">#020381',</span>
                      p_header_font_color =&gt; '<span class="hljs-comment">#FFFFFF',</span>
                      p_header_font_size  =&gt; 11);

  <span class="hljs-comment">-- Open the Query Context, bind the parameters and execute the query.</span>
  l_xlsx_context := apex_exec.open_query_context
                     (p_location       =&gt; apex_exec.c_location_local_db,
                      p_sql_parameters =&gt; l_sql_params,
                      p_sql_query      =&gt; l_sql);

  <span class="hljs-comment">-- Export to XLSX format</span>
  l_xlsx_export := apex_data_export.export
                     (p_context           =&gt; l_xlsx_context,
                      <span class="hljs-comment">-- ⬇️ Determines the Export File Format</span>
                      p_format            =&gt; apex_data_export.c_format_xlsx,
                      <span class="hljs-comment">-- ⬇️ Defines the Columns and Headings in the Export</span>
                      p_columns           =&gt; lt_columns,
                      <span class="hljs-comment">-- ⬇️ Used for the Excel Tab Name</span>
                      p_page_header       =&gt; p_segment_type_code,  
                      <span class="hljs-comment">-- ⬇️ Text Appears at the top of the Excel Sheet</span>
                      p_supplemental_text =&gt; 'This template is to be used for uploading Nodes into the [' || 
                                              p_segment_type_code || '] Segment Type. ',  
                      <span class="hljs-comment">-- ⬇️ Name of the Exported File</span>
                      p_file_name         =&gt; 'Node_Upload_Template',
                      <span class="hljs-comment">-- ⬇️ Print Configuration to use for the Exported File</span>
                      p_print_config      =&gt; l_print_config);
  apex_exec.close(l_xlsx_context);

  <span class="hljs-comment">-- Store the Exported File in an APEX Collection for Later Use.</span>
  apex_collection.create_or_truncate_collection(p_collection_name =&gt; 'NODES_TEMPLATE_COLLN');
  apex_collection.add_member
   (p_collection_name =&gt; 'NODES_TEMPLATE_COLLN',
    p_blob001         =&gt; l_xlsx_export.content_blob,
    p_c001            =&gt; l_xlsx_export.file_name);
<span class="hljs-keyword">END</span> generate_node_upload_template;
</code></pre>
<ul>
<li><p>You don’t have to use <code>apex_data_export.add_column</code>. If you do not, then APEX will use the column names (or aliases) from your SQL for the column headers.</p>
</li>
<li><p>The <code>p_is_frozen</code> parameter of <code>apex_data_export.add_column</code> has to be applied to each of the columns you want frozen.</p>
</li>
<li><p>For my use case, I call the generate procedure above from a Dynamic Action, temporarily store the Excel file in a Collection, and then download it using the Download Dynamic Action (check out this <a target="_blank" href="https://lmoreaux.hashnode.dev/how-to-easily-download-files-in-oracle-apex-241#heading-new-download-dynamic-action">post</a> from <a class="user-mention" href="https://hashnode.com/@lmoreaux">Louis Moreaux</a> for more).</p>
</li>
</ul>
<h1 id="heading-more-on-apexdataexport">More on apex_data_export</h1>
<p>In this post, I focused on a specific use case and some specific features of apex_data_export. In this section, I will cover additional key features and use cases.</p>
<h2 id="heading-other-key-features">Other Key Features</h2>
<h3 id="heading-column-grouping"><strong>Column Grouping</strong></h3>
<p>The <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_DATA_EXPORT-ADD_COLUMN_GROUP-Procedure.html"><code>apex_data_export.add_column_group</code></a> API allows you to organize columns into groups with group headings to generate something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763261031980/0cd689ac-d194-4048-90ab-c5c0bffbca60.png" alt="APEX_DATA_EXPORT Excel Export with Column Groups" class="image--center mx-auto" /></p>
<h3 id="heading-aggregating">Aggregating</h3>
<p>The <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_DATA_EXPORT-ADD_AGGREGATE-Procedure.html"><code>apex_data_export.add_aggregate</code></a> API allows you to calculate totals for numeric values. This can be used in conjunction with the <code>p_is_column_break</code> parameter of <code>apex_data_export.add_column</code> to generate group totals.</p>
<h3 id="heading-highlights">Highlights</h3>
<p>The <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_DATA_EXPORT-ADD_HIGHLIGHT-Procedure.html"><code>apex_data_export.add_highlight</code></a> API allows you to highlight cells in your output based on columns in your data source. The way this works is a little different.</p>
<p>In your SQL, return the highlight ID based on the criteria you want to use. In the example below, I have two highlights with IDs 1 and 2.</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> order_number
,      order_total
,      due_date
,      <span class="hljs-keyword">CASE</span> <span class="hljs-keyword">WHEN</span> order_total &gt; <span class="hljs-number">1000</span> <span class="hljs-keyword">THEN</span> <span class="hljs-number">1</span> <span class="hljs-keyword">END</span> <span class="hljs-keyword">AS</span> order_total_highlight
,      <span class="hljs-keyword">CASE</span> <span class="hljs-keyword">WHEN</span> due_date &gt; <span class="hljs-keyword">SYSDATE</span> <span class="hljs-keyword">THEN</span> <span class="hljs-number">2</span> <span class="hljs-keyword">END</span> <span class="hljs-keyword">AS</span> due_date_highlight
<span class="hljs-keyword">FROM</span>   orders
</code></pre>
<pre><code class="lang-sql"><span class="hljs-comment">-- Define Highlight for Order Total</span>
apex_data_export.add_highlight(
        p_highlights          =&gt; l_highlights,
        p_id                  =&gt; 1,         <span class="hljs-comment">-- Order Total Highlight from SQL</span>
        p_value_column        =&gt; 'ORDER_TOTAL_HIGHLIGHT',
        p_display_column      =&gt; 'ORDER_TOTAL',  <span class="hljs-comment">-- Where to put the highlight</span>
        p_text_color          =&gt; '<span class="hljs-comment">#FF0000' );</span>

<span class="hljs-comment">-- Define Highlight for Due Date</span>
apex_data_export.add_highlight(
        p_highlights          =&gt; l_highlights,
        p_id                  =&gt; 2,         <span class="hljs-comment">-- Due Date Highlight from SQL</span>
        p_value_column        =&gt; 'DUE_DATE_HIGHLIGHT',
        p_display_column      =&gt; 'DUE_DATE',  <span class="hljs-comment">-- Where to put the highlight</span>
        p_text_color          =&gt; '<span class="hljs-comment">#FF0000' );</span>
</code></pre>
<ul>
<li>If the SQL returns a 1 for the <code>order_total_highlight</code> column, then the highlight is applied; otherwise, it is not. The same goes for <code>due_date_highlight</code>, except it needs to return a 2 for the highlight to be applied. A combined, end-to-end sketch follows this list.</li>
</ul>
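<p>Putting the two snippets together, a minimal end-to-end sketch might look like the following. It assumes the highlight collection is passed to <code>apex_data_export.export</code> via its <code>p_highlights</code> parameter; the table, query, and file name are illustrative only.</p>
<pre><code class="lang-sql">DECLARE
  l_context    apex_exec.t_context;
  l_highlights apex_data_export.t_highlights;
  l_export     apex_data_export.t_export;
BEGIN
  -- Order Total highlight (ID 1 matches ORDER_TOTAL_HIGHLIGHT in the query).
  apex_data_export.add_highlight
   (p_highlights     =&gt; l_highlights,
    p_id             =&gt; 1,
    p_value_column   =&gt; 'ORDER_TOTAL_HIGHLIGHT',
    p_display_column =&gt; 'ORDER_TOTAL',
    p_text_color     =&gt; '#FF0000');
  -- The Due Date highlight (ID 2) would be added with a second add_highlight call.

  l_context := apex_exec.open_query_context
                (p_location  =&gt; apex_exec.c_location_local_db,
                 p_sql_query =&gt; 'SELECT order_number, order_total, due_date,
                                        CASE WHEN order_total &gt; 1000 THEN 1 END AS order_total_highlight,
                                        CASE WHEN due_date &gt; SYSDATE THEN 2 END AS due_date_highlight
                                 FROM   orders');

  l_export := apex_data_export.export
               (p_context    =&gt; l_context,
                p_format     =&gt; apex_data_export.c_format_xlsx,
                p_highlights =&gt; l_highlights,
                p_file_name  =&gt; 'Orders_With_Highlights');

  apex_exec.close(l_context);

  -- l_export.content_blob now holds the XLSX file; store or download it as needed.
END;
</code></pre>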
<h3 id="heading-download">Download</h3>
<p>The <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_DATA_EXPORT-DOWNLOAD-Procedure.html"><code>apex_data_export.download</code></a> API allows you to initiate a download of the file generated by apex_data_export (or any other BLOB for that matter). Check out this <a target="_blank" href="https://haniel.hashnode.dev/easy-file-downloads-with-apexdataexportdownload">post</a> from <a class="user-mention" href="https://hashnode.com/@Haniel">Haniel Burton</a> for more.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💪</div>
<div data-node-type="callout-text">We are in a golden age of APEX file downloads. We have the <a target="_self" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_HTTP.html">apex_http</a> API, the apex_data_export.export method, and native Dynamic Actions!</div>
</div>

<h3 id="heading-print-config">Print Config</h3>
<p>The <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_DATA_EXPORT-GET_PRINT_CONFIG-Procedure.html"><code>apex_data_export.get_print_config</code></a> API, when used in conjunction with the <code>p_print_config</code> parameter of <code>apex_data_export.export</code>, is used to drive print formatting for your export. Here are the settings applicable to Excel exports:</p>
<ul>
<li><p>Page Header (p_page_header) is used for the tab name, unless overridden by the parameter of the same name in the call to <code>apex_data_export.export</code>.</p>
</li>
<li><p>Page footer-related parameters (p_page_footer_…): add and format text below the last cell in the Excel output.</p>
</li>
<li><p>Heading-related parameters (p_header_…): apply formatting to the Excel column headings.</p>
</li>
<li><p>Body-related parameters (p_body_…): apply formatting to the rows in the Excel output.</p>
</li>
<li><p>Border-related parameters (p_border_…): apply formatting to the cell borders in the Excel output.</p>
</li>
</ul>
<h2 id="heading-other-use-cases-for-apexdataexport">Other Use Cases for APEX_DATA_EXPORT</h2>
<p>In this post, I focused on a specific use case, but there are many others for <code>apex_data_export</code>.</p>
<ul>
<li><p>Exporting Setup Data in JSON for import into other systems or moving setups from DEV &gt; TEST &gt; PROD.</p>
</li>
<li><p>Generate downloads of data triggered from a button press, where you don’t have an Interactive Report or Grid to handle the download.</p>
</li>
<li><p>Generate a report from an APEX Automation and attach it to an email for distribution.</p>
</li>
<li><p><strong>Generate specialized “operational snapshot” files</strong> for downstream teams or auditors: Use the API to produce point-in-time exports (with highlights, aggregates, or formatting) from PL/SQL jobs, ideal for monthly financial freezes, audit cycles, or HR snapshots.</p>
</li>
</ul>
<h2 id="heading-apexexec">APEX_EXEC</h2>
<p>Part of the flexibility of apex_data_export stems from its use of the powerful <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_EXEC.html">apex_exec</a> API. This allows you to generate exports from REST APIs just as easily as from database tables.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>Dynamic Excel template generation is one of those APEX features that solves a real problem: keeping users aligned with data structures that aren’t static. By pairing apex_data_export with apex_exec, you eliminate the maintenance headache of versioning static spreadsheets and give users templates that consistently reflect the current configuration; columns, attributes, sample values, instructions, everything.</p>
]]></content:encoded></item><item><title><![CDATA[How I Use AI to Make Me a Better APEX Developer]]></title><description><![CDATA[Introduction
I have been using AI to help me build APEX Apps for more than a year, and the pace of change during that time has been amazing. In this post, I will review the tools and workflows that have helped me improve as an APEX Developer.
...]]></description><link>https://blog.cloudnueva.com/how-i-use-ai-for-apex-development</link><guid isPermaLink="true">https://blog.cloudnueva.com/how-i-use-ai-for-apex-development</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 27 Nov 2025 08:36:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763914219563/e26bdd19-ec8a-4b5f-9862-786f7369189f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>I have been using AI to help me build APEX Apps for more than a year, and the pace of change during that time has been amazing. In this post, I will review the tools and workflows that have helped me improve as an APEX Developer.</p>
<p>I won’t be getting into how to set up these tools, just how I use them.</p>
<h1 id="heading-ai-subscriptions">AI Subscriptions</h1>
<p>I currently have the following AI Subscriptions:</p>
<ul>
<li><p>Open AI Chat GPT Business ($300/year)</p>
<ul>
<li>I use this subscription for ChatGPT, APIs, and Codex</li>
</ul>
</li>
<li><p>GitHub Co-Pilot Pro ($100/year)</p>
</li>
</ul>
<h1 id="heading-ai-tools">AI Tools</h1>
<p>These are the AI tools I am using today:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Tool</td><td>Usage</td></tr>
</thead>
<tbody>
<tr>
<td><a target="_blank" href="https://code.visualstudio.com/docs/copilot/overview">GitHub Copilot in VS Code</a></td><td>Ghost Text / Autocomplete, Changes to PL/SQL code in the IDE.</td></tr>
<tr>
<td><a target="_blank" href="https://developers.openai.com/codex/ide/">Codex Plugin</a> in VS Code</td><td>Agent mode changes made from VS Code. Working with SQLcl MCP Server</td></tr>
<tr>
<td><a target="_blank" href="https://chatgpt.com/features/codex">Codex CLI</a></td><td>Tasks I can let run in the background, e.g., Code Reviews, Generating Repository Documentation, Creating Draft Data Models, Refactoring Code, etc.</td></tr>
<tr>
<td>GitHub Copilot in GitHub Desktop</td><td>Auto-Generate Commit Message</td></tr>
<tr>
<td><a target="_blank" href="https://docs.oracle.com/en/database/oracle/sql-developer-command-line/25.2/sqcug/using-oracle-sqlcl-mcp-server.html">Oracle SQLcl MCP Server</a></td><td>Talking to the Database. Tuning SQL, Understanding Data Models, etc.</td></tr>
</tbody>
</table>
</div><h1 id="heading-auto-complete-ghost-text">Auto-Complete / Ghost Text</h1>
<p>I estimate that <strong>Ghost Text</strong> accounts for about 50% of my overall productivity gains with AI. This is the feature where the AI predicts your next step based on the surrounding code and provides a suggestion you can accept with the Tab key. It is especially useful for repetitive tasks, such as:</p>
<ul>
<li><p>Writing <code>MERGE</code> and <code>INSERT INTO</code> statements. The AI leverages the <strong>schema and table structure</strong> currently open in the IDE (e.g., the columns and data types) to accurately auto-complete the statement after just a few initial lines.</p>
</li>
<li><p>Automatically adding comments and apex_debug entries.</p>
</li>
<li><p>Autocompleting the <code>EXCEPTION WHEN OTHERS</code> block of a procedure or function.</p>
</li>
<li><p>Applying a required change across multiple related code blocks.</p>
</li>
</ul>
<p>Here is a short video showing me creating a new procedure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763439879194/cf899552-58ba-40f3-ba31-3d1478ce564c.gif" alt class="image--center mx-auto" /></p>
<h1 id="heading-code-reviews">Code Reviews</h1>
<p>I use AI to do three different kinds of code reviews (of my own work):</p>
<ul>
<li><p>End of Day Review of all changes made during the day</p>
</li>
<li><p>Ad-Hoc review of a PL/SQL package</p>
</li>
<li><p>Full Code Review</p>
</li>
</ul>
<h2 id="heading-end-of-day-review">End of Day Review</h2>
<p>At the end of every day, I like to run an automated code review of the changes I made that day. The Codex CLI makes this easy with the <code>/review</code> command.</p>
<p>Change to the root folder of the GitHub repository and start Codex, then type <code>/review</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763856199041/2875242c-efef-4f3f-9e6f-f90f2af78e16.png" alt="Codex Review 1" class="image--center mx-auto" /></p>
<p>Then choose what kind of review you want to do; option 2 is what I do.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763856301554/5853d236-1e1a-43e2-a81e-8c861fb86561.png" alt="Codex Review 2" class="image--center mx-auto" /></p>
<p>Codex will then compare the last committed version with the changes and perform a review.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">I can work on something else while it is running.</div>
</div>

<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763856339376/3c1a54aa-636c-4aa8-a73b-b11c23ee8110.png" alt="Codex Review 3" class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">It is surprising how many issues this daily review catches.</div>
</div>

<h2 id="heading-ad-hoc-review-plsql-package">Ad-Hoc Review (PL/SQL Package)</h2>
<p>Every so often, I like to do a complete review of a package. I use the Codex CLI for this so that I can run it in the background. The key here is to develop a prompt that reviews the code the way you would like it reviewed.</p>
<p>Here is the prompt I currently use:</p>
<pre><code class="lang-markdown"><span class="hljs-section"># Code Review Objective</span>
Perform a code review for the following files:
<span class="hljs-bullet">-</span> <span class="hljs-code">`PLSQL/XXFN_RFQ_UTL_PKB.sql`</span>
<span class="hljs-bullet">-</span> <span class="hljs-code">`PLSQL/XXFN_RFQ_UTL_PKS.sql`</span>

Begin with a concise checklist (3-7 bullets) of the sub-tasks required to complete this review; keep items conceptual, not implementation-level.
<span class="hljs-section"># Tasks</span>
Review each file for:
<span class="hljs-bullet">-</span> Logic errors
<span class="hljs-bullet">-</span> Unused variables
<span class="hljs-bullet">-</span> Lack of code reuse
<span class="hljs-bullet">-</span> Security Concerns
<span class="hljs-bullet">-</span> Poorly performing code

For each identified issue, state your assumptions and verify the potential impact before assigning severity.
<span class="hljs-section"># Output Instructions</span>
<span class="hljs-bullet">-</span> Document all findings in <span class="hljs-code">`RFQ_CODE_REVIEW.md`</span>.
<span class="hljs-bullet">-</span> Use a Markdown table with the columns:
<span class="hljs-bullet">    -</span> <span class="hljs-strong">**File**</span>
<span class="hljs-bullet">    -</span> <span class="hljs-strong">**Line(s)**</span>
<span class="hljs-bullet">    -</span> <span class="hljs-strong">**Issue Type**</span>
<span class="hljs-bullet">    -</span> <span class="hljs-strong">**Description**</span>
<span class="hljs-bullet">    -</span> <span class="hljs-strong">**Severity**</span> (Low/Medium/High)
<span class="hljs-bullet">    -</span> <span class="hljs-strong">**Suggested Fix**</span>
<span class="hljs-bullet">-</span> Group issues by file and list findings sequentially within each file.
<span class="hljs-bullet">-</span> If a file has no issues, include a row stating "No issues found" in the <span class="hljs-strong">**Description**</span> column, leaving the other columns blank for that row.
<span class="hljs-bullet">-</span> Assign <span class="hljs-strong">**Severity**</span> based on the potential impact of the issue; if uncertain about severity or affected lines, specify your assumption and reasoning.
<span class="hljs-bullet">-</span> For each finding, specify the affected line(s) precisely. For multi-line or file-wide issues, indicate the corresponding range or "entire file" as needed.

After reviewing and documenting, validate that all findings are clearly described and all required fields are filled in; if any instructions are ambiguous or criteria unmet, highlight them at the end of the output for clarification.
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Keep iterating on the prompt until you get the results you want. I am constantly fine-tuning my prompts. Furthermore, when a new AI model is released, you should review your prompts and update them according to the latest prompting guide.</div>
</div>

<h2 id="heading-complete-code-review">Complete Code Review</h2>
<p>I run a code review of the complete codebase about once a week. Once again, I turn to the Codex CLI to run this so I can work on something else while it runs.</p>
<p>Here is the prompt I currently use:</p>
<pre><code class="lang-markdown">You are acting as a senior Oracle APEX / PL/SQL architect and code reviewer.

Context:
<span class="hljs-bullet">-</span> The repository contains one or more Oracle APEX applications, shared components, PL/SQL packages, functions, procedures, triggers, views, and other database objects.
<span class="hljs-bullet">-</span> Assume it is used in a production or near-production environment.
<span class="hljs-bullet">-</span> Focus heavily on correctness, security, performance, maintainability, and APEX best practices.

Your task:
Perform a <span class="hljs-strong">**comprehensive code review**</span> of the entire repo (APEX export files, PL/SQL packages, views, triggers, utility scripts, etc.) and then produce a <span class="hljs-strong">**single Markdown report**</span> as your only output.

General review focus:
<span class="hljs-bullet">1.</span> <span class="hljs-strong">**Correctness &amp; robustness**</span>
<span class="hljs-bullet">   -</span> Logic errors, edge cases, null/empty handling, and error handling.
<span class="hljs-bullet">   -</span> Transaction handling and commit/rollback discipline.
<span class="hljs-bullet">   -</span> Concurrency issues (locks, race conditions, serialization).
<span class="hljs-bullet">2.</span> <span class="hljs-strong">**Security**</span>
<span class="hljs-bullet">   -</span> SQL injection risks (dynamic SQL, concatenated where clauses, using NVL with parameters, etc.).
<span class="hljs-bullet">   -</span> XSS / output escaping issues in APEX (unescaped substitutions, htp.p/owa<span class="hljs-emphasis">_util.showpage usage, missing server-side validation).
   - Session and authorization handling (APEX authorization schemes, access control logic).
   - Sensitive data handling (logging PII, passwords, tokens, or secrets).
   - Use of APEX substitution strings, bind variables, and item values in queries and PL/SQL.
3. <span class="hljs-strong">**Performance &amp; scalability**</span>
   - N+1 query patterns, repeated queries in loops.
   - Missing or misused indexes, non-selective predicates, full table scans where dangerous.
   - Inefficient PL/SQL patterns (unnecessary row-by-row processing, missing BULK COLLECT / FORALL where appropriate).
   - Heavy computations in views vs. materialized views or caching strategies.
   - APEX-specific performance concerns (expensive queries in report regions, interactive report/grid filters, LOVs with slow queries).
4. <span class="hljs-strong">**APEX application design**</span>
   - Page process and branch logic clarity.
   - Proper use of shared components (lists, LOVs, templates, authorization schemes).
   - Hard-coded values vs. configuration tables/application items.
   - Use of APEX APIs (APEX_</span>APPLICATION, APEX<span class="hljs-emphasis">_UTIL, APEX_</span>PAGE, APEX<span class="hljs-emphasis">_SESSION, APEX_</span>DEBUG, etc.) and any risky or deprecated calls.
<span class="hljs-bullet">5.</span> <span class="hljs-strong">**Code quality &amp; maintainability**</span>
<span class="hljs-bullet">   -</span> Naming conventions for packages, procedures, variables, constants, and views.
<span class="hljs-bullet">   -</span> Comment quality and accuracy (misleading or outdated comments).
<span class="hljs-bullet">   -</span> Module boundaries and cohesion (what should be refactored into separate packages).
<span class="hljs-bullet">   -</span> Reusability and DRY violations (duplicate logic that should be centralized).
<span class="hljs-bullet">   -</span> Error logging and instrumentation (APEX debug, custom logging tables, structured error messages).
<span class="hljs-bullet">6.</span> <span class="hljs-strong">**Database design &amp; views**</span>
<span class="hljs-bullet">   -</span> View complexity and readability.
<span class="hljs-bullet">   -</span> Security of views (exposing too much data, missing filters, relying on client-side filters).
<span class="hljs-bullet">   -</span> Use of synonyms, grants, and schema separation.

Output requirements:
<span class="hljs-bullet">-</span> Only output a single Markdown document called CodeReview.md
<span class="hljs-bullet">-</span> No explanation or actions outside of the Markdown.
<span class="hljs-bullet">-</span> Structure the report so that issues are easy to scan and the related code is easy to find.

Markdown structure:
<span class="hljs-bullet">1.</span> # APEX Repo Code Review
<span class="hljs-bullet">   -</span> Short overview of the overall health of the codebase.
<span class="hljs-bullet">   -</span> 2–3 key strengths.
<span class="hljs-bullet">   -</span> 3–5 top-priority concerns.

<span class="hljs-bullet">2.</span> ## Summary by Severity
<span class="hljs-bullet">   -</span> A bullet list showing counts:
<span class="hljs-bullet">     -</span> <span class="hljs-code">`Severe issues: N`</span>
<span class="hljs-bullet">     -</span> <span class="hljs-code">`Moderate issues: N`</span>
<span class="hljs-bullet">     -</span> <span class="hljs-code">`Low issues: N`</span>

<span class="hljs-bullet">3.</span> ## Detailed Findings
<span class="hljs-bullet">   -</span> Group by logical area, for example:
<span class="hljs-bullet">     -</span> <span class="hljs-code">`### PL/SQL Packages`</span>
<span class="hljs-bullet">     -</span> <span class="hljs-code">`### APEX Pages &amp; Processes`</span>
<span class="hljs-bullet">     -</span> <span class="hljs-code">`### Views &amp; SQL`</span>
<span class="hljs-bullet">     -</span> <span class="hljs-code">`### Security`</span>
<span class="hljs-bullet">     -</span> <span class="hljs-code">`### Performance`</span>
<span class="hljs-bullet">   -</span> Under each area, list issues as subsections:

   Example structure for each issue:
   #### [Severity] Short issue title
<span class="hljs-bullet">   -</span> <span class="hljs-strong">**Severity:**</span> Severe | Moderate | Low
<span class="hljs-bullet">   -</span> <span class="hljs-strong">**Location:**</span> <span class="hljs-code">`path/to/file`</span> (and object name, page number, or line range if available)
<span class="hljs-bullet">   -</span> <span class="hljs-strong">**Description:**</span>
<span class="hljs-bullet">     -</span> Clear explanation of what is wrong and why it matters.
<span class="hljs-bullet">   -</span> <span class="hljs-strong">**Impact:**</span>
<span class="hljs-bullet">     -</span> Security / correctness / performance / maintainability impact in 1–3 lines.
<span class="hljs-bullet">   -</span> <span class="hljs-strong">**Recommendation:**</span>
<span class="hljs-bullet">     -</span> Specific and actionable guidance on how to fix it.
<span class="hljs-bullet">   -</span> <span class="hljs-strong">**Code snippet (before):**</span>
<span class="hljs-bullet">     -</span> Show only the relevant lines, minimal but sufficient context
<span class="hljs-bullet">   -</span> <span class="hljs-strong">**Suggested fix (after or pseudo-code):**</span>
<span class="hljs-bullet">     -</span> Show improved version or a clear pseudo-code template

<span class="hljs-bullet">4.</span> ## Pattern-Level Recommendations
<span class="hljs-bullet">    -</span> Describe any recurring patterns that should be globally fixed (e.g., unsafe dynamic SQL patterns, repeated logic for auditing, repeated branching logic in APEX).
<span class="hljs-bullet">    -</span> Provide 2 to 5 concrete "refactoring themes" that would significantly improve the codebase.

<span class="hljs-bullet">5.</span> ## APEX-Specific Recommendations
<span class="hljs-bullet">    -</span> Suggestions for:
<span class="hljs-bullet">        -</span> Better use of shared components.
<span class="hljs-bullet">        -</span> Improved authorization and authentication handling.
<span class="hljs-bullet">        -</span> Performance tuning for heavy pages, reports, and LOVs.
<span class="hljs-bullet">        -</span> Hardening against XSS and misuse of substitution strings.
<span class="hljs-bullet">6.</span> ## Quick-Win Checklist
<span class="hljs-bullet">    -</span> A concise bullet list of the most important fixes (in priority order) that the team should implement next.

Severity rubric:
Use the following rubric consistently:
<span class="hljs-bullet">-</span> <span class="hljs-strong">**Severe:**</span>
<span class="hljs-bullet">    -</span> Can cause data corruption, security vulnerabilities, major logical errors, or severe performance degradation in realistic conditions.
<span class="hljs-bullet">-</span> <span class="hljs-strong">**Moderate:**</span>
<span class="hljs-bullet">    -</span> Risky or inefficient patterns that can cause noticeable issues under load or over time, but not immediately catastrophic.
<span class="hljs-bullet">-</span> <span class="hljs-strong">**Low:**</span>
<span class="hljs-bullet">    -</span> Style, readability, minor performance improvements, or best-practice alignment that helps maintainability but is not urgent.

Additional rules:
<span class="hljs-bullet">-</span> Prefer <span class="hljs-strong">**precision over volume**</span>: do not list 100 trivial issues; focus on the most impactful ones, while still giving enough detail to be useful.
<span class="hljs-bullet">-</span> When in doubt, <span class="hljs-strong">**show code**</span>: include short but focused code snippets in fenced blocks so developers can quickly locate and fix the problem.
<span class="hljs-bullet">-</span> If something looks dangerous but you are not fully sure (based on the visible context), call it out as a <span class="hljs-strong">**potential issue**</span> and clearly say what assumptions you are making.

Now, perform this review and output only the Markdown report described above.
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Notice that I am asking the AI to output its results to a Markdown file. This makes it easy for me to read the results and could serve as a requirement for feeding back into AI to fix the issues.</div>
</div>
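<p>To make the performance checks in the prompt concrete, here is the kind of before/after snippet a finding typically contains. This is a hypothetical example (the table name is made up), not output from the review:</p>
<pre><code class="lang-sql">-- Hypothetical finding: row-by-row DML that could use BULK COLLECT / FORALL.
-- Before: a cursor FOR loop issuing one UPDATE per row (not shown).
-- After: fetch the keys once, then apply the change in a single bulk operation.
DECLARE
  TYPE t_ids IS TABLE OF items.item_id%TYPE;
  l_ids t_ids;
BEGIN
  SELECT item_id BULK COLLECT INTO l_ids
  FROM   items
  WHERE  price IS NULL;

  FORALL i IN 1 .. l_ids.COUNT
    UPDATE items
    SET    price = 0
    WHERE  item_id = l_ids(i);
END;
/
</code></pre>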

<h1 id="heading-oracle-sqlcl-mcp-server">Oracle SQLcl MCP Server</h1>
<p>Please see <a target="_blank" href="https://blog.cloudnueva.com/oracle-sqlcl-mcp-server-with-codex-and-copilot-joelkallmanday">this post</a> for more on how I use the Oracle SQLcl MCP Server.</p>
<h1 id="heading-writing-new-code">Writing New Code</h1>
<p>You will have noticed that, so far, I have not discussed writing brand-new code based solely on a prompt. This is where I start to get nervous about AI.</p>
<p>Lately, I have started having AI write individual PL/SQL functions and procedures. I have noticed a significant improvement here with the latest models (OpenAI 5.1). I see even better results when I am adding procedures and functions to existing packages. I am sure this is because the additional context helps AI write better code.</p>
<p>This, along with writing SQL statements (using the MCP server), is the sweet spot when it comes to generating new code with AI.</p>
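<p>To give a sense of scale, this is the kind of request I mean: a single, well-scoped routine rather than a whole package. The function below is a hypothetical example of something I would ask for, not AI output:</p>
<pre><code class="lang-sql">-- Hypothetical example of a small, single-purpose function I might ask AI to write.
FUNCTION to_slug (p_text IN VARCHAR2) RETURN VARCHAR2 IS
BEGIN
  -- Lower-case the text and collapse runs of non-alphanumeric characters into a dash.
  RETURN TRIM(BOTH '-' FROM REGEXP_REPLACE(LOWER(p_text), '[^a-z0-9]+', '-'));
END to_slug;
</code></pre>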
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">I don’t think we are there yet when it comes to building new APEX Apps from scratch with AI. But let’s see what happens when APEXlang comes out!</div>
</div>

<h1 id="heading-what-about-apex">What About APEX</h1>
<p>At the moment (before the release of APEX <s>25.2,</s> 26.1), AI is quite capable of reviewing and providing feedback on APEX Apps. This can be done via SQL using the SQLcl MCP server (and the right prompts), or based on AI extracting information from an APEX export file.</p>
<p>When I run the code reviews (mentioned above), AI checks the exported APEX Apps, and I get just as many findings from APEX as from PL/SQL.</p>
<p>I find AI especially useful for:</p>
<ul>
<li><p>Checking APEX security settings (SSP, Authorization Schemes Applied to Buttons, etc.).</p>
</li>
<li><p>Building custom UI Components (e.g., template components).</p>
</li>
</ul>
<h2 id="heading-apex-ai-wizard">APEX AI Wizard</h2>
<p>APEX 24.2 does have a Wizard that allows you to create basic Apps from a prompt (and/or a spreadsheet). This is OK for one-off Apps, but I don’t think it is the long-term answer for building APEX Apps with AI (nor does Oracle).</p>
<h2 id="heading-apexlang">APEXlang</h2>
<p>APEXlang will be the future of building APEX Apps with AI. It is not out yet, but I saw a demo at this year’s Oracle AI World, and it looks very promising. Having a formal syntax (that an AI can learn) will unlock the ability to build brand new Apps and perform major refactors on your APEX Apps.</p>
<h2 id="heading-ai-controlled-browsers">AI Controlled Browsers</h2>
<p>When <a target="_blank" href="https://openai.com/index/introducing-chatgpt-atlas/">ChatGPT Atlas</a> was released, I immediately logged into my <a target="_blank" href="https://oracleapex.com/ords">https://oracleapex.com/ords</a> instance and asked AI to write an App to track the service history for my car. I was pretty impressed: it used QuickSQL to generate a data model and then built a fully functional (albeit relatively simple) App.</p>
<p>I think the future of AI-controlled browsers and APEX lies in automated testing. You can turn your test scripts into prompts and have the browser run them. Google’s <a target="_blank" href="https://antigravity.google/">AntiGravity</a> IDE takes this a step further and can capture screenshots of the testing along the way.</p>
<h1 id="heading-how-do-i">How do I?</h1>
<p>The final use of AI for me is asking ad hoc questions. I work for myself, so I don’t have colleagues around the watercooler to bounce questions off.</p>
<h1 id="heading-agentsmd">AGENTS.MD</h1>
<p>I encourage you to spend an afternoon (or two) developing a robust <a target="_blank" href="https://agents.md/">AGENTS.md</a> file for each of your code repositories. AGENTS.md gives you the ability to tell the AI what your naming conventions are, how you like to structure SQL statements, and general guidelines for how you want it to behave when generating code. Without it, the AI relies solely on your existing code base for conventions, and you can end up spending as much time reformatting the generated code as the AI saved you in the first place.</p>
<p>I am not sure if <code>AGENTS.md</code> will become the de facto standard, but it is a better alternative to each AI coding agent using its own instruction file (.github/copilot-instructions.md, cursor.json, etc.).</p>
<h1 id="heading-warning">Warning!</h1>
<p>There are a few things that I think you must keep in mind when using AI to help you code:</p>
<ul>
<li><p><strong>AI is a Tool, not a Crutch</strong> - There is no substitute for knowing what the code is supposed to look like. We all write the odd JavaScript snippet using AI without really understanding the output, but this should not be the default. If you are building code and getting paid for it, you'd better understand what it does!</p>
</li>
<li><p><strong>Never Leave Security Entirely Up to AI</strong> - I mentioned above that AI does a good job of checking security settings in APEX. Even though it does, I always manually check security settings at the end of a project (using my own SQL statements or tools like <a target="_blank" href="https://github.com/oracle-samples/apex-sert">APEX-SERT</a>, <a target="_blank" href="https://apexsec.recx.co.uk/">ApexSec,</a> or <a target="_blank" href="https://www.united-codes.com/products/apexprojecteye/">APEX Project Eye</a>).</p>
</li>
<li><p><strong>AI is Making me Dumber</strong> - There is no doubt in my mind that AI is making me dumber. When you rely on any tool for a period of time, your body adapts. When you buy a snow blower, the muscles you had from shoveling snow atrophy. The same can be said of AI. Does it really matter as long as you are getting the job done? It wasn’t a big deal when we adopted the calculator over mental math, but this may be different. Only time will tell.</p>
</li>
</ul>
<h2 id="heading-ai-overloadfatigue"><strong>AI Overload/Fatigue</strong></h2>
<p>Phillipp Hartenfeller recently posted the comment below in a LinkedIn thread, and it really struck a chord with me.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763906584974/94985089-8969-4a71-ae79-834cb6fe8bf1.png" alt="LinkedIn Post AI Fatigue" class="image--center mx-auto" /></p>
<p>I realized that I, too, am about 20% more efficient with AI than without. This productivity improvement, however, is both a blessing and a curse. I am now able to work on more projects at the same time, which means dealing with more people, more project constraints, more meetings, etc. There is also increased context switching (not just between projects, but also between my coding and checking the code that the AI is generating for me). This leaves aside the time it takes to learn and keep up with the latest AI developments.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🧠</div>
<div data-node-type="callout-text">I don’t yet know the answer to this overload/fatigue, but I have challenged myself to be aware of it and try to mitigate it.</div>
</div>

<h1 id="heading-conclusion">Conclusion</h1>
<p>Much is being said about the hype surrounding AI and the AI bubble. For developers, however, I believe the promise/threat (depending on which way you look at it) of AI is real. I see AI as an adapt-or-die situation for programmers. Having said that, we are still IT professionals, and it is incumbent on us to use AI professionally.</p>
<p>In a few months, most of this article will be out of date (not least because <strong>APEX 26.1</strong> will likely have been released along with APEXlang).</p>
]]></content:encoded></item><item><title><![CDATA[The Importance of Using Views in APEX]]></title><description><![CDATA[Introduction
With so many emerging technologies inside and outside the Oracle database ecosystem, it might seem trivial, even old-fashioned, to talk about database views in APEX.
A recent project reminded me of the importance of getting the basics ri...]]></description><link>https://blog.cloudnueva.com/importance-of-using-views-in-apex</link><guid isPermaLink="true">https://blog.cloudnueva.com/importance-of-using-views-in-apex</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 20 Nov 2025 12:38:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763642268503/0a045f39-e160-4e21-a426-ae068dbb8719.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>With so many emerging technologies inside and outside the Oracle database ecosystem, it might seem trivial, even old-fashioned, to talk about database views in APEX.</p>
<p>A recent project reminded me of the importance of getting the basics right, so I thought I would share this insight in a brief post.</p>
<h1 id="heading-use-case">Use Case</h1>
<p>Recently, a project reminded me why views are still critical. I was building an inventory count application with three core tables: items, count_sheets, and item_categories. Several pages in the APEX app displayed similar combinations of data: initial count, verified count, administrative adjustments, and extended values.</p>
<p>Here’s a simplified version of one of those queries:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> ctgy.category_name
,      ctlg.part_number
,      ctlg.part_description
,      ctlg.uom
,      ctlg.price
,      <span class="hljs-keyword">ROUND</span>(ctlg.price / <span class="hljs-keyword">NULLIF</span>(ctlg.qty_conversion, <span class="hljs-number">0</span>), <span class="hljs-number">2</span>) <span class="hljs-keyword">AS</span> unit_price
,      csh.counter_count                                     <span class="hljs-keyword">AS</span> counter_count
,      csh.reconcile_count                                   <span class="hljs-keyword">AS</span> reconcile_count
,      csh.adjustment_count                                  <span class="hljs-keyword">AS</span> adjustment_count
,      <span class="hljs-keyword">COALESCE</span>(csh.adjustment_count, 
                csh.reconcile_count, 
                csh.counter_count)                           <span class="hljs-keyword">AS</span> final_count
<span class="hljs-keyword">FROM</span>   inv_count_sheet csh
,      inv_catalog     ctlg
,      inv_category    ctgy
<span class="hljs-keyword">WHERE</span>  csh.item_id      = ctlg.item_id
<span class="hljs-keyword">AND</span>    ctlg.category_id = ctgy.category_id;
</code></pre>
<p>There is nothing overly complicated about the SQL. If it only appears on four or five pages, replicating it may not seem like a big deal.</p>
<blockquote>
<p>💡 <em>What if the calculation for final_count needs to change?</em></p>
<p>If you’ve repeated this SQL across several APEX pages, even a small change means hunting down and updating each instance, then testing every affected page.</p>
<p>By encapsulating the query in a <strong>database view</strong>, that same change takes seconds to apply and test.</p>
</blockquote>
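<p>For illustration, here is a minimal sketch of that encapsulation. The view name is my own choice; the query is the one from above:</p>
<pre><code class="lang-sql">-- Wrap the shared query in a view so the logic lives in exactly one place.
CREATE OR REPLACE VIEW inv_count_sheet_v AS
SELECT ctgy.category_name
,      ctlg.part_number
,      ctlg.part_description
,      ctlg.uom
,      ctlg.price
,      ROUND(ctlg.price / NULLIF(ctlg.qty_conversion, 0), 2) AS unit_price
,      csh.counter_count
,      csh.reconcile_count
,      csh.adjustment_count
,      COALESCE(csh.adjustment_count,
                csh.reconcile_count,
                csh.counter_count)                            AS final_count
FROM   inv_count_sheet csh
,      inv_catalog     ctlg
,      inv_category    ctgy
WHERE  csh.item_id      = ctlg.item_id
AND    ctlg.category_id = ctgy.category_id;
</code></pre>
<p>Each APEX report or interactive grid region then simply selects from <code>inv_count_sheet_v</code>, and a change to <code>final_count</code> is made in one place.</p>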
<p>Of course, this approach is not a panacea. If I need to add or remove columns, I still have to make some code changes. Also, be mindful of performance implications. While views simplify code maintenance, they can sometimes hide expensive joins or aggregations.</p>
<h1 id="heading-other-benefits">Other Benefits</h1>
<p>Re-use and encapsulation of complex logic are the primary benefits of using views, but there are others:</p>
<ul>
<li><p><strong>Elegant APEX Apps</strong> - Instead of embedding complex joins or aggregations in multiple APEX pages, you define them once in a view. This centralized logic makes your APEX pages much simpler and easier to maintain.</p>
</li>
<li><p><strong>Security</strong> - You can limit the scope of views and expose the views instead of the underlying tables to consumers. When combined with <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/SET_TENANT_ID.html">APEX_SESSION.SET_TENANT_ID</a> and <code>SYS_CONTEXT('APEX$SESSION', 'APP_TENANT_ID')</code>, you can use views to limit access in <a target="_blank" href="https://blog.cloudnueva.com/multi-tenant-apex-apps">Multi-Tenant Applications</a>. A minimal sketch follows this list.</p>
</li>
<li><p><strong>Consistent Data Model for APEX Components</strong> - Interactive grids, reports, and charts all query the same logical layer. This ensures uniform results across the application and reduces data inconsistency issues.</p>
</li>
<li><p><strong>Reusability Across Consumers</strong> - The same view can serve multiple APEX apps, RESTful services, AOP Reports, and Document Generator Reports, promoting modular design and avoiding duplicated SQL definitions.</p>
</li>
</ul>
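<p>To illustrate the security point above, here is a minimal sketch of a tenant-scoped view. The <code>tenant_id</code> column and view name are hypothetical, and the session value is assumed to be set via <code>APEX_SESSION.SET_TENANT_ID</code> at login:</p>
<pre><code class="lang-sql">-- Hypothetical tenant-scoped view; assumes inv_catalog has a tenant_id column.
CREATE OR REPLACE VIEW inv_catalog_secure_v AS
SELECT ctlg.*
FROM   inv_catalog ctlg
WHERE  ctlg.tenant_id = SYS_CONTEXT('APEX$SESSION', 'APP_TENANT_ID');
</code></pre>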
<h1 id="heading-sql-macros">SQL Macros</h1>
<blockquote>
<p>For even more flexibility, Oracle provides <strong>SQL Macros</strong> (introduced in Oracle Database 19c). SQL Macros let you encapsulate reusable SQL logic, either as scalar or table macros, that expand at parse time. Unlike static views, they can accept parameters. This makes them a powerful complement to traditional views when you need more adaptable query definitions.</p>
</blockquote>
<p>In the table SQL Macro example below, I can pass in the Business Unit ID (p_bu_id) as a parameter:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">OR</span> <span class="hljs-keyword">REPLACE</span> <span class="hljs-keyword">FUNCTION</span> example_sql_macro (p_bu_id <span class="hljs-keyword">IN</span> <span class="hljs-built_in">NUMBER</span>) <span class="hljs-keyword">RETURN</span> <span class="hljs-keyword">CLOB</span> SQL_MACRO <span class="hljs-keyword">AS</span>
<span class="hljs-keyword">BEGIN</span>
  <span class="hljs-keyword">RETURN</span> q<span class="hljs-string">'{
SELECT ctgy.category_name
,      ctlg.part_number
FROM   inv_count_sheet csh
,      inv_catalog     ctlg
,      inv_category    ctgy
WHERE  csh.item_id      = ctlg.item_id
AND    ctlg.category_id = ctgy.category_id
AND    csh.bu_id        = p_bu_id
  }'</span>;
<span class="hljs-keyword">END</span> example_sql_macro;
</code></pre>
<p>You use table SQL Macros in a SELECT statement just like a regular view:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> *
<span class="hljs-keyword">FROM</span>   example_sql_macro (p_bu_id =&gt; <span class="hljs-number">1</span>);
</code></pre>
<p>Parameters are substituted at parse time, and then the SQL is executed.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>This post is a reminder to myself and others that views are an essential part of thoughtful APEX Application Development. Even though there are plenty of shiny new tech tools out there, there is no excuse not to do the basics right.</p>
]]></content:encoded></item><item><title><![CDATA[Color Icons in APEX Tree Regions]]></title><description><![CDATA[Introduction
This is a quick post to show you how to display different-colored icons in the nodes of an APEX Tree Region. This is one of those things that should be easier than it is!
Goal
I am building an App to maintain corporate hierarchies. These...]]></description><link>https://blog.cloudnueva.com/color-icons-in-apex-tree-regions</link><guid isPermaLink="true">https://blog.cloudnueva.com/color-icons-in-apex-tree-regions</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 06 Nov 2025 13:57:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762126954918/14a497d4-7e70-4f7d-8c15-5020b09ecd43.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>This is a quick post to show you how to display different-colored icons in the nodes of an APEX Tree Region. This is one of those things that should be easier than it is!</p>
<h1 id="heading-goal">Goal</h1>
<p>I am building an App to maintain corporate hierarchies. These could be cost center hierarchies, job hierarchies, etc. A key piece of functionality is the ability to upload changes to the hierarchies via Excel and provide a Diff between the original hierarchy and the newly uploaded hierarchy. The diff should show:</p>
<ul>
<li><p>Added Nodes (node added to the hierarchy)</p>
</li>
<li><p>Removed Nodes (node removed from the hierarchy)</p>
</li>
<li><p>Moved Nodes (node moved from one parent to another)</p>
</li>
</ul>
<p>The UI needs to show changes in the hierarchy and clearly distinguish between the above changes.</p>
<h1 id="heading-the-solution-part-1">The Solution Part 1</h1>
<p>Clearly, an APEX Tree region would be a great choice to illustrate changes to the hierarchy. Using the Oracle CONNECT BY clause, we can construct a hierarchical query that compares the hierarchy in the main table to the hierarchy being uploaded. We will skip the diff logic, but you can see the <a target="_blank" href="https://gist.github.com/jon-dixon/be35be00523935e8c9adec5607491bf3">SQL here</a> if you are interested.</p>
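<p>To show the general shape only (table and column names here are illustrative; the full diff SQL is in the gist above):</p>
<pre><code class="lang-sql">-- Illustrative shape of the hierarchical part; the diff logic lives in the linked gist.
SELECT LEVEL            AS tree_level
,      node_id
,      parent_node_id
,      node_name
FROM   hierarchy_nodes
START WITH parent_node_id IS NULL
CONNECT BY PRIOR node_id = parent_node_id
ORDER SIBLINGS BY node_name;
</code></pre>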
<p>In APEX, we set up a Tree Region and reference the columns in the outer SQL:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762124926847/c51d0913-dd5d-4ebe-ad1f-9f873d9a77d3.png" alt="APEX Tree Region Settings" class="image--center mx-auto" /></p>
<p>We end up with something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762124570181/b3b19b5c-d0e9-43b4-9fa0-07045dfc5e34.png" alt="Oracle APEX Tree Region No Color" class="image--center mx-auto" /></p>
<p>This looks great, but for larger hierarchies, it will be challenging for users to spot changes to the newly uploaded hierarchy.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Now to the point of this blog post, how to change the colors of the icons in the tree?</div>
</div>

<p>Ideally, we could do something like this in the CASE statement, which determines the icon:</p>
<pre><code class="lang-sql">  ,      CASE status
           WHEN 'Added'   THEN 'fa-plus-circle u-success-text'
           WHEN 'Removed' THEN 'fa-times-circle u-danger-text'
           WHEN 'Moved'   THEN 'fa-arrows-alt u-info-text'
           WHEN 'Changed' THEN 'fa-exchange u-info-text'
           ELSE 'fa-circle-o'
         <span class="hljs-keyword">END</span> <span class="hljs-keyword">AS</span> icon_css
</code></pre>
<ul>
<li>The <code>icon_css</code> column above is referenced in the APEX Tree Region Attribute called ‘Icon CSS Class Column’. Navigation: Tree → Attributes → Appearance → Icon CSS Class Column.</li>
</ul>
<p>Unfortunately, this does not work. If we inspect the HTML and CSS in the page, we can see why:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762125533022/65471fd3-109c-4541-bf97-a569a50e4775.png" alt="APEX Tree Region CSS" class="image--center mx-auto" /></p>
<ul>
<li>Even though we see the u-success-text class that we added, APEX is using a CSS variable <code>--a-treeview-node-icon-color</code> to determine the icon color. We could, of course, override that color, but the override would apply to all nodes, and we are back where we started.</li>
</ul>
<h1 id="heading-the-solution-part-2">The Solution Part 2</h1>
<p>Ideally, APEX would allow you to pass your own color class in the SQL like this:</p>
<pre><code class="lang-sql">  ,      CASE status
           WHEN 'Added'   THEN 'fa-plus-circle icon-green'
           WHEN 'Removed' THEN 'fa-times-circle icon-red'
           WHEN 'Moved'   THEN 'fa-arrows-alt icon-blue'
           WHEN 'Changed' THEN 'fa-exchange icon-blue'
           ELSE 'fa-circle-o icon-grey'
         <span class="hljs-keyword">END</span> <span class="hljs-keyword">AS</span> icon_css
</code></pre>
<p>In the page-level Inline CSS, we would style the color classes above.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762125853294/84330afc-9d60-4cf2-ba92-4315e734080c.png" alt="Page Level CSS" class="image--center mx-auto" /></p>
<p>Unfortunately, this does not work either, as the APEX class overrides ours.</p>
<h1 id="heading-the-solution-part-3">The Solution - Part 3</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">😠</div>
<div data-node-type="callout-text">OK, enough about what won’t work… What will work?</div>
</div>

<p>We can work with APEX and set the <code>--a-treeview-node-icon-color</code> CSS variable based on our own custom CSS classes. Add the following to the page-level Inline CSS attribute. <em>Page → CSS → Inline</em> (or <em>Theme → Custom CSS</em> if you want it app-wide).</p>
<pre><code class="lang-css"><span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.fa</span><span class="hljs-selector-class">.icon-green</span>,
<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.a-Icon</span><span class="hljs-selector-class">.icon-green</span>,
<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.t-Icon</span><span class="hljs-selector-class">.icon-green</span> { <span class="hljs-attribute">--a-treeview-node-icon-color</span>: <span class="hljs-built_in">var</span>(--ut-palette-success); }

<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.fa</span><span class="hljs-selector-class">.icon-red</span>,
<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.a-Icon</span><span class="hljs-selector-class">.icon-red</span>,
<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.t-Icon</span><span class="hljs-selector-class">.icon-red</span> { <span class="hljs-attribute">--a-treeview-node-icon-color</span>: <span class="hljs-built_in">var</span>(--ut-palette-danger); }

<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.fa</span><span class="hljs-selector-class">.icon-blue</span>,
<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.a-Icon</span><span class="hljs-selector-class">.icon-blue</span>,
<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.t-Icon</span><span class="hljs-selector-class">.icon-blue</span> { <span class="hljs-attribute">--a-treeview-node-icon-color</span>: <span class="hljs-built_in">var</span>(--ut-palette-info); }

<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.fa</span><span class="hljs-selector-class">.icon-grey</span>,
<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.a-Icon</span><span class="hljs-selector-class">.icon-grey</span>,
<span class="hljs-selector-class">.a-TreeView</span> <span class="hljs-selector-class">.t-Icon</span><span class="hljs-selector-class">.icon-grey</span> { <span class="hljs-attribute">--a-treeview-node-icon-color</span>: <span class="hljs-built_in">var</span>(--u-color-<span class="hljs-number">29</span>);}
</code></pre>
<p>Add the custom CSS classes icon-grey, icon-blue, icon-green, and icon-red to our SQL, along with the icon classes.</p>
<pre><code class="lang-sql">,      CASE status
           WHEN 'Added'   THEN 'fa-plus-circle icon-green'
           WHEN 'Removed' THEN 'fa-times-circle icon-red'
           WHEN 'Moved'   THEN 'fa-arrows-alt icon-blue'
           WHEN 'Changed' THEN 'fa-exchange icon-blue'
           ELSE 'fa-circle-o icon-grey'
         <span class="hljs-keyword">END</span> <span class="hljs-keyword">AS</span> icon_css
</code></pre>
<p>If we take a look at the HTML and CSS generated in the page now, we can see the UT success color (<code>--ut-palette-success</code>) being used in the <code>--a-treeview-node-icon-color</code> CSS variable for the specific node.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762126335824/57a0b39f-c21e-413f-bc6a-42b94e7ffbc7.png" alt class="image--center mx-auto" /></p>
<p>The result is colorized node icons, which make it easier to see what is being added, removed, or moved:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762126367536/33a681f3-f6f9-4550-98e3-453e613a2247.png" alt="APEX Tree Region with colored Node Icons" class="image--center mx-auto" /></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>I could have written this post with just the solution, but I thought it would be useful to see my thought process in solving this problem.</p>
<h2 id="heading-call-to-action">📣 Call to Action</h2>
<p>I have created an Idea in the <a target="_blank" href="https://apex.oracle.com/ideas/FR-4704">APEX Ideas App</a>, suggesting it would be easier to pass a color class along with the icon class. Unfortunately, the idea has been closed 😞</p>
]]></content:encoded></item><item><title><![CDATA[Oracle SQLcl MCP Server with Codex & Copilot #JoelKallmanDay]]></title><description><![CDATA[Introduction
I have been using GitHub Copilot with Oracle’s SQLcl MCP server since its release in July 2025. The combination of Generative AI and databases is a powerful pairing that can help APEX developers build better products in less time. This s...]]></description><link>https://blog.cloudnueva.com/oracle-sqlcl-mcp-server-with-codex-and-copilot-joelkallmanday</link><guid isPermaLink="true">https://blog.cloudnueva.com/oracle-sqlcl-mcp-server-with-codex-and-copilot-joelkallmanday</guid><category><![CDATA[sqlcl]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[mcp]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Wed, 15 Oct 2025 13:00:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760241018027/ac7c8104-e076-456c-82ce-51c2756fe21f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>I have been using GitHub Copilot with Oracle’s SQLcl MCP server since its release in July 2025. The combination of Generative AI and databases is a powerful pairing that can help APEX developers build better products in less time. This statement comes with some caveats, which I will cover in this post.</p>
<p>In this post, I will describe how to set up the SQLcl MCP server in SQL Developer for VS Code, utilizing both <a target="_blank" href="https://openai.com/codex/">OpenAI’s Codex</a> and <a target="_blank" href="https://github.com/features/copilot">GitHub Copilot</a>. I will also review several use cases and provide example prompts, which will help you get more out of this technology.</p>
<h1 id="heading-configuring-the-sqlcl-mcp-server">Configuring the SQLcl MCP Server</h1>
<p>First, you will need to install the latest version of the SQL Developer Extension for VS Code and the latest version of SQLcl on your machine. You will also need to know the path to your SQLcl install.</p>
<p>In the following sections, I will describe how to set up Codex and GitHub Copilot to utilize the SQLcl MCP Server. Which tool to use is up to you. I currently use both, but am leaning toward Codex as my go-to development assistant.</p>
<h2 id="heading-configuring-github-copilot">Configuring GitHub Copilot</h2>
<p>See ‘Appendix 1 - Configure &amp; Test GitHub Copilot with SQLcl MCP Server’ below for details on how to set up and test the SQLcl MCP Server with GitHub Copilot in VS Code.</p>
<h2 id="heading-configuring-openai-codex">Configuring OpenAI Codex</h2>
<h3 id="heading-install-the-openai-codex-extension-for-vs-code">Install the OpenAI Codex Extension for VS Code</h3>
<ul>
<li>Install the Codex Extension for VS Code.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760206643572/5f06c8b5-e441-43c9-a5e7-82679ff58d7c.png" alt="Codex Extension for VS Code" class="image--center mx-auto" /></p>
<ul>
<li><p>Log in to your OpenAI / ChatGPT account</p>
<ul>
<li>Open the Codex extension &gt; Click the settings gear icon &gt; Log in</li>
</ul>
</li>
</ul>
<h2 id="heading-configure-codex-to-use-the-sqlcl-mcp-server">Configure Codex to use the SQLcl MCP Server</h2>
<ul>
<li>Open the Codex extension &gt; Click ⚙️ &gt; MCP Settings &gt; Open config.toml</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760206131975/c1656423-e2cf-42e6-8102-46b543ff6e2a.png" alt="Codes Settings" class="image--center mx-auto" /></p>
<ul>
<li>Once the file opens, copy and paste the below text into it (adjust your SQLcl path accordingly).</li>
</ul>
<pre><code class="lang-ini"><span class="hljs-section">[mcp_servers.sqlcl]</span>
<span class="hljs-attr">command</span> = <span class="hljs-string">"/opt/homebrew/Caskroom/sqlcl/25.3.0.274.1210/sqlcl/bin/sql"</span>
<span class="hljs-attr">args</span> = [<span class="hljs-string">"-mcp"</span>]
<span class="hljs-attr">startup_timeout_ms</span> = <span class="hljs-number">60000</span>
</code></pre>
<h2 id="heading-testing-the-setup">Testing the Setup</h2>
<p>For the rest of this post, I will be using a saved connection called ‘DEMO’ in SQL Developer for VS Code:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760208293698/88509c97-81e4-42ad-88d1-65b24d7e7e09.png" alt="DEMO SQL Developer Connection" class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">SQLcl and the SQL Developer Extension for VS Code share the same DB connections. This is how SQLcl connects to your database in the examples below.</div>
</div>

<h3 id="heading-test-from-codex-with-vs-code">Test from Codex with VS Code</h3>
<p>Open the Codex Extension and type the following in the chat window:</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Connect to DEMO using the SQLcl MCP Server</div>
</div>

<p>Once connected, ask:</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">How many tables are these in the schema</div>
</div>

<p>You should end up with something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760208546997/520c055b-abed-4fd3-a3cc-556589054818.png" alt="Codex Connected to the DB" class="image--center mx-auto" /></p>
<h3 id="heading-test-from-the-codex-cli">Test from the Codex CLI</h3>
<p>The SQLcl MCP Server also works with the <a target="_blank" href="https://developers.openai.com/codex/cli/">OpenAI Codex CLI.</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760314788470/2af6882e-097e-416a-a5ce-0fb3832f1fc7.png" alt="Codex CLI and SQLcl MCP Server 1" class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760314805992/f8b38895-0ae9-4b73-9139-a30843fdd548.png" alt="Codex CLI and SQLcl MCP Server 1" class="image--center mx-auto" /></p>
<h1 id="heading-how-it-works">How it Works</h1>
<p>Before reviewing the examples, it’s important to understand how the SQLcl MCP Server interacts with the Large Language Model (LLM).</p>
<p>When you type a prompt in Copilot Chat or Codex, VS Code sends that prompt, along with prior context and a list of available tools, to the LLM. One of those tools can be the SQLcl MCP Server.</p>
<p>The LLM analyzes the prompt and determines whether SQLcl is the appropriate tool to assist in fulfilling the request. If it is, the LLM instructs VS Code to run a command through SQLcl and return the output. The LLM then uses that output to decide the next step, continuing this exchange until the request is resolved.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">In short, <strong>the LLM is the driver; SQLcl is just an assistant.</strong></div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">⚠</div>
<div data-node-type="callout-text">The reason I call this out is that if the LLM decides that dropping all of your tables will answer the question and that SQLcl is the right tool for the job, then SQLcl will happily drop all the tables!</div>
</div>

<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760212037485/ffe00adf-a5f9-493f-99ff-e6247f0d44f1.png" alt="It will drop tables!" class="image--center mx-auto" /></p>
<h1 id="heading-agentsmd">AGENTS.md</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760210292198/9b39df34-09cd-4e0f-9206-3c22b5a8d22f.png" alt="AGENTS.md" class="image--center mx-auto" /></p>
<p>Adding a file called <a target="_blank" href="https://agents.md/">AGENTS.md</a> to the root of your GitHub repositories allows you to pass on guidelines to the LLM. Both Codex and <a target="_blank" href="https://github.blog/changelog/2025-08-28-copilot-coding-agent-now-supports-agents-md-custom-instructions/">GitHub Copilot</a> recognize this file. You can include details such as:</p>
<ul>
<li><p>APEX and DB Versions</p>
</li>
<li><p>Folder Structures of your Repo</p>
</li>
<li><p>Coding Standards and Conventions</p>
</li>
<li><p>Formatting Rules</p>
</li>
<li><p>Agent behaviors and guidelines (don’t drop my tables)</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">I created the initial version of my AGENTS.md by asking ChatGPT to create it for me.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">⚠</div>
<div data-node-type="callout-text">⚠️ <strong>Important:</strong> Do not rely on <a target="_self" href="http://AGENTS.md">AGENTS.md</a> to prevent destructive actions (DDL or DML). It only provides guidance — not enforcement.</div>
</div>

<h1 id="heading-sqlcl-mcp-use-cases">SQLcl MCP Use Cases</h1>
<p>Now that we are set up, it’s time to go through some use cases. I think this is the best way to understand why the SQLcl MCP server is so helpful.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">From now on, I will be using Codex for the demos. I will also assume that we are already connected to the DEMO database connection.</div>
</div>

<h2 id="heading-create-a-query">Create a Query</h2>
<p>Let’s start by having the MCP Server create a query for us to use.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Create a SQL query to get sales orders, customers, sales order lines, and products</div>
</div>

<p>Result:</p>
<hr />
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> so.order_number
     , so.order_date
     , so.total_amount         <span class="hljs-keyword">AS</span> order_total
     , so.status               <span class="hljs-keyword">AS</span> order_status
     , c.name                  <span class="hljs-keyword">AS</span> customer_name
     , c.email                 <span class="hljs-keyword">AS</span> customer_email
     , sol.line_number
     , sol.quantity
     , sol.unit_price
     , sol.line_total
     , p.sku
     , p.name                  <span class="hljs-keyword">AS</span> product_name
<span class="hljs-keyword">FROM</span>   sales_orders       so
     , customers          c
     , sales_order_lines  sol
     , products           p
<span class="hljs-keyword">WHERE</span>  so.customer_id    = c.customer_id
<span class="hljs-keyword">AND</span>    sol.order_id      = so.order_id
<span class="hljs-keyword">AND</span>    sol.product_id    = p.product_id;
</code></pre>
<hr />
<ul>
<li><p>The first thing you may notice is that the SQL is formatted. This is because of the formatting instructions in the AGENTS.md file. It also uses the old Oracle SQL syntax, again, because this is in my AGENTS.md (and because I am old 👴).</p>
</li>
<li><p>The LLM requested details of the tables in the database, SQLcl provided them, and the LLM generated the query. It correctly identified the tables based on my prompt, even though I did not use exact table names.</p>
</li>
</ul>
<h2 id="heading-tune-the-query">Tune the Query</h2>
<p>Now that we have a query, let’s tune it for optimal performance when we want to view orders with an order date within the past month.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Tune the SQL statement below. I want to improve performance when querying orders with an order date in the past month. Run the SQL query against the database, check the explain plan, and iterate through options until you find the best solution. SQL: SELECT so.order_number , so.order_date , <a target="_self" href="http://so.total">so.total</a>_amount AS order_total , so.status AS order_status , <a target="_self" href="http://c.name">c.name</a> AS customer_name , <a target="_self" href="http://c.email">c.email</a> AS customer_email , sol.line_number , sol.quantity , sol.unit_price , sol.line_total , p.sku , <a target="_self" href="http://p.name">p.name</a> AS product_name FROM sales_orders so , customers c , sales_order_lines sol , products p WHERE so.customer_id = c.customer_id AND sol.order_id = so.order_id AND sol.product_id = p.product_id AND so.order_date &gt;= ADD_MONTHS(TRUNC(SYSDATE),-1)</div>
</div>

<p>Here is an excerpt from the LLM’s “model reasoning trace” and its back and forth with SQLcl:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760211490603/8c2deb8f-526d-475f-9833-10c2e62b3c84.png" alt="LLM Chain of Thought" class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">As you can see, the LLM and SQLcl went through many iterations.</div>
</div>

<p><strong>Here is the final response from the LLM:</strong></p>
<hr />
<blockquote>
<p><strong>Tuning Results</strong></p>
<ul>
<li><p>Captured the original plan: full scans on SALES_ORDERS and SALES_ORDER_LINES, cost 12 (plan hash 1363380163), so the date filter wasn’t selective without supporting indexes.</p>
</li>
<li><p>Added index sales_orders_ord_dt_idx on (order_date, order_id) to allow the optimizer to apply the last-month filter via an index range scan.</p>
</li>
<li><p><strong><mark>Added index</mark></strong> sales_order_lines_ord_idx on (order_id, product_id) so line lookups now use range scans instead of scanning the entire detail table.</p>
</li>
<li><p><strong><mark>Regathered stats</mark></strong> on both tables (DBMS_STATS.GATHER_TABLE_STATS) to make the new access paths visible to the optimizer.</p>
</li>
<li><p>New plan hash 1931885458: both SALES_ORDERS and SALES_ORDER_LINES now use index range scans with batched rowid lookups; plan cost dropped to 9, and the query no longer performs storage full scans.</p>
</li>
<li><p>Executed the tuned SQL (ANSI JOIN form) and confirmed it returns no rows for the past month in this sample dataset, indicating the plan runs without error.</p>
</li>
</ul>
<p><strong>Next Steps</strong></p>
<ol>
<li><p>Validate the change against production-scale volumes (SQL Monitor/ASH) to quantify real runtime gains.</p>
</li>
<li><p>Keep table stats current after large data loads so the optimizer continues to choose the new indexes.</p>
</li>
</ol>
</blockquote>
<hr />
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">As you can see, it went ahead and created a new index and gathered stats. This is great, and it did reduce the cost of the SQL from 14 to 9, but I did not ask it to do anything that changed the database. More proof (if any was needed) that you should not be running this against PROD.</div>
</div>
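<p>For reference, the two changes it made map to DDL along these lines (reconstructed from the index names and columns in the response, not copied from the trace):</p>
<pre><code class="lang-sql">-- Reconstructed from the tuning summary above; not the exact statements the LLM ran.
CREATE INDEX sales_orders_ord_dt_idx   ON sales_orders      (order_date, order_id);
CREATE INDEX sales_order_lines_ord_idx ON sales_order_lines (order_id, product_id);

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname =&gt; USER, tabname =&gt; 'SALES_ORDERS');
  DBMS_STATS.GATHER_TABLE_STATS(ownname =&gt; USER, tabname =&gt; 'SALES_ORDER_LINES');
END;
/
</code></pre>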

<h2 id="heading-combining-context-from-your-db-and-your-codebase">Combining Context from Your DB and Your Codebase</h2>
<p>Sometimes it is helpful to ask questions about your codebase in GitHub and what is actually in your database.</p>
<p>I have a folder open in VS Code with a GitHub Repo:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760212336403/14975e50-d061-4e48-b7b6-74c72d3c8623.png" alt="Folder Open with Github Repo" class="image--center mx-auto" /></p>
<p>Now, I can ask questions that span the DB and my file system.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Which tables in the database are referenced in the codebase? Use the SQLcl MCP server to inspect the DB.</div>
</div>

<p>Here is an excerpt from the “model reasoning trace”:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760212653146/9d8ca341-ea8d-417a-bb72-7ff4cc18442c.png" alt="DB and File System Chain of Thought" class="image--center mx-auto" /></p>
<p><strong>Here is the Response:</strong></p>
<hr />
<blockquote>
<p>Connected to the DEMO database via SQLcl and cross-referenced every SQL/PL/SQL source in the repo. The query select distinct table_name from all_tables where table_name in (...) returned twelve matches, and each has live references in the codebase:</p>
<p><strong>GenAI Schema Tables</strong></p>
<ul>
<li><p>GEN_AI_CONFIGS – defined in cn_utils/ai/TABLES/GEN_AI_CONFIGS_CRTB.sql:1, used in cn_utils/ai/PLSQL/GEN_AI_UTL_PKB.sql:18.</p>
</li>
<li><p>GEN_AI_LOGS – defined in cn_utils/ai/TABLES/GEN_AI_LOGS_CRTB.sql:2, logged through cn_utils/ai/PLSQL/GEN_AI_UTL_PKB.sql:8.</p>
</li>
<li><p>GEN_AI_TOOLS – created in cn_utils/ai/TABLES/GEN_AI_TOOLS_CRTB.sql:1, referenced by cn_utils/ai/PLSQL/GEN_AI_TOOLS_UTL_PKB.sql:1.</p>
</li>
<li><p>GEN_AI_TOOL_PARAMS – documented in cn_utils/ai/TABLES/GEN_AI_TOOL_PARAMS_CRTB.sql:1, queried in cn_utils/ai/PLSQL/GEN_AI_TOOLS_UTL_PKB.sql:66.</p>
</li>
<li><p>SALES_ORDERS – DDL/comments in cn_utils/ai/SELECTAI/SELECT_AI_DEMO.sql:19, accessed in cn_utils/ai/PLSQL/GEN_AI_EXAMPLE_TOOLS_PKB.sql:26.</p>
</li>
<li><p>CUSTOMERS – table metadata in cn_utils/ai/SELECTAI/SELECT_AI_DEMO.sql:28, queried in conference_material/KSCOPE25/Built with LOV Using APEX/f777.sql:7176.</p>
</li>
</ul>
<p><strong>Content &amp; Reference Tables</strong></p>
<ul>
<li><p>CNBA_BLOG_POSTS – synchronized in cn_utils/ai_demo_app/APEX/f110.sql:334, selected in cn_utils/ai_demo_app/PLSQL/CN_AI_UTL_PKB.sql:36.</p>
</li>
<li><p>CNBA_POST_CHUNKS – inserted in cn_utils/ai_demo_app/APEX/f110.sql:2765, joined in cn_utils/ai_demo_app/PLSQL/CN_AI_UTL_PKB.sql:88.</p>
</li>
<li><p>CN_DOCUMENTS – queried in cn_utils/ai_demo_app/APEX/f110.sql:1580, retrieved in cn_utils/ai_demo_app/PLSQL/CN_AI_UTL_PKB.sql:176.</p>
</li>
<li><p>CN_DOCUMENT_CHUNKS – read in cn_utils/ai_demo_app/APEX/f110.sql:3860, maintained in cn_utils/ai_demo_app/PLSQL/CN_AI_UTL_PKB.sql:252.</p>
</li>
<li><p>PRODUCTS – LOV definition in cn_utils/ai_demo_app/APEX/f110.sql:1572, included in cn_utils/ai/SELECTAI/SELECT_AI_DEMO.sql:10.</p>
</li>
<li><p>COUNTRIES – created in conference_material/KSCOPE25/Built with LOV Using APEX/countries.sql:2, queried in conference_material/KSCOPE25/Built with LOV Using APEX/f777.sql:2462.</p>
</li>
</ul>
<p>Other names appeared in the source scan but are not tables in the DEMO schema (e.g., views like CNDEMO_ADDRESSES_V or package calls such as TABLE_FN_PK.EVS_SOLD_BY_YEAR). Let me know if you want to repeat the check against another connection or extend it to views/synonyms.</p>
</blockquote>
<hr />
<h2 id="heading-full-agent-mode">Full Agent Mode</h2>
<p>Q&amp;A is great, but can we create code in the DB? Let’s try!</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Create a TAPI package for the customers, orders, order_lines, and products tables in the DEMO db. Use coding standards laid out in the AGENT.md file. Use the prefix AIDEMO for the package name. Create the package directly in the DEMO DB using the sqlcl mcp server.</div>
</div>

<p><strong>Notice</strong> the table names are not quite right. I have orders instead of sales_orders and order_lines instead of sales_order_lines.</p>
<p>Here is an excerpt from the “model reasoning trace”:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760213083700/3832a37c-5c71-41bd-85f2-b9d602f6f453.png" alt="Chain of thought from Create Code Example." class="image--center mx-auto" /></p>
<p>Here is the response:</p>
<hr />
<blockquote>
<p>Created table APIs for customers, orders, order lines, and products in DEMO so each table now has consistent insert/update/delete/fetch routines with APEX debug logging and error propagation.</p>
<ul>
<li><p>AIDEMO_CUSTOMERS_PK uses customers%ROWTYPE for CRUD, returning generated IDs and logging apex_debug.error on failure.</p>
</li>
<li><p>AIDEMO_PRODUCTS_PK mirrors the same pattern for PRODUCTS, defaulting created_at to SYSDATE when absent.</p>
</li>
<li><p>AIDEMO_ORDERS_PK targets SALES_ORDERS (no plain ORDERS object exists) and fills defaults for order_date, total_amount, and status during inserts.</p>
</li>
<li><p>AIDEMO_ORDER_LINES_PK targets SALES_ORDER_LINES, recalculating line_total when not provided.</p>
</li>
<li><p>Verified all specs/bodies compile cleanly (USER_OBJECTS shows each package and body VALID).</p>
</li>
</ul>
<p>Next steps:</p>
<ol>
<li><p>Exercise the new TAPIs from your APEX flows or unit harness to confirm business logic/defaults behave as expected.</p>
</li>
<li><p>If you need deployment artifacts, export these CREATE OR REPLACE statements into your /SCRIPTS folder for source control.</p>
</li>
</ol>
</blockquote>
<hr />
<p>It figured out the correct tables and created a create/update/delete package for each table:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760213252702/bcd3ba72-35f7-497c-bf6d-d05565e142db.png" alt="Code Created" class="image--center mx-auto" /></p>
<p>A few notes:</p>
<ul>
<li>It included the <code>apex_debug</code> call in the WHEN OTHERS exception, which I had specified in my AGENTS.md</li>
</ul>
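<p>To make the pattern above concrete, the generated insert routines followed roughly this shape. This is a sketch reconstructed from the notes above, not the AI’s verbatim output:</p>
<pre><code class="lang-sql">-- Sketch of the TAPI insert pattern (column list shortened for brevity).
PROCEDURE ins_customer (
  p_name        IN  customers.name%TYPE,
  p_email       IN  customers.email%TYPE,
  p_customer_id OUT customers.customer_id%TYPE) IS
BEGIN
  INSERT INTO customers (name, email)
  VALUES (p_name, p_email)
  RETURNING customer_id INTO p_customer_id;
EXCEPTION
  WHEN OTHERS THEN
    apex_debug.error('AIDEMO_CUSTOMERS_PK.ins_customer failed: %0', SQLERRM);
    RAISE;
END ins_customer;
</code></pre>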
<h1 id="heading-what-about-apex">What about APEX?</h1>
<p>Because APEX lives in the database, it is an obvious candidate for interrogation by LLMs.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Look at the Oracle APEX applications in the WKSP_WKSPDEMO schema and list the top 5 security concerns.</div>
</div>

<p>Example response (not my apps 😊):</p>
<hr />
<blockquote>
<ul>
<li><p>Application 103 has 45 pages where PAGE_ACCESS_PROTECTION = 'Unrestricted' (apex_application_pages), leaving session state open to tampering; switch these pages to checksum-protected or require re-authentication. 🫢</p>
</li>
<li><p>Browser caching remains enabled for application 103 (BROWSER_CACHE = 'Enabled'), risking sensitive data leakage on shared machines; disable caching for authenticated apps.</p>
</li>
</ul>
</blockquote>
<hr />
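<p>Under the hood, the LLM answers questions like this by querying the APEX dictionary views through SQLcl. The query below is my own illustration of the kind of SQL involved, not the exact statement it ran:</p>
<pre><code class="lang-sql">-- Illustration only: pages whose session state is left unprotected.
SELECT application_id
,      page_id
,      page_name
FROM   apex_application_pages
WHERE  page_access_protection = 'Unrestricted'
ORDER BY application_id, page_id;
</code></pre>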
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Which APEX Applications are not running the latest version of APEX and are not running the latest version of the universal theme?</div>
</div>

<p>Example response (again, not my apps 😊):</p>
<hr />
<blockquote>
<ul>
<li><p>apex_release.version_no reports 24.2.9 as the current APEX version; four applications (100, 101, 102, 110) already use compatibility_mode = '24.2' and Universal Theme 24.2 (apex_application_themes).</p>
</li>
<li><p>Application 103 is still on compatibility_mode = '4.2' 🫢 (apex_applications), so its runtime features are constrained to an obsolete APEX release.</p>
</li>
<li><p>The same application 103 runs Universal Theme version 1.2 with file_prefix = '#IMAGE_PREFIX#themes/theme_42/1.2/' (apex_application_themes), which predates the current 24.2 delivery.</p>
</li>
</ul>
</blockquote>
<hr />
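<p>The version question is answered the same way, from dictionary views such as <code>apex_release</code> and <code>apex_applications</code> (again, an illustration of the kind of query involved):</p>
<pre><code class="lang-sql">-- Illustration only: instance version plus each application's compatibility mode.
SELECT version_no
FROM   apex_release;

SELECT application_id
,      application_name
,      compatibility_mode
FROM   apex_applications
ORDER BY compatibility_mode;
</code></pre>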
<h1 id="heading-other-prompts">Other Prompts</h1>
<ul>
<li><p>Identify tables in the DEMO schema that are missing foreign keys. Add each missing foreign key to a script called MISSING_FOREIGN_KEYS.sql for my review.</p>
<ul>
<li>Not only did the LLM do a good job of identifying missing foreign keys and creating the respective ALTER TABLE scripts, but it also identified orphaned records in one table!</li>
</ul>
</li>
<li><p>Review database objects in the DEMO schema and files in my codebase to identify unused tables. Create a script called POTENTIAL_TABLE_DROPS.sql with DROP statements for each. Do not execute the drop statements.</p>
</li>
<li><p>Look at SQL queries that have run in the DEMO schema and list the top three poorly performing SQL statements. You may need to run this as a privileged user.</p>
</li>
<li><p>You are an expert technical author with specialist knowledge in Oracle APEX, Oracle Database, PL/SQL, and SQL. You have been assigned to the ‘XYZ project’. The goal of the project is to migrate from a legacy APEX version to the latest version (24.2) while enhancing security and adding the required business functionality. Connect to the 'XYZ' connection using the sqlcl mcp server. Review all of the database objects and APEX applications in the ‘XYZ’ schema. Your goal is to create high-quality, easy-to-read, and concise technical documentation for the support team that will take over support for the application. Create a document called TECHNICAL_DESIGN.md in the DOCS folder.</p>
</li>
</ul>
<h1 id="heading-other-considerations">Other Considerations</h1>
<ul>
<li><p>Ensure the LLM and SQLcl are connected to the correct instance and schema. This is especially important if, like me, you have numerous saved database connections in SQL Developer. Because the LLM is deciding what command to run in SQLcl, it can easily pick the wrong connection. It helps to have clear connection names that differ between clients and instances, e.g., ABCCORP-DEV and CLOUDNUEVA-DEV, as opposed to DEV1 and DEV2.</p>
</li>
<li><p>I recommend creating a read-only schema in non-development instances (a minimal sketch follows this list).</p>
</li>
<li><p>For complex questions, the number of iterations between the LLM and SQLcl can be large. Some questions can take multiple minutes to answer and require a significant number of tokens. 💲</p>
</li>
<li><p>I find it interesting to review both the plan that the LLM generates at the start of the process and the “model reasoning trace” it emits as it goes through the process of answering your question. This helps me to build better prompts.</p>
</li>
<li><p>As models evolve (GPT3 &gt; GPT4 &gt; GPT5), the prompts you enter today may not yield the same results tomorrow. As with any AI technology, it is important to build a list of <a target="_blank" href="https://blog.cloudnueva.com/why-evals-are-important-in-ai-development">Evals</a> that you can use to test new models and prompts against results from previous iterations.</p>
</li>
<li><p>If you do not already, add rich and informative table and column comments. I laid out why this is important in my <a target="_blank" href="https://blog.cloudnueva.com/select-ai-is-not-a-toy#heading-table-amp-column-comments">post about SELECT AI</a>.</p>
</li>
</ul>
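<p>For the read-only schema mentioned above, a minimal sketch (the user name, password, and grant list are placeholders):</p>
<pre><code class="lang-sql">-- Placeholder names: a read-only account for AI/SQLcl access to the DEMO schema.
CREATE USER demo_readonly IDENTIFIED BY "ChangeMe#2025";
GRANT CREATE SESSION TO demo_readonly;
GRANT SELECT ON demo.sales_orders      TO demo_readonly;
GRANT SELECT ON demo.sales_order_lines TO demo_readonly;
GRANT SELECT ON demo.customers         TO demo_readonly;
GRANT SELECT ON demo.products          TO demo_readonly;
</code></pre>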
<h1 id="heading-conclusion">Conclusion</h1>
<p>In the end, the SQLcl MCP server isn’t magic 🪄; it’s just a bridge between your database and an LLM. It saves time, reduces context switching, and helps you think through SQL and APEX problems more efficiently. It won’t replace your judgment or stop you from making a bad call, but it can make routine work faster and more consistent.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Like any tool, it’s only as good as the care and thought that you put behind it.</div>
</div>

<h1 id="heading-appendix-1-configure-amp-test-github-copilot-with-sqlcl-mcp-server">Appendix 1 - Configure &amp; Test GitHub Copilot with SQLcl MCP Server</h1>
<p>Here is a guide to <a target="_blank" href="https://code.visualstudio.com/docs/copilot/getting-started">get started with GitHub Copilot in VS Code</a>.</p>
<h2 id="heading-install-github-copilot-extensions-for-vs-code">Install GitHub Copilot Extensions for VS Code</h2>
<ul>
<li>Install the GitHub Copilot and GitHub Copilot Chat Extensions</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760206602839/4fb22159-614b-4837-88a5-63811b65a24b.png" alt="GitHub Copilot and GitHub Copilot Chat Extensions" class="image--center mx-auto" /></p>
<ul>
<li>Log in using your GitHub account</li>
</ul>
<h2 id="heading-configure-copilot-to-use-the-sqlcl-mcp-server">Configure Copilot to use the SQLcl MCP Server</h2>
<ul>
<li><p>Open the <a target="_blank" href="https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette">VS Code Command Palette</a>.</p>
</li>
<li><p>Type MCP to see the MCP Options</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207088126/845efb6d-4e98-4f26-b857-5582a0274bac.png" alt="Copilot MCP Options" class="image--center mx-auto" /></p>
<ul>
<li>Select MCP: Add Server</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207168597/0f69819d-0633-4d1e-aaed-4a1d494cd7c9.png" alt="Copilot Add MCP Server" class="image--center mx-auto" /></p>
<ul>
<li>Select Command (stdio)</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207228168/02fcb70c-2283-4ae7-8e46-ae90c34416fc.png" alt="Copilot Enter Command" class="image--center mx-auto" /></p>
<ul>
<li>Enter a name for the MCP Server</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207254783/185c0b38-a441-4018-ab4b-70508fb6810b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Select which scope you want it to run in:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207290592/ac9b697a-6dfa-426e-b9c0-1838dd813f41.png" alt="Copilot MCP Server Scope" class="image--center mx-auto" /></p>
<ul>
<li>You should now see the mcp.json file</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207331142/dceb6b93-92ab-4f62-89ca-a57324db11dd.png" alt="Copilot mcp.json" class="image--center mx-auto" /></p>
<ul>
<li>Now that the SQLcl MCP Server is installed, you can start/stop/restart it by opening the menu and selecting &gt; MCP: List Servers</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207605605/84496a0a-f068-4680-82a0-8e153f2188d3.png" alt="Start MCP Server 1" class="image--center mx-auto" /></p>
<ul>
<li>Then select SQLcl</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207725912/2a5b6a0c-9a2f-4230-bb4e-c671eb2febe0.png" alt="Start MCP Server 2" class="image--center mx-auto" /></p>
<ul>
<li>Select ‘Start Server’ to start the MCP Server. The same navigation will allow you to stop an already running MCP Server.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207744614/1d8af4ce-e725-4669-82b9-51ff9c504a1d.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Click ‘Show Output’ to see a log of what it is doing.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760207822059/aba2e6eb-70dd-4fad-a9d4-e5a2fbd9b318.png" alt="Start MCP Server 5" class="image--center mx-auto" /></p>
<p><strong>Note</strong>: VS Code will start the SQLcl MCP Server for you if it is not already started.</p>
<h2 id="heading-test-from-vs-code">Test from VS Code</h2>
<p>Open the Copilot Chat extension and type the following in the chat window:</p>
<p>Connect to DEMO using the SQLcl MCP Server</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760208892836/8fe34b15-dfa7-45df-97fc-e4363561aff6.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🤨</div>
<div data-node-type="callout-text">Oops. What went wrong? Copilot did not route the prompt to the SQLcl MCP server, likely due to a VS Code workspace context or MCP scope issue. I have found that the VS Code extension becomes confused (regarding the SQLcl MCP Server) when a Workspace or any open files are in VS Code. If I close all folders and files and try again…</div>
</div>

<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760209177914/c5afcabf-49ff-4a95-8035-a6cafe3c6550.png" alt="SQLcl MCP Server from Copilot" class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">While it’s great that I am now connected, not being able to ask questions about my database and codebase simultaneously is not ideal. This is the main reason that I now use Codex instead of CoPilot when using the SQLcl MCP Server. You may also have noticed that Copilot is a lot more ‘chatty’, which I don’t like.</div>
</div>]]></content:encoded></item><item><title><![CDATA[Handle Non-Standard API Authentication in APEX]]></title><description><![CDATA[Introduction
Oracle APEX Web Credentials supports many standard REST API authentication schemes. However, not all APIs play nicely - some use custom or unsupported authentication flows.
When that happens, securing and reusing credentials becomes hard...]]></description><link>https://blog.cloudnueva.com/handle-non-standard-api-authentication-in-apex</link><guid isPermaLink="true">https://blog.cloudnueva.com/handle-non-standard-api-authentication-in-apex</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 02 Oct 2025 12:27:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746986710472/c8bb3a56-a7dc-49bb-86ca-b1cbd83a82ad.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p><a target="_blank" href="https://blog.cloudnueva.com/apex-web-credentials">Oracle APEX Web Credentials</a> supports many standard REST API authentication schemes. However, not all APIs play nicely - some use custom or unsupported authentication flows.</p>
<p>When that happens, securing and reusing credentials becomes harder. You lose the built-in power of APEX Web Credentials and the ability to use features like REST Data Sources.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">This post will focus on these authentication outliers. I will show you how, in many cases, we can still use APEX Web Credentials, even if APEX does not support the authentication type of the API you are trying to call.</div>
</div>

<h2 id="heading-goals">Goals</h2>
<ul>
<li><p>Build a reusable and secure authentication solution for non-standard APIs</p>
</li>
<li><p>Leverage Oracle TDE and Data Redaction for at-rest and at-access credential protection</p>
</li>
<li><p>Enable native APEX Web Credential reuse through persistent token injection</p>
</li>
</ul>
<h1 id="heading-use-case">Use Case</h1>
<p>I am working on a project to extract Quote information from <a target="_blank" href="https://servicepath.co/">servicePath</a>. servicePath is a SaaS-based Configure, Price, Quote (CPQ) solution focused on technology sales.</p>
<p>The <a target="_blank" href="https://developer-hub.servicepath.io/reference/intro/getting-started">servicePath REST APIs</a> use an HTTP Header Bearer token for Authentication. A token can be obtained from the <a target="_blank" href="https://developer-hub.servicepath.io/reference/post_api-v3-security-tokens">Generate Access Token</a> endpoint. Here is where the fun starts!</p>
<h2 id="heading-getting-a-servicepath-access-token">Getting a servicePath Access Token</h2>
<p>The servicePath token service requires that you send a username and password in the body of a POST request:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746899891292/a3870a3b-f9c3-497f-b772-b89de8bf8930.png" alt="Postman screenshot showing Service Path Token Service" class="image--center mx-auto" /></p>
<p>After calling the token service, you receive a Bearer token (access token) in a JSON response:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Bearer"</span>: <span class="hljs-string">"eyJhbG..."</span>, 
  <span class="hljs-attr">"Refresh"</span>: <span class="hljs-string">"I4-8Cun..."</span>
}
</code></pre>
<p>The Bearer (access token) is used in the ‘Authorization’ HTTP Header variable to authenticate when calling servicePath APIs:</p>
<pre><code class="lang-bash">curl --location <span class="hljs-string">'https://example.servicepath.io/api/v3/Quotes?page_size=500&amp;last_modified_from=2025-04-23'</span> \
--header <span class="hljs-string">'Content-Type: application/json'</span> \
--header <span class="hljs-string">'Authorization: Bearer eyJhbG...'</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🤦‍♂</div>
<div data-node-type="callout-text">I don't understand why servicePath did not use basic authentication (or even better, OAuth2).</div>
</div>

<p>The servicePath approach rules out using an APEX Web Credential in the classical sense. But all is not lost; we can still use APEX Web Credentials and benefit from REST Data Sources, etc. Read on to find out how.</p>
<h1 id="heading-approach">Approach</h1>
<p>This diagram illustrates the idea I am trying to convey in this post. We use custom code to get and refresh the Bearer token, then store it persistently in an HTTP Header type APEX Web Credential to be available to other APEX sessions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746922023031/619908fa-2941-4892-adce-0ad7411a533c.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-storing-credentials-securely">Storing Credentials Securely</h1>
<p>As we cannot use an APEX Web Credential to store the API credentials themselves (in this case, a username and password), we must first find another secure way to store them.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">⚠</div>
<div data-node-type="callout-text">For ultimate security, you may want to consider using DBMS_CRYPTO or an external keystore to secure these credentials.</div>
</div>

<p>The approach described in this section is much more straightforward and sufficiently secure for most use cases. It relies on <a target="_blank" href="https://docs.oracle.com/en/database/oracle/oracle-database/23/dbtde/changes-this-release-oracle-database-transparent-data-encryption-guide.html">Oracle Transparent Data Encryption</a> (TDE) to secure the data at rest, and <a target="_blank" href="https://docs.oracle.com/en/database/oracle/oracle-database/23/dbred/index.html">Oracle Data Redaction</a> to keep the credentials secure from access via SQL. This approach works seamlessly on OCI Autonomous Databases or on the OCI APEX Service, which uses TDE on all tablespaces without any setup required on your part.</p>
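<p>If you want to confirm that your tablespaces are encrypted at rest, a quick check (run as a privileged user such as ADMIN) is:</p>
<pre><code class="lang-sql">SELECT tablespace_name, encrypted
FROM   dba_tablespaces;
</code></pre>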
<h2 id="heading-step-1-create-a-new-schema">Step 1: Create a New Schema</h2>
<pre><code class="lang-sql"><span class="hljs-comment">-- For OCI ATP, run as ADMIN</span>
<span class="hljs-comment">-- Having a separate schema separates securing the API keys from your APEX Parsing Schema.</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">USER</span> api_sec
  <span class="hljs-keyword">IDENTIFIED</span> <span class="hljs-keyword">BY</span> <span class="hljs-string">"ComplexPassword"</span>
  <span class="hljs-keyword">QUOTA</span> <span class="hljs-number">10</span>M <span class="hljs-keyword">ON</span> <span class="hljs-keyword">users</span>
  <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">TABLESPACE</span> <span class="hljs-keyword">users</span>
  <span class="hljs-keyword">TEMPORARY</span> <span class="hljs-keyword">TABLESPACE</span> temp
  PROFILE <span class="hljs-keyword">default</span>;

<span class="hljs-comment">-- Create Limited Grants to the new user</span>
<span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">SESSION</span>                 <span class="hljs-keyword">TO</span> api_sec;
<span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span>, <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">PROCEDURE</span> <span class="hljs-keyword">TO</span> api_sec;
<span class="hljs-keyword">ALTER</span> <span class="hljs-keyword">USER</span> api_sec <span class="hljs-keyword">DEFAULT</span> <span class="hljs-keyword">ROLE</span> <span class="hljs-keyword">NONE</span>;
</code></pre>
<h2 id="heading-step-2-create-the-credentials-table-in-the-new-schema">Step 2: Create the Credentials Table in the New Schema</h2>
<pre><code class="lang-sql"><span class="hljs-comment">-- Run as user api_sec or admin</span>
<span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> api_sec.api_credentials 
 (credential_code <span class="hljs-built_in">VARCHAR2</span>(<span class="hljs-number">30</span>)   PRIMARY <span class="hljs-keyword">KEY</span>,
  api_user        <span class="hljs-built_in">VARCHAR2</span>(<span class="hljs-number">100</span>)  <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>,
  api_pwd         <span class="hljs-built_in">VARCHAR2</span>(<span class="hljs-number">4000</span>) <span class="hljs-keyword">NOT</span> <span class="hljs-literal">NULL</span>);
</code></pre>
<h2 id="heading-step-3-create-a-data-redaction-policy-for-the-new-table">Step 3: Create a Data Redaction Policy for the New Table</h2>
<pre><code class="lang-sql"><span class="hljs-comment">-- Run as ADMIN</span>
<span class="hljs-comment">-- Any user except for your parsing schema user will see NULL for the API_PWD column</span>
<span class="hljs-keyword">BEGIN</span>
  DBMS_REDACT.ADD_POLICY
   (object_schema   =&gt; <span class="hljs-string">'API_SEC'</span>,
    object_name     =&gt; <span class="hljs-string">'API_CREDENTIALS'</span>,
    column_name     =&gt; <span class="hljs-string">'API_PWD'</span>,
    policy_name     =&gt; <span class="hljs-string">'REDCT_API_CRED_PWD'</span>,
    function_type   =&gt; dbms_redact.full,
    expression      =&gt; <span class="hljs-string">'sys_context(''userenv'',''current_user'') &lt;&gt; ''PARSING_SCHEMA'''</span>,
    <span class="hljs-keyword">enable</span>          =&gt; <span class="hljs-literal">TRUE</span>);
<span class="hljs-keyword">END</span>;
<span class="hljs-comment">-- Where PARSING_SCHEMA is your APEX Parsing Schema</span>
</code></pre>
<h2 id="heading-step-4-create-a-plsql-package-in-the-new-schema">Step 4: Create a PL/SQL Package in the New Schema</h2>
<p>This package will handle all interactions with the <code>api_credentials</code> table.</p>
<p>Create the package spec in the new <code>API_SEC</code> schema.</p>
<h3 id="heading-package-spec">Package Spec</h3>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">OR</span> <span class="hljs-keyword">REPLACE</span> <span class="hljs-keyword">PACKAGE</span> api_credentials_pk <span class="hljs-keyword">AUTHID</span> DEFINER <span class="hljs-keyword">AS</span>

<span class="hljs-comment">-- This procedure is specific to servicePath</span>
<span class="hljs-comment">-- You would create specific procedures similar to this for </span>
<span class="hljs-comment">--   every API that requires their own unique authentication approach.</span>
<span class="hljs-keyword">PROCEDURE</span> set_servicepath_credentials 
  (p_apex_credential <span class="hljs-keyword">IN</span> <span class="hljs-built_in">VARCHAR2</span>,
   p_token_api_url   <span class="hljs-keyword">IN</span> <span class="hljs-built_in">VARCHAR2</span>);

<span class="hljs-comment">-- Create a new record in api_credentials</span>
PROCEDURE create_credential
 (p_credential_code IN api_credentials.credential_code%TYPE,
  p_api_user        IN api_credentials.api_user%TYPE,
  p_new_password    IN api_credentials.api_pwd%TYPE);

<span class="hljs-comment">-- TBD Create APIs to change a password and delete a credential.</span>

<span class="hljs-keyword">END</span> api_credentials_pk;
</code></pre>
<ul>
<li>Create with <code>DEFINER</code> rights so that when we call the APIs from our APEX parsing schema, the APIs can access the un-redacted password in the <code>API_SEC</code> schema.</li>
</ul>
<h3 id="heading-package-body">Package Body</h3>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">OR</span> <span class="hljs-keyword">REPLACE</span> <span class="hljs-keyword">PACKAGE</span> <span class="hljs-keyword">BODY</span> api_credentials_pk <span class="hljs-keyword">AS</span>

<span class="hljs-comment">----------------------------------------------------------</span>
<span class="hljs-comment">-- Notice that we never return the value for API_PWD.</span>
<span class="hljs-comment">-- This prevents a developer from every seeing the password.</span>
<span class="hljs-keyword">PROCEDURE</span> set_servicepath_credentials
  (p_apex_credential <span class="hljs-keyword">IN</span> <span class="hljs-built_in">VARCHAR2</span>,
   p_token_api_url   <span class="hljs-keyword">IN</span> <span class="hljs-built_in">VARCHAR2</span>) <span class="hljs-keyword">IS</span>

  <span class="hljs-keyword">CURSOR</span> cr_credentials <span class="hljs-keyword">IS</span>
    <span class="hljs-keyword">SELECT</span> api_user
    ,      api_pwd
    <span class="hljs-keyword">FROM</span>   api_credentials
    <span class="hljs-keyword">WHERE</span>  credential_code = <span class="hljs-string">'SERVICEPATH'</span>;

  lr_credentials      cr_credentials%ROWTYPE;
  l_body_json         VARCHAR2(1000);
  l_response          CLOB;
  l_access_token      VARCHAR2(4000);
  l_response_obj      json_object_t;

<span class="hljs-keyword">BEGIN</span>

  apex_automation.log_info (<span class="hljs-string">'** Start Refresh servicePath Bearer Token **'</span>);
  apex_automation.log_info ('APEX Credential: '|| p_apex_credential);

  <span class="hljs-comment">-- Get servicePath Credentials</span>
  OPEN cr_credentials;
  FETCH cr_credentials INTO lr_credentials;
  CLOSE cr_credentials;

  <span class="hljs-comment">-- Build JSON with Username and Password.</span>
  l_body_json := JSON_OBJECT('Username' VALUE lr_credentials.api_user,
                             'Password' VALUE lr_credentials.api_pwd);

  <span class="hljs-comment">-- Set HTTP Headers.</span>
  apex_web_service.set_request_headers 
   (p_name_01   =&gt; 'Content-Type', 
    p_value_01  =&gt; 'application/json',
    p_name_02   =&gt; 'Accept', 
    p_value_02  =&gt; 'application/json',
    p_name_03   =&gt; 'User-Agent', 
    p_value_03  =&gt; 'APEX-Integration',
    p_reset     =&gt; TRUE);

  <span class="hljs-comment">-- Call the servicePath Token Web Service.</span>
  l_response := apex_web_service.make_rest_request
                 (p_url         =&gt; p_token_api_url,
                  p_body        =&gt; l_body_json,
                  p_http_method =&gt; 'POST');
  apex_automation.log_info ('Token API HTTP Response: '|| apex_web_service.g_status_code);

  IF apex_web_service.g_status_code &lt;&gt; 201 THEN
    apex_automation.log_error ('Token API Call Failed. Response: '|| l_response);
    raise_application_error(-20010, 'Error getting servicePath Token. HTTP Status Code: ' || apex_web_service.g_status_code);
  ELSE
    -- Store the Bearer token persistently in an APEX HTTP Header Web Credential
    l_response_obj := json_object_t.parse(l_response);
    l_access_token := l_response_obj.get_String('Bearer');
    apex_credential.set_persistent_credentials
     (p_credential_static_id =&gt; p_apex_credential,
      p_key                  =&gt; 'Authorization',
      p_value                =&gt; 'Bearer ' || l_access_token);
    apex_automation.log_info ('Token Set: '|| SUBSTR(l_access_token,1,10)||'...');
  END IF;

  apex_automation.log_info ('** End Refresh servicePath Bearer Token **');

EXCEPTION WHEN OTHERS THEN
  apex_automation.log_error ('Unhandled Error ['|| SQLERRM || ']');
  RAISE;
END set_servicepath_credentials;

----------------------------------------------------------
PROCEDURE create_credential
 (p_credential_code IN api_credentials.credential_code%TYPE,
  p_api_user        IN api_credentials.api_user%TYPE,
  p_new_password    IN api_credentials.api_pwd%TYPE) IS

BEGIN
  INSERT INTO api_credentials
    (credential_code, api_user, api_pwd)
  VALUES
    (p_credential_code, p_api_user, p_new_password);
END create_credential;

END api_credentials_pk;
</code></pre>
<h2 id="heading-step-5-grant-execute-to-the-package">Step 5: Grant Execute to the Package</h2>
<p>Next, allow your APEX parsing schema to run APIs in the package:</p>
<pre><code class="lang-sql"><span class="hljs-comment">-- Run as API_SEC or ADMIN</span>
<span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">EXECUTE</span> <span class="hljs-keyword">ON</span> api_sec.api_credentials_pk <span class="hljs-keyword">TO</span> &lt;&lt;PARSING_SCHEMA&gt;&gt;;
</code></pre>
<h2 id="heading-step-6-add-a-credential">Step 6: Add a Credential</h2>
<p>Create a credential:</p>
<pre><code class="lang-sql"><span class="hljs-comment">-- Run from APEX Parsing Schema.</span>
<span class="hljs-keyword">BEGIN</span>
  api_credentials_pk.create_credential
   (p_credential_code =&gt; <span class="hljs-string">'SERVICEPATH'</span>,
    p_api_user        =&gt; <span class="hljs-string">'myusername'</span>,
    p_new_password    =&gt; <span class="hljs-string">'mypassword'</span>);
<span class="hljs-keyword">END</span>;
</code></pre>
<h2 id="heading-step-7-lock-the-new-schema">Step 7: Lock the New Schema</h2>
<pre><code class="lang-sql"><span class="hljs-comment">-- Run as ADMIN</span>
<span class="hljs-comment">-- Prevents anyone logging into this schema</span>
<span class="hljs-keyword">ALTER</span> <span class="hljs-keyword">USER</span> api_sec <span class="hljs-keyword">ACCOUNT</span> <span class="hljs-keyword">LOCK</span>;
</code></pre>
<h2 id="heading-summary">Summary</h2>
<ul>
<li><p>No user can see the <code>API_PWD</code> column, except for the <code>ADMIN</code> user. Even the <code>API_SEC</code> user cannot see it because we locked the account.</p>
</li>
<li><p>There is no API to get the password. Instead, we have an API to set the APEX Web Credential with the Access token. This prevents developers from seeing the API password (or the token).</p>
</li>
<li><p>No schemas have access to the <code>api_credentials</code> table except for <code>API_SEC</code> (which is locked).</p>
</li>
<li><p>The only way to modify the <code>api_credentials</code> table is via the PL/SQL package <code>api_credentials_pk</code>, which can only be run from your APEX parsing schema.</p>
</li>
</ul>
<h1 id="heading-putting-it-all-together">Putting it all Together</h1>
<p>In this section, we will create an APEX Web Credential to store the Bearer Token and an APEX Automation to update the APEX Web Credential with a new Bearer token on a schedule.</p>
<h2 id="heading-apex-web-credential">APEX Web Credential</h2>
<p>Before creating the Automation, we must create an ‘HTTP Header’ type APEX Web Credential to store the Bearer Token:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746984086278/3443647b-eece-4407-a6e0-f8e6e832b633.png" alt="APEX Web Credential" class="image--center mx-auto" /></p>
<p>The <code>set_servicepath_credentials</code> procedure uses <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/APEX_CREDENTIAL.SET_PERSISTENT_CREDENTIALS-Procedure-Signature-3.html">APEX_CREDENTIAL.SET_PERSISTENT_CREDENTIALS</a> to store the Bearer token. <code>apex_credential.set_persistent_credentials</code> sets the Credential Name and Secret so all APEX sessions can utilize it.</p>
<pre><code class="lang-sql"><span class="hljs-keyword">BEGIN</span>
  apex_credential.set_persistent_credentials
   (p_credential_static_id =&gt; p_apex_credential,
    p_key                  =&gt; <span class="hljs-string">'Authorization'</span>,
    p_value                =&gt; <span class="hljs-string">'Bearer '</span> || l_access_token);
<span class="hljs-keyword">END</span>;
</code></pre>
<ul>
<li><p>The <code>p_key</code> parameter is stored in the ‘Credential Name’ field of the Web Credential.</p>
</li>
<li><p>The <code>p_value</code> parameter is stored in the ‘Credential Secret’ field of the Web Credential.</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Note: Unlike the ‘OAuth2 Client Credentials’ type Web Credential, APEX will not refresh the token for you. This is why we will use an APEX Automation.</div>
</div>

<p>There is an overloaded version of this procedure that accepts a username and password instead of a key and value pair.</p>
<pre><code class="lang-sql"><span class="hljs-keyword">BEGIN</span>
  apex_credential.set_persistent_credentials
   (p_credential_static_id =&gt; p_apex_credential,
    p_username             =&gt; <span class="hljs-string">'YourUserName'</span>,
    p_password             =&gt; <span class="hljs-string">'YourPassword'</span>);
<span class="hljs-keyword">END</span>;
</code></pre>
<h2 id="heading-apex-automation">APEX Automation</h2>
<p>We can use an <a target="_blank" href="https://blog.cloudnueva.com/apex-automations">APEX Automation</a> to update the Web Credential with a new Bearer token on a schedule. In the case of servicePath, the Bearer token is valid for eight hours. Given this, we may want to run the automation every seven hours.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746984313611/cf0b7dae-5b3b-4b35-a795-7d78538f4457.png" alt="APEX Automation to Refresh the Token." class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746984357266/9dd79841-0a0a-4c6b-8790-368226348da0.png" alt="APEX Automation Action" class="image--center mx-auto" /></p>
<ul>
<li><p>In the Automation Action, we call the <code>set_servicepath_credentials</code> API to set the Bearer token in the APEX Credential with a static ID called ‘servicepath’ (see the sketch after this list).</p>
</li>
<li><p>We also pass in the token URL for the servicePath token endpoint.</p>
</li>
</ul>
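<p>For reference, the Automation Action itself is just a single PL/SQL call. A minimal sketch (the credential static ID and token URL shown here are illustrative):</p>
<pre><code class="lang-sql">BEGIN
  api_sec.api_credentials_pk.set_servicepath_credentials
   (p_apex_credential =&gt; 'servicepath',
    p_token_api_url   =&gt; 'https://example.servicepath.io/api/v3/Security/Tokens');
END;
</code></pre>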
<h1 id="heading-using-the-web-credential">Using the Web Credential</h1>
<p>Now that the Bearer token is stored persistently in an APEX Web Credential (and refreshed on a schedule), we can use the HTTP Header type APEX Web Credential the same way as any other APEX Web Credential.</p>
<p>Used in <code>APEX_WEB_SERVICE</code>:</p>
<pre><code class="lang-sql">  l_response := apex_web_service.make_rest_request
                 (p_url                   =&gt; l_api_url,
                  p_http_method           =&gt; 'GET',
                  p_credential_static_id  =&gt; 'servicepath');
</code></pre>
<p>Used in an APEX REST Data Source:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746985657367/eb3284a2-d406-48ee-af17-ae73fde1db24.png" alt="Using the Credential in an APEX REST Source" class="image--center mx-auto" /></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>While Oracle APEX Web Credentials don’t natively support every API authentication scheme, with the right architecture, we can work around those limitations without compromising on security or maintainability.</p>
<p>In this example, we:</p>
<p>✅ <strong>Handled non-standard API auth</strong> — Built a flow to retrieve and refresh Bearer tokens from a servicePath API that doesn’t follow standard auth protocols</p>
<p>✅ <strong>Secured credentials effectively</strong> — Used Oracle Transparent Data Encryption and Data Redaction to store credentials securely on OCI ATP without external key management</p>
<p>✅ <strong>Enabled full APEX integration</strong> — Persisted tokens in APEX Web Credentials so REST Data Sources and APEX_WEB_SERVICE calls can use them transparently</p>
<p>✅ <strong>Automated refresh securely</strong> — Leveraged APEX Automation to update tokens on schedule, preserving a fully native APEX experience</p>
]]></content:encoded></item><item><title><![CDATA[Wrapping APEX_MAIL & Using JSON_OBJECT_T for Placeholders]]></title><description><![CDATA[Introduction
In this post, I’ll share two practical tips to enhance email functionality in Oracle APEX applications:

Why you should wrap the APEX_MAIL PL/SQL procedure with your own API.

How to use JSON_OBJECT_T to manage email placeholders more fl...]]></description><link>https://blog.cloudnueva.com/wrapping-apexmail-and-placeholders</link><guid isPermaLink="true">https://blog.cloudnueva.com/wrapping-apexmail-and-placeholders</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><category><![CDATA[json]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 28 Aug 2025 11:11:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745693849176/c8053cc2-7c97-40cf-8c57-f9847ed86af4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>In this post, I’ll share two practical tips to enhance email functionality in Oracle APEX applications:</p>
<ol>
<li><p>Why you should <strong>wrap the APEX_MAIL PL/SQL procedure</strong> with your own API.</p>
</li>
<li><p>How to use <strong>JSON_OBJECT_T</strong> to manage email placeholders more flexibly.</p>
</li>
</ol>
<p>The second technique is also useful for passing parameters between procedures, without needing to know all the parameters in advance.</p>
<h1 id="heading-using-jsonobjectt-for-placeholders">Using JSON_OBJECT_T for Placeholders</h1>
<h2 id="heading-background">Background</h2>
<p><a target="_blank" href="https://blog.cloudnueva.com/apex-email-templates-advanced-formatting">APEX mail templates</a> allow you to include placeholder variables that are substituted when you send an email using the <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/aeapi/SEND-Procedure-Signature-2.html">APEX_MAIL</a> API. All you have to do is pass JSON in the <code>p_placeholders</code> parameter. This is a very flexible approach.</p>
<p>However, in many apps, you’ll encounter multiple templates that share common fields. Without a structured approach, this often leads to redundant, boilerplate code. If we utilize the <code>JSON_OBJECT_T</code> data type, we can build a more efficient and lower-code solution.</p>
<h2 id="heading-setting-placeholders">Setting Placeholders</h2>
<p>First, we define a reusable procedure to populate common placeholder fields related to quotes:</p>
<pre><code class="lang-sql">PROCEDURE common_email_placeholders
  (p_quote_id     IN quotes.quote_id%TYPE,
   x_placeholders IN OUT NOCOPY json_object_t) IS

  -- Cursor assumed to return the quote columns used below (table/column names illustrative).
  CURSOR cr_quote_info (cp_quote_id IN quotes.quote_id%TYPE) IS
    SELECT account_name, quote_type, quote_number, quote_amount, language_code
    FROM   quotes
    WHERE  quote_id = cp_quote_id;

  lr_quote_info        cr_quote_info%ROWTYPE;

<span class="hljs-keyword">BEGIN</span>

  <span class="hljs-comment">-- Get Details for the Quote.</span>
  <span class="hljs-keyword">OPEN</span>  cr_quote_info (cp_quote_id =&gt; p_quote_id);
  FETCH cr_quote_info INTO lr_quote_info;
  CLOSE cr_quote_info;

  <span class="hljs-comment">-- Set Quote related values.</span>
  x_placeholders.put ('CUSTOMER_NAME', lr_quote_info.account_name);
  x_placeholders.put ('QUOTE_TYPE', lr_quote_info.quote_type);
  x_placeholders.put ('QUOTE_NAME', lr_quote_info.quote_number);
  x_placeholders.put ('QUOTE_AMOUNT', lr_quote_info.quote_amount);

  <span class="hljs-comment">-- Set other common values e.g. links back to our app, instance, name, and language.</span>
  x_placeholders.put ('LANGUAGE_CODE', lr_quote_info.language_code);
  x_placeholders.put ('INSTANCE_NAME', get_instance_name());
  x_placeholders.put ('CUSTOMER_APP_LINK', get_app_url());
  x_placeholders.put ('PORTAL_APP_LINK', portal_url());

<span class="hljs-keyword">END</span> common_email_placeholders;
</code></pre>
<p>The common procedure can include all possible placeholders.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">It’s OK if we have additional fields in the JSON that are not referenced in all the templates. APEX will only substitute the placeholders that appear in the currently referenced template.</div>
</div>

<h2 id="heading-procedure-to-send-the-email">Procedure to Send the Email</h2>
<p>Next, we create a procedure to send a specific type of email, such as a request for approval. It builds on the common placeholders and adds email-specific fields if needed:</p>
<pre><code class="lang-sql">PROCEDURE email_request_for_approval
  (p_quote_id    IN quotes.quote_id%TYPE,
   p_rule_name   IN VARCHAR2,
   p_to_email    IN VARCHAR2,
   p_comments    IN VARCHAR2 DEFAULT NULL) IS

  l_placeholders_obj   json_object_t := json_object_t();

<span class="hljs-keyword">BEGIN</span>

  <span class="hljs-comment">-- Set common email placeholders.</span>
  common_email_placeholders
   (p_quote_id     =&gt; p_quote_id,
    x_placeholders =&gt; l_placeholders_obj);

  <span class="hljs-comment">-- Add placeholders specific to this particular email.</span>
  l_placeholders_obj.put ('APPROVAL_TYPE', p_rule_name);

  <span class="hljs-comment">-- Call wrapper API to send the email (see below).</span>
  send_email
   (p_to                 =&gt; p_to_email,
    p_template_static_id =&gt; 'APPROVAL_REQUIRED',
    p_placeholders       =&gt; l_placeholders_obj);

<span class="hljs-keyword">END</span> email_request_for_approval;
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">By leveraging <code>JSON_OBJECT_T</code>, we gain tremendous flexibility. You can pass additional data between procedures <strong>without changing the parameter lists</strong>, which reduces code maintenance overhead.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">⚠</div>
<div data-node-type="callout-text">Of course, with great power comes great responsibility. I am not advocating that we start creating all procedures and functions with just a single <code>JSON_OBJECT_T</code> parameter.</div>
</div>

<h1 id="heading-why-wrap-apexmail">Why Wrap APEX_MAIL?</h1>
<p>Instead of calling <code>APEX_MAIL</code> directly, I recommend using a wrapper procedure. Here’s a basic example:</p>
<pre><code class="lang-sql">PROCEDURE send_email
  (p_to                 IN VARCHAR2,
   p_cc                 IN VARCHAR2 DEFAULT NULL,
   p_from               IN VARCHAR2 DEFAULT NULL,
   p_template_static_id IN VARCHAR2,
   p_placeholders       IN json_object_t) IS

  l_to_email       VARCHAR2(32000);
  l_cc_email       VARCHAR2(32000);
  l_placeholders   CLOB;

<span class="hljs-keyword">BEGIN</span>

  <span class="hljs-comment">-- Convert Placeholders json_object_t object to a CLOB required by APEX_MAIL.</span>
  l_placeholders := p_placeholders.to_Clob;

  <span class="hljs-comment">-- Whitelist To email dlist. </span>
  <span class="hljs-comment">-- Useful for testing, avoid sending customers emails from development or test instances.</span>
  l_to_email := apply_email_whitelist (p_email_address =&gt; p_to);
  IF p_cc IS NOT NULL THEN
    <span class="hljs-comment">-- Whitelist CC email dlist.</span>
    l_cc_email := apply_email_whitelist (p_email_address =&gt; p_cc);
  <span class="hljs-keyword">END</span> <span class="hljs-keyword">IF</span>;

  <span class="hljs-comment">-- TBD additional code that checks a setting and does not send the email at all.</span>

  <span class="hljs-comment">-- Send the Email.</span>
  apex_mail.send 
   (p_to                 =&gt; l_to_email,
    p_cc                 =&gt; l_cc_email,
    p_from               =&gt; NVL(p_from,  'Quoting &lt;quoting@example.com&gt;'),
    p_replyto            =&gt; 'no-reply@example.com',
    p_template_static_id =&gt; p_template_static_id,  
    p_placeholders       =&gt; l_placeholders);  

  <span class="hljs-comment">-- TBD additional code to log the email.</span>
  <span class="hljs-comment">-- insert into email_log (to_email, subject, body, sent_on) values (...);</span>

<span class="hljs-keyword">END</span> send_email;
</code></pre>
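<p>The <code>apply_email_whitelist</code> helper is not shown above. A minimal sketch of the idea, assuming a <code>get_instance_name</code> helper like the one used in <code>common_email_placeholders</code> (the instance check and override address are illustrative):</p>
<pre><code class="lang-sql">FUNCTION apply_email_whitelist
  (p_email_address IN VARCHAR2) RETURN VARCHAR2 IS
BEGIN
  -- In production, leave the recipient list untouched.
  IF get_instance_name() = 'PROD' THEN
    RETURN p_email_address;
  END IF;
  -- In all other instances, re-route the email to a safe internal mailbox.
  RETURN 'dev@example.com';
END apply_email_whitelist;
</code></pre>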
<h2 id="heading-key-benefits-of-wrapping-apexmail"><strong>Key Benefits of Wrapping APEX_MAIL</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Description</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Email Whitelisting</strong></td><td>In non-production instances, we mustn’t send emails to the intended recipients. Having a wrapper for <code>APEX_MAIL</code> allows us to apply whitelisting logic to all email addresses before sending the email. For example, if an email was intended for (ceo@example.com, cfo@example.com), we can intercept it in DEV and replace it with (dev@example.com, dev2@example.com). This allows us to test approval emails and other messages without risking sending them to actual recipients.</td></tr>
<tr>
<td><strong>Disable Sending Easily</strong></td><td>Centralized check to suppress any emails from being sent if needed.</td></tr>
<tr>
<td><strong>Custom Logging</strong></td><td>We may also want to log emails to a table, including the email body, which APEX does not store in its email logs. Custom logging can be expanded to have a scheduled process that checks with the email service provider to confirm that emails have been delivered.</td></tr>
<tr>
<td><strong>Switch Email Providers</strong></td><td>Having a wrapper API allows us to change the email provider with minimal impact on our code. For example, we may want to send emails using a service like <a target="_blank" href="https://sendgrid.com/">SendGrid</a> instead of <code>APEX_MAIL</code>.</td></tr>
<tr>
<td><strong>Standardized Error Handling</strong></td><td>Centralize exception management for all outgoing emails (optional enhancement).</td></tr>
</tbody>
</table>
</div><h1 id="heading-conclusion">Conclusion</h1>
<blockquote>
<p>Wrapping APEX_MAIL and using JSON_OBJECT_T for placeholders are simple but powerful techniques to improve your APEX application’s email functionality.</p>
<ul>
<li><p>Wrappers improve control, security, and maintainability.</p>
</li>
<li><p>JSON-based placeholders enable flexible and scalable email generation.</p>
</li>
</ul>
<p>These strategies reduce technical debt today and future-proof your application for tomorrow.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Is Your APEX Environment Aging Quietly]]></title><description><![CDATA[Introduction
No one gets excited about technical debt, but when your APEX instance is still on 18.2 (or earlier), its “interest clock” is already ticking.
Oracle offers full support for only 18 months per APEX release. Upgrading usually takes less ti...]]></description><link>https://blog.cloudnueva.com/is-your-apex-environment-aging-quietly</link><guid isPermaLink="true">https://blog.cloudnueva.com/is-your-apex-environment-aging-quietly</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 31 Jul 2025 11:50:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747245713337/4a990ed6-4b11-47a1-9292-2f50e01404a7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>No one gets excited about technical debt, but when your APEX instance is still on 18.2 (or earlier), its “interest clock” is already ticking.</p>
<p>Oracle offers full support for only 18 months per APEX release. Upgrading usually takes less time than your weekly sprint demo; yet, thousands of production apps still run on outdated versions.</p>
<p>Staying current isn’t just about running the latest APEX version. It’s about proactively addressing deprecated features and aligning Instance, Workspace, and Applications settings with the latest features.</p>
<p>This post will follow a case study of a client running on-premises APEX (18.2) who is migrating to an Autonomous Transaction Processing Database (ATP) instance running on Oracle Cloud Infrastructure (OCI).</p>
<h1 id="heading-case-study">Case Study</h1>
<p>The client went live with their old system when APEX 18.2 came out in the fall of 2018 (seven years ago). Premier support for APEX 18.2 ran out at the end of March 2023 (two years ago). I thought, “Wow, this client has been running their APEX environment for seven years and has had virtually no maintenance costs.”</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">⚠</div>
<div data-node-type="callout-text">That is the blessing and the curse of APEX. It runs and runs with very little intervention, so you can fall into the trap of thinking you never need to upgrade it.</div>
</div>

<h1 id="heading-apex-remediations">APEX Remediations</h1>
<p>This section describes APEX remediations I discovered while assessing the client’s APEX 18.2 environment for upgrade to APEX 24.2.</p>
<h2 id="heading-include-legacy-javascript">Include Legacy JavaScript</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; User Interface &gt; User Interface Attributes &gt; Include Legacy JavaScript &gt; Include Deprecated or Desupported JavaScript Functions</div>
</div>

<p>There are two checkboxes: ‘Pre 18.1’ and ‘18.x’. They tell APEX to load legacy JavaScript libraries found in <code>/i/libraries/apex/legacy*.js</code>. These incur unnecessary overhead and indicate that you are not taking good care of your APEX environment!</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Disable these options, perform a regression test, and remediate as necessary.</div>
</div>

<h2 id="heading-include-jquery-migrate">Include jQuery Migrate</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; User Interface &gt; User Interface Attributes &gt; Include Legacy JavaScript &gt; Include jQuery Migrate</div>
</div>

<p>Similar to the previous option, this is a crutch for Apps using legacy jQuery code.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Disable these options, run a regression test, and remediate as necessary.</div>
</div>

<h2 id="heading-compatibility-mode">Compatibility Mode</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; Application Definition &gt; Definition &gt; Compatibility Mode</div>
</div>

<p>Certain APEX runtime behaviors change from one release to the next. This option allows you to avoid the impact of these changes during an upgrade. The <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/htmrn/changed-behavior.html#GUID-712BE54F-08CD-43A3-A645-87B9360ED516">release notes</a> list compatibility mode changes dating back to APEX 4.1.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">After an upgrade, set all apps to the latest compatibility mode and remediate them as necessary.</div>
</div>

<h2 id="heading-upgrade-universal-theme">Upgrade Universal Theme</h2>
<h3 id="heading-not-on-the-universal-theme-yikes">Not on the Universal Theme - Yikes!</h3>
<p>In this use case, one of the apps did not use the Universal Theme. Migrating from legacy themes to the Universal Theme is a significant undertaking and will likely require remediation and possibly even a UI redesign. See the ‘Migrating from Other Themes’ tab of the Migration Guide in the <a target="_blank" href="https://apex.oracle.com/pls/apex/apex_pm/r/ut/migration-guide">APEX Universal Theme App</a> to learn how to migrate to the Universal Theme.</p>
<h3 id="heading-refreshing-the-universal-theme">Refreshing the Universal Theme</h3>
<p>Assuming your Apps are using the Universal Theme, the steps to refresh it are simple and are laid out in the ‘Refresh Universal Theme’ tab of the Migration Guide in the <a target="_blank" href="https://apex.oracle.com/pls/apex/apex_pm/r/ut/migration-guide">APEX Universal Theme App</a>.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Refresh the Universal Theme for all your Apps after every upgrade.</div>
</div>

<h2 id="heading-upgrade-application">Upgrade Application</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Application &gt; Utilities &gt; Upgrade Application</div>
</div>

<p>Specific APEX components undergo changes in their implementation. For example, between APEX 23.1 and APEX 24.1, the Rich Text Editor moved through three frameworks: CKEditor &gt; TinyMCE &gt; Oracle Rich Text Library. The Date Picker has undergone similar changes to its implementation.</p>
<p>While APEX does a good job of allowing old implementations to work alongside the new, it is good practice to update to the latest implementation when you upgrade APEX.</p>
<p>One way to do this is to run the Upgrade Application utility. This will provide a report of components that you should transition to, along with suggestions on changes to make based on updates to existing components.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747154178744/03af479b-ebf9-4ac0-a903-fa6b65480aa4.png" alt="APEX Utility - Upgrade Application" class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747154411662/39106980-3bfe-4b51-a86b-dc97113a8de5.png" alt="Example APEX Upgrade Application Report" class="image--center mx-auto" /></p>
<p>While not all suggestions need to be actioned, it is worth checking this report after every upgrade. For example, in the above report, ‘Upgrade jQuery Date Picker to new Date Picker’ should be addressed.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Run the ‘Upgrade Application’ utility after every upgrade.</div>
</div>

<h2 id="heading-check-security-settings">Check Security Settings</h2>
<p>This exercise provides an excellent opportunity to ensure that you have all the recommended security settings in place. While this list is not intended to be a comprehensive security checklist, it highlights settings that are incorrectly configured for this customer and are worth reviewing in your instance.</p>
<h3 id="heading-application-level-authorization-scheme">Application Level Authorization Scheme</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; Security Attributes &gt; Authorization Scheme</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">All applications should have an Application Level Authorization scheme. This maintains a minimum level of Authorization should you forget to add Authorization at the Page Level.</div>
</div>

<h3 id="heading-page-level-authorization-schemes">Page Level Authorization Schemes</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">All pages should have an Authorization scheme. This shows you have considered which users should be able to access the page.</div>
</div>

<h3 id="heading-application-security-runtime-api-usage">Application Security Runtime API Usage</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; Security Attributes &gt; Application Security Runtime API Usage</div>
</div>

<p>These options restrict what impact an application can have on your APEX environment. For example, to use the API <code>APEX_UTIL.CREATE_USER</code> in an APEX Application, the application must permit modification of the workspace repository.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Uncheck all of these options unless you have a specific use case that requires their use.</div>
</div>

<h3 id="heading-bookmark-hash-function">Bookmark Hash Function</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; Security Attributes &gt; Bookmark Hash Function</div>
</div>

<p>This was set to MD5 and needed to be SHA-2, 512-bit.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Ensure the Bookmark hash function uses the most advanced hashing algorithm available.</div>
</div>

<h3 id="heading-browser-caching">Browser Caching</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; Security Attributes &gt; Browser Security &gt; Cache</div>
</div>

<p>This was enabled. I can’t think of a good reason to have browser caching switched on.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">This should be turned off.</div>
</div>

<h3 id="heading-session-state-protection">Session State Protection</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; Security Attributes &gt; <strong>Session State Protection</strong></div>
</div>

<p>Although this was enabled at the Application Level (which, by itself, does nothing), it was only set for about half of the pages. Not having this set for every page makes URL tampering a real risk.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Always enable at the Application Level. Set for every page level, unless you have a compelling reason not to.</div>
</div>

<h3 id="heading-session-management">Session Management</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; Security Attributes &gt; <strong>Session Management</strong></div>
</div>

<p>While this was set for the applications in this case study, it is worth noting, especially for applications that haven’t been updated in a long time. Setting appropriate <a target="_blank" href="https://blog.cloudnueva.com/all-about-apex-timeouts">idle and session timeouts</a> is essential.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Always set session and idle timeouts for your applications.</div>
</div>

<h2 id="heading-other-settings-to-check">Other Settings to Check</h2>
<h3 id="heading-friendly-urls">Friendly URLs</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; Application Definition &gt; Friendly URLs</div>
</div>

<p>Enabling Friendly URLs changes how APEX handles URLs from the legacy <code>f?p=</code> syntax to a path-based syntax. It is also a prerequisite for enabling PWAs.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Switch it on, go on, you know you want to do it.</div>
</div>

<h3 id="heading-progressive-web-app"><strong>Progressive Web App</strong></h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Shared Components &gt; <strong>Progressive Web App &gt; </strong>Enable Progressive Web App</div>
</div>

<p>Enable the ‘Enable Progressive Web App’ setting as a minimum. This provides performance improvements by serving static files more efficiently using advanced caching. There are also other compelling PWA features worth considering.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Enable ‘Enable Progressive Web App’.</div>
</div>

<h2 id="heading-instance-amp-workspace-settings">Instance &amp; Workspace Settings</h2>
<p>As APEX evolves, new options are added at the Instance (INTERNAL/Administration Services) and Workspace levels. It is essential to verify these settings after an upgrade to ensure that any newly added options are configured correctly.</p>
<h3 id="heading-instance">Instance</h3>
<p>Some more recently added instance settings that you should check:</p>
<ul>
<li><p>Manage Instance &gt; Security</p>
<ul>
<li><p>AI Enabled (on by default)</p>
</li>
<li><p>Allow Persistent Auth (Off by default)</p>
</li>
</ul>
</li>
<li><p>Manage Instance &gt; Instance Settings</p>
<ul>
<li><p>Workflow Settings</p>
</li>
<li><p>Background Jobs</p>
</li>
</ul>
</li>
</ul>
<p>Not recent but worth checking anyway:</p>
<ul>
<li>Manage Instance &gt; Security &gt; Require HTTPS (should be on)</li>
</ul>
<h3 id="heading-workspace">Workspace</h3>
<p>Some more recently added workspace settings that you should check:</p>
<ul>
<li><p>Manage Workspaces &gt; Existing Workspaces &gt; Edit Workspace Information</p>
<ul>
<li><p>AI Enabled</p>
</li>
<li><p>Maximum Background Page Process Jobs (on-premises only)</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-plugins">Plugins</h2>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Application &gt; Shared Components &gt; Plug-ins</div>
</div>

<p>The applications that were part of this exercise used five plugins. All of them are no longer actively supported. The good news is that they can all be replaced with standard functionality in APEX 24.2.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Check all plugins: 1) Can they be replaced by standard functionality? 2) Make sure they are still actively supported. 3) If they are not supported, ensure you understand how they work so you can resolve any issues that may arise after an upgrade.</div>
</div>

<h2 id="heading-tabular-forms">Tabular Forms</h2>
<p>Tabular forms were used extensively. Tabular forms were deprecated in APEX 20.1 and are considered <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/htmdb/managing-tabular-forms.html">legacy</a>.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Replace tabular forms with Interactive Grids as soon as possible. It is only a matter of time before they cause problems.</div>
</div>

<h2 id="heading-deprecated-amp-de-supported-featuresapis">Deprecated &amp; De-Supported Features/APIs</h2>
<p>Check for references to deprecated APEX features, PL/SQL, and JavaScript APIs. In my use case, I found calls to <code>htmldb_mail.send</code> and <code>wwv_flow_mail.push_queue</code>, and usage of legacy data load definitions and tabular forms.</p>
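<p>One way to scan your own PL/SQL for such references is a simple query against <code>USER_SOURCE</code>. A sketch; extend the list of deprecated names as needed:</p>
<pre><code class="lang-sql">SELECT name, type, line, text
FROM   user_source
WHERE  UPPER(text) LIKE '%HTMLDB_MAIL%'
   OR  UPPER(text) LIKE '%WWV_FLOW_MAIL%'
ORDER  BY name, line;
</code></pre>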
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">If you do not address remediation of deprecated to de-supported features, at least add remediation for these to your backlog. Incorporate these remediations the next time you need to change the impacted application(s).</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Having your code in GitHub makes finding references to deprecated APIs much easier.</div>
</div>

<h1 id="heading-modern-authentication">Modern Authentication</h1>
<p>For this use case, the customer utilized a custom table-based Authentication Scheme with plain-text passwords. Legacy apps often use hardcoded custom authentication schemes that are now better handled with APEX Social sign-on and modern authentication providers, such as Active Directory and Okta.</p>
<p>This even applies to APEX Builder. <a target="_blank" href="https://blog.cloudnueva.com/oracle-apex-builder-social-sign-on">Configuring APEX Builder to use SSO</a> enhances the security of your development environment and streamlines login for your developers.</p>
<h1 id="heading-apex-advisor">APEX Advisor</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">➡</div>
<div data-node-type="callout-text">Navigation: Utilities &gt; Advisor</div>
</div>

<p>The APEX advisor appears to be receiving more attention from the APEX development team, and as of APEX 24.2, is starting to provide valuable insights again.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Run before and after any upgrades.</div>
</div>

<h1 id="heading-release-notes">Release Notes</h1>
<p>If all you did for each APEX upgrade were to read the <a target="_blank" href="https://docs.oracle.com/en/database/oracle/apex/24.2/htmrn/index.html">release notes,</a> you would be more than halfway there. Just look at the index. Why would you not want to know these things before upgrading?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747157470002/ac3f104f-5d03-4759-b1fa-c006f8153ad7.png" alt="Screenshot of Oracle APEX Release Notes" class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">✅</div>
<div data-node-type="callout-text">Always read the release notes for each new version of APEX.</div>
</div>

<h1 id="heading-what-about-ords">What About ORDS?</h1>
<p>Everything I have said about APEX applies to Oracle REST Data Services (ORDS). In every release, ORDS introduces new features, deprecations, performance improvements, and bug fixes. In short, whatever steps you take for APEX, you should also take for ORDS.</p>
<h1 id="heading-what-about-the-database">What About the Database?</h1>
<p>The great thing about APEX running in the database is that with each major database release, APEX receives a direct boost in functionality and performance enhancements. When performing a database upgrade, review the new features to identify areas where you can utilize modern approaches and features. Some examples in 19c and 23ai include:</p>
<ul>
<li><p>Significant improvements to how you handle JSON in the database. This includes much more efficient <a target="_blank" href="https://blog.cloudnueva.com/for-speeds-sake-stop-using-apexjson">JSON Parsing</a> since 19c. It also enables the inclusion of additional attributes in a JSON column.</p>
</li>
<li><p><a target="_blank" href="https://blog.cloudnueva.com/23ai-vector-search-in-apex">Vector/Semantic search</a> in 23ai significantly enhances search and serves as the building block for AI techniques, such as Retrieval-Augmented Generation (RAG).</p>
</li>
<li><p><a target="_blank" href="https://docs.oracle.com/en/database/oracle/oracle-database/23/lnpls/SQL_MACRO-clause.html">SQL Macros</a> (since 19c) allow you to create parameterized views, which can boost the performance of your APEX reports.</p>
</li>
<li><p><a target="_blank" href="https://www.oracle.com/database/json-relational-duality/">JSON Relational Duality Views</a> provide a configurable JSON interface to relational tables.</p>
</li>
</ul>
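<p>To make the SQL Macro item concrete, here is a minimal sketch of a table macro acting as a parameterized view. The table and column names are assumptions for illustration only:</p>
<pre><code class="lang-sql">-- A table SQL macro: the returned text is expanded into the calling query,
-- so it behaves like a parameterized view.
CREATE OR REPLACE FUNCTION orders_by_status (p_status VARCHAR2)
  RETURN VARCHAR2 SQL_MACRO(TABLE)
IS
BEGIN
  RETURN q'[
    SELECT order_id, order_date, customer_number, total_amount
      FROM sales_orders
     WHERE status = p_status
  ]';
END;
/

-- Use it like a view, for example as an APEX report region source:
SELECT * FROM orders_by_status('OPEN');
</code></pre>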
<h1 id="heading-reasons-to-stay-up-to-date">Reasons to Stay Up to Date</h1>
<ol>
<li><p><strong>Oracle Support</strong> - After a new version of APEX is released, Oracle provides support for it for 18 months.</p>
</li>
<li><p><strong>Security</strong> - By running the latest version, you are running the most secure version of APEX. Also, remember to update the recommended security settings accordingly.</p>
</li>
<li><p><strong>Return on Investment</strong> - Get the most from your investment by running the latest APEX version and enabling its new features.</p>
</li>
<li><p><strong>Performance</strong> - Take advantage of performance improvements included in the latest version.</p>
</li>
<li><p><strong>Developer Productivity</strong> - The latest features make building APEX Apps even quicker.</p>
</li>
<li><p><strong>Developer Sanity</strong> - Keep your developers happy (no one wants to be working on APEX 18.2).</p>
</li>
</ol>
<h1 id="heading-reasons-why-we-dont-stay-up-to-date">Reasons Why We Don’t Stay Up to Date</h1>
<p>For every reason why it is a good idea to stay up to date, there is another reason why people don’t:</p>
<ol>
<li><p>Fear of breaking Apps with a lot of Custom JavaScript.</p>
</li>
<li><p>Fear of breaking something by making changes to settings.</p>
</li>
<li><p>Fear of breaking Plugins.</p>
</li>
<li><p>Lack of resources to remediate and/or regression test after every upgrade.</p>
</li>
<li><p>Locked into a Custom Theme that cannot be refreshed or converted to the Universal Theme.</p>
</li>
</ol>
<h1 id="heading-how-can-i-avoid-this-fate">How Can I Avoid This Fate?</h1>
<p>Follow these steps to avoid the fate of the customer in my use case:</p>
<ol>
<li><p>Avoid plugins unless they provide a measurable differentiator for your business. For example, APEX Office Print from United Codes provides functionality that is fundamental to business applications and is not available in APEX out of the box.</p>
</li>
<li><p>If your business depends on a Plugin, make sure you either know how to fix it yourself or that a reputable company, such as United Codes, supports it.</p>
</li>
<li><p>Check the release notes to see if the latest version has a feature that allows you to remove a plugin or simplify code.</p>
</li>
<li><p>Avoid custom JavaScript unless it provides a measurable business impact.</p>
</li>
<li><p>Do not unsubscribe from the Universal Theme.</p>
</li>
<li><p>Upgrade APEX (and ORDS) at least once a year.</p>
</li>
<li><p>Read the release notes for <strong>every APEX release</strong>.</p>
</li>
<li><p>If you cannot address a remediation in the current upgrade cycle, at least document it so that you are aware of the technical debt you are accumulating by not taking action. Documenting deferred remediations has the added benefit of giving you a list to work through during the year. I suggest folding these items into your sprints while working on other changes to your Apps. You could also reserve a couple of dedicated sprints each year to handle remediation work.</p>
</li>
<li><p>Build regression test scripts (manual or automated) to allow you to run regressions after each upgrade.</p>
</li>
</ol>
<h1 id="heading-conclusion">Conclusion</h1>
<p>An aging APEX environment rarely screams for attention. All the same, it quietly accumulates risk, inefficiency, and missed opportunity. As this case study shows, the illusion of stability can mask a mounting backlog of deprecated features, security gaps, and unsupported components.</p>
<p>The payoff is modern security, faster performance, happier developers, and the freedom to adopt new features with confidence. Don’t let inertia become your architecture. Make staying current part of your culture, not just your roadmap.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">I urge you to consider upgrading APEX (and ORDS) at least once a year and maintain a backlog of remediations along with a plan to address them.</div>
</div>]]></content:encoded></item><item><title><![CDATA[AI Function Calling with APEX]]></title><description><![CDATA[Introduction
One of the upcoming features described by the APEX Development team at KSCOPE25 was Custom Tools (also known as Functions). Although this feature is not yet available, this post describes how AI tools like OpenAI utilize functions and ho...]]></description><link>https://blog.cloudnueva.com/ai-function-calling-with-apex</link><guid isPermaLink="true">https://blog.cloudnueva.com/ai-function-calling-with-apex</guid><category><![CDATA[orclapex]]></category><category><![CDATA[#oracle-apex]]></category><category><![CDATA[AI]]></category><category><![CDATA[functions]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 17 Jul 2025 11:59:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750715695572/c07be813-8867-468a-b20e-7580ca2211d1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>One of the upcoming features described by the APEX Development team at KSCOPE25 was Custom Tools (also known as Functions). Although this feature is not yet available, this post describes how AI tools like OpenAI utilize functions and how we can implement them in our APEX Apps today.</p>
<p>Before I begin, I would like to clarify the distinction between Tools and Functions in the context of AI. LLMs utilize tools to perform external actions that assist the LLM in answering a question. Functions are a type of Tool that allows you to run custom code and return the results to the LLM.</p>
<h1 id="heading-how-functions-work">How Functions Work</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The purpose of using functions is to allow the LLM to utilize your business logic and data when answering a user’s question.</div>
</div>

<blockquote>
<p>When I first looked into functions, I thought that the LLM directly called the function. This put me off looking into functions because, for most business use cases, you would not want a public LLM to be able to access your business data via an API call.</p>
</blockquote>
<p>It was at KSCOPE that I realized how functions actually work:</p>
<ol>
<li><p>Pass the LLM the user’s question and a list of potential functions.</p>
</li>
<li><p>The LLM returns a list of functions (and their corresponding parameter values) that it wants you to run to help it answer the user’s question.</p>
</li>
<li><p>You run the requested function(s) and call the LLM a second time with the user’s question and the results of running the function(s).</p>
</li>
<li><p>The LLM uses the results from the function calls (and its general knowledge) to answer the question.</p>
</li>
</ol>
<p>The diagram below shows an example process flow:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750639097623/f99fc1c9-37b5-491d-a67f-2259b162d1c9.png" alt="Diagram showing how AI Function Calling Works" class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Note that functions don’t have to be about returning data. You can create transactions, start workflows, and call web services. Anything you can do from PL/SQL, you can expose via a function.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The LLM does not directly invoke any code. Instead, it <em>proposes</em> function calls. Your server-side code must interpret these proposals, validate inputs, execute the logic, and return the results to the LLM.</div>
</div>

<h1 id="heading-why-not-use-rag">Why not use RAG?</h1>
<p>Retrieval Augmented Generation (RAG) involves searching for and sending a subset of your data to the LLM to use for context when answering a question.</p>
<p>Functions offer an alternative to RAG solutions, enabling LLMs to act on your business data. RAG solutions often rely on passing a significant amount of your data to the LLM for context. Functions can significantly reduce the amount of data (and therefore the cost) by allowing the LLM to request only the data it needs to answer the question.</p>
<h1 id="heading-example">Example</h1>
<p>For my example, I would like to request information about a Sales Order. I will be using the OpenAI <a target="_blank" href="https://platform.openai.com/docs/api-reference/responses">Responses API</a>. The documentation on functions can be found <a target="_blank" href="https://platform.openai.com/docs/guides/function-calling?api-mode=responses">here</a>.</p>
<h2 id="heading-plsql-function">PL/SQL Function</h2>
<p>I created a simple PL/SQL function that returns information about a Sales Order and compiled it in my database.</p>
<pre><code class="lang-sql"><span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">OR</span> <span class="hljs-keyword">REPLACE</span> <span class="hljs-keyword">FUNCTION</span> get_order_info(p_order_id <span class="hljs-built_in">NUMBER</span>, p_info_type <span class="hljs-built_in">VARCHAR2</span>) <span class="hljs-keyword">RETURN</span> <span class="hljs-built_in">VARCHAR2</span> <span class="hljs-keyword">IS</span>
    l_order_id        sales_orders.order_id%<span class="hljs-keyword">TYPE</span>;
    l_order_date      sales_orders.order_date%TYPE;
    l_customer_number sales_orders.customer_number%TYPE;
    l_total_amount    sales_orders.total_amount%TYPE;
    l_status          sales_orders.status%TYPE;
<span class="hljs-keyword">BEGIN</span>
    <span class="hljs-keyword">SELECT</span> order_id, order_date, customer_number, total_amount, <span class="hljs-keyword">status</span>
      <span class="hljs-keyword">INTO</span> l_order_id, l_order_date, l_customer_number, l_total_amount, l_status
      <span class="hljs-keyword">FROM</span> sales_orders
     <span class="hljs-keyword">WHERE</span> order_id = p_order_id;

    IF LOWER(p_info_type) = 'order_id' THEN
        RETURN l_order_id;
    ELSIF LOWER(p_info_type) = 'order_date' THEN
        RETURN TO_CHAR(l_order_date, 'YYYY-MM-DD');
    ELSIF LOWER(p_info_type) = 'customer_number' THEN
        RETURN l_customer_number;
    ELSIF LOWER(p_info_type) = 'total_amount' THEN
        RETURN TO_CHAR(l_total_amount);
    ELSIF LOWER(p_info_type) = 'status' THEN
        RETURN l_status;
    ELSE
        RETURN 'Invalid info_type';
    <span class="hljs-keyword">END</span> <span class="hljs-keyword">IF</span>;
EXCEPTION WHEN NO_DATA_FOUND THEN
  RETURN 'Order not found.';
<span class="hljs-keyword">END</span>;
</code></pre>
<h2 id="heading-end-to-end-example-in-json">End-To-End Example in JSON</h2>
<p>Let’s walk through an example illustrating the JSON that is passed back and forth at each step. Note: In the JSON examples, I have omitted parts of the payload (e.g., model, temperature) that are not explicitly related to functions.</p>
<p>For each of the calls to the LLM, we can use <code>apex_web_service.make_rest_request</code>.</p>
<pre><code class="lang-sql">  apex_web_service.set_request_headers 
   (p_name_01   =&gt; 'Content-Type', 
    p_value_01  =&gt; 'application/json',
    p_name_02   =&gt; 'Authorization', 
    p_value_02  =&gt; 'Bearer YOURAPIKEYGOESHERE',
    p_reset     =&gt; TRUE);

  l_response := apex_web_service.make_rest_request
   (p_url         =&gt; 'https://api.openai.com/v1/responses',
    p_http_method =&gt; 'POST',
    p_body        =&gt; l_json);
</code></pre>
<h3 id="heading-json-for-initial-llm-call-with-function-definitions-call-1">JSON for Initial LLM Call with Function Definitions (Call 1)</h3>
<pre><code class="lang-json">{
  <span class="hljs-attr">"input"</span>: [
    {
      <span class="hljs-attr">"role"</span>: <span class="hljs-string">"system"</span>,
      <span class="hljs-attr">"content"</span>: [
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"input_text"</span>,
          <span class="hljs-attr">"text"</span>: <span class="hljs-string">"Answer the users question"</span>
        }
      ]
    },
    {
      <span class="hljs-attr">"role"</span>: <span class="hljs-string">"user"</span>,
      <span class="hljs-attr">"content"</span>: [
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"input_text"</span>,
          <span class="hljs-attr">"text"</span>: <span class="hljs-string">"what is the status of order 1"</span>
        }
      ]
    }
  ],
  <span class="hljs-attr">"tools"</span>: [
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"function"</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"get_order_info"</span>,
      <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Retrieves order information based on the provided order ID and information type."</span>,
      <span class="hljs-attr">"parameters"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
        <span class="hljs-attr">"required"</span>: [
          <span class="hljs-string">"p_order_id"</span>,
          <span class="hljs-string">"p_info_type"</span>
        ],
        <span class="hljs-attr">"properties"</span>: {
          <span class="hljs-attr">"p_order_id"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"number"</span>,
            <span class="hljs-attr">"description"</span>: <span class="hljs-string">"The unique identifier for the order"</span>
          },
          <span class="hljs-attr">"p_info_type"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>,
            <span class="hljs-attr">"description"</span>: <span class="hljs-string">"The type of information to retrieve (e.g., order_id, order_date, customer_number, total_amount, status)"</span>
          }
        },
        <span class="hljs-attr">"additionalProperties"</span>: <span class="hljs-literal">false</span>
      },
      <span class="hljs-attr">"strict"</span>: <span class="hljs-literal">true</span>
    }
  ]
}
</code></pre>
<ul>
<li><p>The <code>input</code> array contains the system prompt and the user’s question.</p>
</li>
<li><p>The <code>tools</code> array contains a list of the functions I want the LLM to consider when answering the user’s question. The tool type in this case is <code>function</code>.</p>
</li>
<li><p>Function definitions require a specific JSON format, which can be viewed <a target="_blank" href="https://platform.openai.com/docs/guides/function-calling#defining-functions">here</a>. The documentation also specifies best practices for defining a function, which I advise you to read.</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The more detail you include in the function and parameter descriptions, the easier it makes it for the LLM to understand the function, determine when to use it, and determine what parameters to pass.</div>
</div>

<h3 id="heading-response-from-call-1">Response from Call 1</h3>
<pre><code class="lang-json">{
  <span class="hljs-attr">"output"</span>: [
    {
      <span class="hljs-attr">"id"</span>: <span class="hljs-string">"fc_6858c0ab059c819b89594dc4d64dd4df03a1897b9ddea35b"</span>,
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"function_call"</span>,
      <span class="hljs-attr">"status"</span>: <span class="hljs-string">"completed"</span>,
      <span class="hljs-attr">"arguments"</span>: <span class="hljs-string">"{\"p_order_id\":1,\"p_info_type\":\"status\"}"</span>,
      <span class="hljs-attr">"call_id"</span>: <span class="hljs-string">"call_fyixqaqJGHwVsOada86TkvTs"</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"get_order_info"</span>
    }
  ],
  <span class="hljs-attr">"parallel_tool_calls"</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">"tool_choice"</span>: <span class="hljs-string">"auto"</span>,
  <span class="hljs-attr">"tools"</span>: [
    {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"function"</span>,
      <span class="hljs-attr">"description"</span>: <span class="hljs-string">"Retrieves order information based on the provided order ID and information type."</span>,
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"get_order_info"</span>,
      <span class="hljs-attr">"parameters"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
        <span class="hljs-attr">"required"</span>: [
          <span class="hljs-string">"p_order_id"</span>,
          <span class="hljs-string">"p_info_type"</span>
        ],
        <span class="hljs-attr">"properties"</span>: {
          <span class="hljs-attr">"p_order_id"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"number"</span>,
            <span class="hljs-attr">"description"</span>: <span class="hljs-string">"The unique identifier for the order"</span>
          },
          <span class="hljs-attr">"p_info_type"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>,
            <span class="hljs-attr">"description"</span>: <span class="hljs-string">"The type of information to retrieve (e.g., order_id, order_date, customer_number, total_amount, status)"</span>
          }
        },
        <span class="hljs-attr">"additionalProperties"</span>: <span class="hljs-literal">false</span>
      },
      <span class="hljs-attr">"strict"</span>: <span class="hljs-literal">true</span>
    }
  ]
}
</code></pre>
<ul>
<li><p>If the LLM finds a function that it wants you to run, it will be listed in the <code>output</code> array. This is signified by the output <code>type</code> of <code>function_call</code>.</p>
</li>
<li><p>In the above example, it is asking us to run a function with the name <code>get_order_info</code> and pass parameters as follows:</p>
<ul>
<li><p>p_order_id = 1</p>
</li>
<li><p>p_info_type = status</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-call-our-function">Call our Function</h3>
<p>All we need to do now is run the function:</p>
<pre><code class="lang-sql">
  <span class="hljs-comment">-- Parse the Response from call 1</span>
  l_response_obj := JSON_OBJECT_T.parse(l_response);
  l_output_arr := l_response_obj.get_Array('output');
  l_output_obj := JSON_OBJECT_T(l_output_arr.get(0));
  l_name      := l_output_obj.get_String('name');  
  l_arguments := l_output_obj.get_String('arguments');

  <span class="hljs-comment">-- Call the function.</span>
  IF l_name = 'get_order_info' THEN
    l_order_info := get_order_info
     (p_order_id  =&gt; JSON_OBJECT_T.parse(l_arguments).get_Number('p_order_id'),
      p_info_type =&gt; JSON_OBJECT_T.parse(l_arguments).get_String('p_info_type'));
  <span class="hljs-keyword">END</span> <span class="hljs-keyword">IF</span>;
</code></pre>
<h3 id="heading-json-payload-for-the-second-llm-call-with-function-results-call-2">JSON Payload for the Second LLM Call with Function Results (Call 2)</h3>
<pre><code class="lang-json">{
  <span class="hljs-attr">"input"</span>: [
    {
      <span class="hljs-attr">"role"</span>: <span class="hljs-string">"system"</span>,
      <span class="hljs-attr">"content"</span>: [
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"input_text"</span>,
          <span class="hljs-attr">"text"</span>: <span class="hljs-string">"Answer the users question"</span>
        }
      ]
    },
    {
      <span class="hljs-attr">"role"</span>: <span class="hljs-string">"user"</span>,
      <span class="hljs-attr">"content"</span>: [
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"input_text"</span>,
          <span class="hljs-attr">"text"</span>: <span class="hljs-string">"what is the status of order 1"</span>
        }
      ]
    },
    {
      <span class="hljs-attr">"role"</span>: <span class="hljs-string">"user"</span>,
      <span class="hljs-attr">"content"</span>: [
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"input_text"</span>,
          <span class="hljs-attr">"text"</span>: <span class="hljs-string">"The result from tool get_order_info is: Pending"</span>
        }
      ]
    }
  ]
}
</code></pre>
<ul>
<li><p>I included the result from the function call as a second <code>user</code> message in the <code>input</code> array.</p>
</li>
<li><p>When we call the LLM with this JSON, it now has everything it needs to answer the user’s question.</p>
</li>
</ul>
<h3 id="heading-response-from-call-2">Response from Call 2</h3>
<pre><code class="lang-json">{
  <span class="hljs-attr">"output"</span>: [
    {
      <span class="hljs-attr">"id"</span>: <span class="hljs-string">"msg_6858c2e3439c8199a71400ac3c3b38ba020aa24e30c9a174"</span>,
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"message"</span>,
      <span class="hljs-attr">"status"</span>: <span class="hljs-string">"completed"</span>,
      <span class="hljs-attr">"content"</span>: [
        {
          <span class="hljs-attr">"type"</span>: <span class="hljs-string">"output_text"</span>,
          <span class="hljs-attr">"annotations"</span>: [],
          <span class="hljs-attr">"text"</span>: <span class="hljs-string">"The status of order 1 is currently pending."</span>
        }
      ],
      <span class="hljs-attr">"role"</span>: <span class="hljs-string">"assistant"</span>
    }
  ]
}
</code></pre>
<ul>
<li>You can see the answer to the question in the <code>output.content.text</code> field.</li>
</ul>
<h1 id="heading-additional-thoughts">Additional Thoughts</h1>
<h2 id="heading-multiple-tool-calls">Multiple Tool Calls</h2>
<p>As I alluded to earlier, the LLM will request multiple function calls if it needs them. For example, if the user asks ‘what is the status of order 1 and what is its value?’, we can expect the LLM to ask us to call the function twice with different parameters. The following JSON is an excerpt from the response, where the LLM instructs us to call the function once to retrieve the status and again to get the amount.</p>
<pre><code class="lang-json"><span class="hljs-string">"output"</span>: [
  {
    <span class="hljs-attr">"id"</span>: <span class="hljs-string">"fc_685997def9b88199977de8d1d817431603004e30121b785e"</span>,
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"function_call"</span>,
    <span class="hljs-attr">"status"</span>: <span class="hljs-string">"completed"</span>,
    <span class="hljs-attr">"arguments"</span>: <span class="hljs-string">"{\"p_order_id\":1,\"p_info_type\":\"status\"}"</span>,
    <span class="hljs-attr">"call_id"</span>: <span class="hljs-string">"call_X38rKT3SQhefM5561hvxKlTk"</span>,
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"get_order_info"</span>
  },
  {
    <span class="hljs-attr">"id"</span>: <span class="hljs-string">"fc_685997df3c7c819993541e73df119cf503004e30121b785e"</span>,
    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"function_call"</span>,
    <span class="hljs-attr">"status"</span>: <span class="hljs-string">"completed"</span>,
    <span class="hljs-attr">"arguments"</span>: <span class="hljs-string">"{\"p_order_id\":1,\"p_info_type\":\"total_amount\"}"</span>,
    <span class="hljs-attr">"call_id"</span>: <span class="hljs-string">"call_lwcVs312CMVjz1TRex9To09i"</span>,
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"get_order_info"</span>
  }
</code></pre>
<p>This means your code must:</p>
<ul>
<li><p>Loop through and call all of the requested functions.</p>
</li>
<li><p>Concatenate the responses from the tool calls and send them back as the additional user message in the second call to the LLM, e.g., ‘Function Name: get_order_info - status = Pending \n Function Name: get_order_info - total_amount = 250 \n’ (see the sketch after this list).</p>
</li>
</ul>
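<p>A hedged sketch of that loop, reusing the parsing locals from the earlier snippet plus a new <code>l_results</code> variable to accumulate the messages (the message format is just one workable convention, not a requirement of the API):</p>
<pre><code class="lang-sql">  l_results := NULL;
  FOR i IN 0 .. l_output_arr.get_size - 1 LOOP
    l_output_obj := JSON_OBJECT_T(l_output_arr.get(i));

    -- Only act on entries the LLM marked as function calls.
    IF l_output_obj.get_String('type') = 'function_call' THEN
      l_name      := l_output_obj.get_String('name');
      l_arguments := l_output_obj.get_String('arguments');

      IF l_name = 'get_order_info' THEN
        l_results := l_results ||
          'Function Name: get_order_info - ' ||
          JSON_OBJECT_T.parse(l_arguments).get_String('p_info_type') || ' = ' ||
          get_order_info(
            p_order_id  =&gt; JSON_OBJECT_T.parse(l_arguments).get_Number('p_order_id'),
            p_info_type =&gt; JSON_OBJECT_T.parse(l_arguments).get_String('p_info_type')) ||
          chr(10);
      END IF;
    END IF;
  END LOOP;
  -- l_results becomes the additional user message in the second call to the LLM.
</code></pre>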
<h2 id="heading-security">Security</h2>
<p>Of course, security is a primary concern here. Assuming you are calling the LLM from an APEX application, you should pass that user context to your function calls to make sure the user has access to the data in question.</p>
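<p>For example, one way to enforce that context is to pass the signed-in APEX user (e.g., <code>:APP_USER</code>) into the function and filter on it. A hedged sketch, assuming a hypothetical <code>user_customers</code> mapping table and an extra <code>p_app_user</code> parameter on <code>get_order_info</code>:</p>
<pre><code class="lang-sql">    -- Variant of the query inside get_order_info that only returns orders
    -- the signed-in user is allowed to see.
    SELECT order_id, order_date, customer_number, total_amount, status
      INTO l_order_id, l_order_date, l_customer_number, l_total_amount, l_status
      FROM sales_orders so
     WHERE so.order_id = p_order_id
       AND so.customer_number IN
           (SELECT uc.customer_number
              FROM user_customers uc   -- hypothetical user/customer mapping
             WHERE uc.username = p_app_user);
</code></pre>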
<p>Other security considerations:</p>
<ul>
<li><p>Make sure you are OK with the results of your function calls being consumed by the LLM and potentially ending up in remote log files, etc.</p>
</li>
<li><p>Validate parameters received from the LLM against database constraints before calling functions.</p>
</li>
<li><p>Log each function invocation with inputs and user context for auditing.</p>
</li>
<li><p>Avoid exposing high-sensitivity operations (e.g., finance approvals) directly via function calls.</p>
</li>
</ul>
<h2 id="heading-additional-use-cases">Additional Use Cases</h2>
<p>I purposely chose a simplified example for this post, but there are many other potential use cases for functions. Here are three to get you thinking:</p>
<ul>
<li><p><strong>Customer-Facing ERP Chatbot</strong></p>
<ul>
<li>Provide users of your customer portal with a way to get their order status using natural language. If the customer wants to change or cancel an order, initiate an APEX workflow to obtain approval for and take action on the change.</li>
</ul>
</li>
<li><p><strong>Project Status Assistant</strong></p>
<ul>
<li>Let project managers ask “Show me open risks for Project Delta” or “What tasks are overdue for Milestone 3?” with dynamic responses from your project tables.</li>
</ul>
</li>
<li><p><strong>AI-Assisted Form Auto-Fill</strong></p>
<ul>
<li>On a data-entry screen, users can say, “Pre-fill this form using the last order I created,” and a function retrieves and injects historical values into the current APEX session state.</li>
</ul>
</li>
</ul>
<h1 id="heading-conclusion">Conclusion</h1>
<p>As you can see, incorporating functions into your apps is not straightforward. You must provide a robust wrapper to handle zero or multiple function call requests, validate parameters, route to the correct PL/SQL code, and so on. That being said, function calling provides a structured approach to integrating business logic with LLMs, allowing for the safe handling of sensitive data. By handling function execution server-side and passing only the necessary results to the LLM, this approach strikes a balance between capability and control.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">While still emerging, this method enables APEX developers to build secure, intelligent assistants that can efficiently interact with enterprise data and processes.</div>
</div>]]></content:encoded></item><item><title><![CDATA[Why Evals are Important in AI Development]]></title><description><![CDATA[Introduction
In AI development, evaluating an LLM’s performance using test cases based on your understanding of what the LLM is supposed to do is critical. These evaluations, commonly called “Evals”, serve as test cases to help you assess whether you...]]></description><link>https://blog.cloudnueva.com/why-evals-are-important-in-ai-development</link><guid isPermaLink="true">https://blog.cloudnueva.com/why-evals-are-important-in-ai-development</guid><category><![CDATA[orclapex]]></category><category><![CDATA[evals]]></category><category><![CDATA[genai]]></category><dc:creator><![CDATA[Jon Dixon]]></dc:creator><pubDate>Thu, 10 Jul 2025 11:47:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749522341437/cde55bbc-bd10-4956-b645-35671aa777a3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>In AI development, evaluating an LLM’s performance using test cases based on your understanding of what the LLM is supposed to do is critical. These evaluations, commonly called “Evals”, serve as test cases to help you assess whether your model behaves the way it should. In this post, I’ll explain why Evals are essential, show how they apply in Oracle APEX environments, and provide examples you can adapt to your projects.</p>
<h1 id="heading-my-aha-moment">My Aha Moment</h1>
<p>I recently came across a <a target="_blank" href="https://www.youtube.com/watch?v=DL82mGde6wo">Y Combinator video on YouTube</a> that discussed Meta-Prompting. While <a target="_blank" href="https://cookbook.openai.com/examples/enhance_your_prompts_with_meta_prompting">Meta-Prompting</a> is an interesting topic in its own right, it was something else they said that made me stop and think. They were showcasing a Prompt from an AI Startup that focuses on Customer Service and highlighted the fact that the company does not consider their AI Prompts to be their primary Intellectual Property (IP). This surprised me because you would have thought that prompts would be the most important asset for a business that has essentially built a wrapper on ChatGPT.</p>
<blockquote>
<p>This startup considers the thousands of Evals (Test Cases) they have developed based on their deep domain knowledge of Customer Service as their primary IP.</p>
</blockquote>
<h1 id="heading-why-create-evals">Why Create Evals?</h1>
<h2 id="heading-measure-model-quality-and-progress"><strong>Measure Model Quality and Progress</strong></h2>
<p>Evals give objective metrics to track:</p>
<ul>
<li><p>Accuracy, fluency, coherence, or truthfulness</p>
</li>
<li><p>Improvements across model iterations</p>
</li>
<li><p>New feature emergence (e.g., tool use, reasoning)</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Without evals, you can’t tell if the latest model is better than the last one.</div>
</div>

<h2 id="heading-align-model-behavior-with-product-or-user-goals"><strong>Align Model Behavior with Product or User Goals</strong></h2>
<p>Well-designed evals ensure the model performs <em>as expected</em> for:</p>
<ul>
<li><p>Specific tasks (e.g., summarization, classification, document recognition)</p>
</li>
<li><p>Business KPIs (e.g., ticket deflection, content moderation accuracy)</p>
</li>
<li><p>User satisfaction and trust</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Evals bridge the gap between AI’s potential and actual product value.</div>
</div>

<h2 id="heading-identify-weaknesses-gaps-and-safety-issues"><strong>Identify Weaknesses, Gaps, and Safety Issues</strong></h2>
<p>Evals uncover:</p>
<ul>
<li><p>Hallucinations, bias, toxicity, and overconfidence</p>
</li>
<li><p>Weak performance on edge cases or minority groups</p>
</li>
<li><p>Failure modes in real-world or adversarial conditions</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you don’t measure, you don’t know if there's anything to fix.</div>
</div>

<h2 id="heading-compare-across-models-versions-and-configurations"><strong>Compare Across Models, Versions, and Configurations</strong></h2>
<p>Evals allow rigorous <strong>A/B testing</strong> of:</p>
<ul>
<li><p>Different AI platforms and the different models they offer</p>
</li>
<li><p>Prompt templates, temperature settings, or tool-using agents</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you can prove that you get just as good an answer using gpt-4.1-nano as I do using gpt-4.1-mini, then you can save a lot of money, and your responses will be faster.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If I tweak my prompt a certain way, can I gain a 2x improvement in accuracy? Without Evals, I have no way to prove that.</div>
</div>

<h2 id="heading-trustworthy-responsible-deployment"><strong>Trustworthy, Responsible Deployment</strong></h2>
<p>For enterprise use cases, evals provide:</p>
<ul>
<li><p>Documentation of model performance and limitations</p>
</li>
<li><p>Assurance of compliance (e.g., fairness, explainability)</p>
</li>
<li><p>Evidence for audits or governance boards</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Evaluation is Responsible AI.</div>
</div>

<h1 id="heading-types-of-evals">Types of Evals</h1>
<p>There are two primary types of Evals. I will use examples from my <a target="_blank" href="https://apps.cloudnueva.com/apexblogs">APEX Developer Blogs</a> Website to illustrate them.</p>
<h2 id="heading-code-driven">Code Driven</h2>
<p>The best way to evaluate the result of an interaction with an LLM is to use code to check its work. To do this, however, the result of the AI interaction must be something you can check objectively. It’s probably easiest to describe this using an example.</p>
<h3 id="heading-example-blog-classifier">Example - Blog Classifier</h3>
<p>Before allowing blogs onto APEX Developer Blogs, I first check them for relevance to five areas of interest to APEX Developers (APEX, ORDS, OCI, SQL, and PL/SQL). I pass a prompt and the content of the post to OpenAI and ask it to check the post for relevance, specifying that I want a JSON object in response that looks like this:</p>
<pre><code class="lang-json">{<span class="hljs-attr">"APEX"</span>:<span class="hljs-number">2</span>,<span class="hljs-attr">"ORDS"</span>:<span class="hljs-number">0</span>,<span class="hljs-attr">"OCI"</span>:<span class="hljs-number">1</span>,<span class="hljs-attr">"SQL"</span>:<span class="hljs-number">0</span>,<span class="hljs-attr">"PLSQL"</span>:<span class="hljs-number">2</span>}
</code></pre>
<p>For each category, I am looking for a score from zero to five. If a post receives a score of two or more across all categories, I allow it. Otherwise, I add it to a queue of rejected posts, which I review manually every so often.</p>
<p>This response is something I can easily test programmatically (a minimal sketch follows the list):</p>
<ul>
<li><p>Verify it is valid JSON using <code>json_object_t.parse</code>. If an <code>ORA-40441</code> error is raised, the JSON is not valid.</p>
</li>
<li><p>Verify that all five categories are represented. Use <code>json_obj.has</code> to check for each field.</p>
</li>
<li><p>Verify that the scores are between zero and five. Use <code>json_obj.get_Number</code> to verify the scores are within range.</p>
</li>
<li><p>Store the scores so that I can test the same blog posts with different parameters (new models, changes to the temperature, etc) and see how the scores change. Log all calls to AI and store the request and response (more on this later).</p>
</li>
</ul>
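<p>A minimal sketch of those checks in PL/SQL, assuming the response has already been read into a local variable (in practice it would come from the log table described later in this post):</p>
<pre><code class="lang-sql">DECLARE
  l_response VARCHAR2(4000) := '{"APEX":2,"ORDS":0,"OCI":1,"SQL":0,"PLSQL":2}';
  l_obj      JSON_OBJECT_T;
  l_score    NUMBER;
  TYPE t_cats IS TABLE OF VARCHAR2(10);
  l_cats     t_cats := t_cats('APEX','ORDS','OCI','SQL','PLSQL');
BEGIN
  -- 1) Valid JSON? parse raises ORA-40441 if it is not.
  l_obj := JSON_OBJECT_T.parse(l_response);

  FOR i IN 1 .. l_cats.COUNT LOOP
    -- 2) All five categories must be present.
    IF NOT l_obj.has(l_cats(i)) THEN
      raise_application_error(-20001, 'Missing category: ' || l_cats(i));
    END IF;
    -- 3) Scores must be within the 0-5 range.
    l_score := l_obj.get_Number(l_cats(i));
    IF l_score NOT BETWEEN 0 AND 5 THEN
      raise_application_error(-20002, l_cats(i) || ' score out of range: ' || l_score);
    END IF;
  END LOOP;
END;
/
</code></pre>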
<h2 id="heading-llm-driven">LLM Driven</h2>
<p>Of course, if we could use code to test every response from an LLM, we would not need the LLM to begin with. The LLM-driven approach involves making a second call to the LLM to check that the output is correct, or at least meets the standards you are aiming for.</p>
<h3 id="heading-example-blog-summarizer">Example - Blog Summarizer</h3>
<p>The second step when ingesting blogs in APEX Developer Blogs is to create a simple Gist of the post. This allows people to get the gist of the post before reading it in full. Much as I would like to, there is no way I could read every post and write a summary myself.</p>
<p>So, how can we test if the summary that OpenAI generates is any good? The answer is to pass the post and the summary back to the LLM, asking it to rate the summary. We could use a prompt like the one below to do this:</p>
<pre><code class="lang-json"># BACKGROUND
- I write summaries/gists of blog posts so that users can see a preview of the post before reading it.
- I want to make sure my summary is high quality. 
- What follows is the summary ##SUMMARY## followed by the blog post ##BLOG POST##. 
# TASK
- Carefully compare the summary to the blog post and assess the quality of the summary based on conciseness and readability. 
- The scores should be between <span class="hljs-number">0</span><span class="hljs-number">-5</span> with <span class="hljs-number">5</span> being excellent and <span class="hljs-number">0</span> being extremely poor. 
# OUTPUT
- Return the scores in a JSON object that looks like this: {<span class="hljs-attr">"READABILITY"</span>:<span class="hljs-number">4</span>,<span class="hljs-attr">"CONCISENESS"</span>:<span class="hljs-number">2</span>}
##SUMMARY##
&lt;&lt;Summary Goes Here&gt;&gt;
##BLOG POST##
&lt;&lt;Blog Post Goes Here&gt;&gt;
</code></pre>
<p>This returns a JSON score, which we can evaluate using PL/SQL.</p>
<h1 id="heading-how-can-we-do-evals-in-apex">How can we do Evals in APEX?</h1>
<h2 id="heading-using-historical-data">Using Historical Data</h2>
<p>The easiest way to run evals is on LLM calls that have already been run. To do this reliably, you need to be able to determine the parameters and payload that went into the LLM call, as well as the response. I suggest creating two tables: one to store AI Configurations and another to log all your LLM calls.</p>
<h3 id="heading-configs">⚙️ Configs</h3>
<p>Create a table to store your AI Configs, which includes:</p>
<ul>
<li><p>Config Name</p>
</li>
<li><p>AI Provider, Model, &amp; REST EndPoint</p>
</li>
<li><p>Max Input &amp; Output Tokens</p>
</li>
<li><p>Temperature</p>
</li>
<li><p>APEX Web Credential</p>
</li>
<li><p>Instructions / System Prompt</p>
</li>
</ul>
<p>Storing this information in a table allows you to easily make changes to the parameters and keep track of which configuration generated which output.</p>
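<p>A minimal sketch of what such a config table could look like (the table name, column names, and data types are assumptions, not a prescribed design):</p>
<pre><code class="lang-sql">CREATE TABLE ai_configs (
  config_id          NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  config_name        VARCHAR2(100) NOT NULL,
  ai_provider        VARCHAR2(50),
  model              VARCHAR2(100),
  rest_endpoint      VARCHAR2(500),
  max_input_tokens   NUMBER,
  max_output_tokens  NUMBER,
  temperature        NUMBER(3,2),
  web_credential     VARCHAR2(100),  -- static ID of the APEX Web Credential
  system_prompt      CLOB
);
</code></pre>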
<h3 id="heading-log-table">🪵 Log Table</h3>
<p>You should be logging every call you make to the LLM. This provides the foundational data for running your Evaluations. Your log table should include the following (a sketch follows the list):</p>
<ul>
<li><p>Foreign Key to your Config table, so you know which config generated which log</p>
</li>
<li><p>Response Time (to measure the performance of LLM API calls using different models)</p>
</li>
<li><p>Outbound JSON Payload to the LLM REST API</p>
</li>
<li><p>Response JSON returned from the LLM REST API</p>
</li>
</ul>
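<p>A matching sketch of the log table (again, the names are assumptions; the <code>IS JSON</code> checks and the timing column are optional extras you can adapt):</p>
<pre><code class="lang-sql">CREATE TABLE ai_call_logs (
  log_id            NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  config_id         NUMBER REFERENCES ai_configs (config_id),
  called_on         TIMESTAMP DEFAULT SYSTIMESTAMP,
  response_time_ms  NUMBER,
  request_payload   CLOB CHECK (request_payload IS JSON),
  response_payload  CLOB CHECK (response_payload IS JSON)
);
</code></pre>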
<p>With a combination of the Config and the Log Tables, you have everything you need to run evals against historical LLM calls.</p>
<h2 id="heading-eval-library">📚 Eval Library</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749437976377/208c0ade-88c1-4054-a99b-4283aac2c7b5.png" alt="Diagram Showing the Eval Library Approach" class="image--center mx-auto" /></p>
<p>Being proactive about Evals takes more effort. You need to build a library of tests (and expected results), as well as code that can run those tests against your prompts and data. This is where your domain knowledge comes into play. You need to think of all the scenarios that could arise and design tests that cover them (not forgetting the edge cases, of course). These evals (or tests) represent knowledge you have of the subject that perhaps no one else does.</p>
<p>You should continually add to and adjust the library over time as new use cases and user behaviors emerge.</p>
<p>Given that we run APEX on a database, APEX is the obvious tool for maintaining such a library. You can also use APEX to run your evaluations, track results, and report on them.</p>
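<p>If it helps to picture it, here is a hedged sketch of the kind of tables such a library could start from (all names and columns are assumptions to adapt, not a prescribed design):</p>
<pre><code class="lang-sql">-- One row per test case in the Eval Library.
CREATE TABLE ai_evals (
  eval_id        NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  eval_name      VARCHAR2(200) NOT NULL,
  input_payload  CLOB,            -- the prompt/content to send to the LLM
  expected_check VARCHAR2(4000)   -- rule or description of the expected outcome
);

-- One row per eval, per run, per configuration.
CREATE TABLE ai_eval_results (
  result_id    NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  eval_id      NUMBER REFERENCES ai_evals (eval_id),
  config_id    NUMBER REFERENCES ai_configs (config_id),
  run_on       TIMESTAMP DEFAULT SYSTIMESTAMP,
  passed       VARCHAR2(1) CHECK (passed IN ('Y','N')),
  actual_resp  CLOB
);
</code></pre>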
<h1 id="heading-red-teaming">Red Teaming</h1>
<p>While traditional Evals measure whether an AI model meets your quality and performance standards, <strong>red teaming</strong> focuses on discovering its <em>failures</em>. This involves intentionally crafting adversarial inputs to elicit undesired behavior, such as biased, toxic, or misleading outputs.</p>
<p>Red teaming helps answer questions like:</p>
<ul>
<li><p>Can the model be jailbroken to ignore safety instructions?</p>
</li>
<li><p>Does it respond differently to subtly biased prompts?</p>
</li>
<li><p>Will it hallucinate plausible-sounding but false information?</p>
</li>
<li><p>Does it degrade disproportionately on edge cases (e.g., ambiguous phrasing)?</p>
</li>
</ul>
<p>In enterprise use, this kind of testing is critical for:</p>
<ul>
<li><p><strong>Hardening your application</strong> against misuse</p>
</li>
<li><p><strong>Meeting compliance or governance obligations</strong></p>
</li>
<li><p><strong>Understanding risk</strong> before deployment at scale</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Even if your day-to-day use cases seem benign (like summarizing or classifying), red teaming ensures you’re not blindly trusting the LLM’s output. It complements evals by simulating how your model could go wrong, intentionally and unpredictably.</div>
</div>

<p>🔗 This <a target="_blank" href="https://www.anthropic.com/news/challenges-in-red-teaming-ai-systems">post from Anthropic</a> provides an overview of Red Teaming and how they do it.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>As APEX developers, test-driven development is second nature, but when it comes to AI, it’s easy to overlook evaluation in the excitement of prompt engineering. Evals give you a structured, repeatable, and objective way to track progress, detect regressions, and justify AI decisions in enterprise environments. Whether you’re classifying content or summarizing documents, start small by logging your LLM calls. Then build your Eval Library over time.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">🔈</div>
<div data-node-type="callout-text">It’s not just good practice, it’s responsible AI.</div>
</div>]]></content:encoded></item></channel></rss>