Embedding LLMs in a WordPress Plugin: EasyCommerce's Async Architecture
WordPress plugins execute on the main thread, synchronously, inside a PHP process that typically has a 30-second timeout. LLM APIs take between 500ms and 8 seconds depending on model and prompt length. Put those two facts together and the core architectural problem becomes obvious: you cannot call an LLM inline with a WordPress hook and ship something usable.
This is the problem I have been solving at Codexpert while building EasyCommerce's AI layer — automated product description generation, image analysis, fraud detection, and inventory forecasting. Here is the architecture we landed on, the alternatives we ruled out, and the tradeoffs we accepted.
The dispatch boundary
The rule that governs everything else: no LLM call runs inside the HTTP request lifecycle. Every AI feature is triggered synchronously but executed asynchronously.
When a merchant saves a product without a description, save_post fires. We do not call the API there. We schedule an Action Scheduler job, then return.
/**
 * Queue AI description generation when a product is saved without one.
 *
 * @param int     $post_id Post ID.
 * @param WP_Post $post    Post object.
 */
function ec_maybe_queue_description_generation( int $post_id, WP_Post $post ): void {
	if ( 'ec_product' !== $post->post_type ) {
		return;
	}
	if ( wp_is_post_autosave( $post_id ) || wp_is_post_revision( $post_id ) ) {
		return;
	}
	if ( ! current_user_can( 'edit_post', $post_id ) ) {
		return;
	}

	$has_description = ! empty( $post->post_content );
	$already_queued  = (bool) get_post_meta( $post_id, '_ec_ai_description_queued', true );
	if ( $has_description || $already_queued ) {
		return;
	}

	update_post_meta( $post_id, '_ec_ai_description_queued', true );
	as_schedule_single_action(
		time() + 3,
		'ec_generate_product_description',
		[ 'product_id' => $post_id ],
		'easycommerce-ai'
	);
}
add_action( 'save_post', 'ec_maybe_queue_description_generation', 10, 2 );

The _ec_ai_description_queued flag prevents duplicate jobs on rapid saves. The 3-second delay lets the product record fully commit before the background job reads it.
We considered wp_cron as the async mechanism and rejected it immediately. WP-Cron fires on the next page load with no concurrency controls — a merchant importing 50 products would pile all 50 jobs onto the next available request. Action Scheduler gives us a proper queue with retry logic, failure logging, and a UI for operations teams to inspect stalled jobs.
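The consuming side follows from the dispatch code above. Here is a sketch of what a handler registered on the ec_generate_product_description hook could look like; the body is illustrative rather than EasyCommerce's actual implementation, and it borrows the ec_ai_provider() and ec_log_ai_error() helpers that appear elsewhere in this post.

```php
/**
 * Action Scheduler handler: generate and persist a product description.
 *
 * Illustrative sketch — prompt assembly and sanitisation simplified.
 *
 * @param int $product_id Product ID queued by ec_maybe_queue_description_generation().
 */
function ec_handle_description_generation( int $product_id ): void {
	$post = get_post( $product_id );
	if ( ! $post || '' !== trim( $post->post_content ) ) {
		// Product was deleted, or a description was written while the job waited.
		delete_post_meta( $product_id, '_ec_ai_description_queued' );
		return;
	}

	try {
		$description = ec_ai_provider()->generate_description(
			[
				'title'      => $post->post_title,
				'attributes' => get_post_meta( $product_id ),
			]
		);
	} catch ( EC_AI_Exception $e ) {
		// Log, then re-throw so Action Scheduler marks the run failed
		// and its retry logic takes over.
		ec_log_ai_error( 'description', $product_id, $e->getMessage() );
		throw $e;
	}

	wp_update_post(
		[
			'ID'           => $product_id,
			'post_content' => $description,
		]
	);
	delete_post_meta( $product_id, '_ec_ai_description_queued' );
}
add_action( 'ec_generate_product_description', 'ec_handle_description_generation' );
```

Action Scheduler passes the values of the args array as positional arguments to the hook, so the handler receives the product ID directly. Clearing the queued flag in both exit paths keeps the dispatch-side duplicate check honest.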
The provider abstraction
We support three LLM providers in production: Claude (primary), OpenAI (fallback), and an on-premise Ollama instance for clients with data-residency requirements. Swapping providers cannot require touching feature code.
interface EC_AI_Provider {
	/**
	 * Generate a product description from structured product data.
	 *
	 * @param array<string, mixed> $product_data Sanitised product attributes.
	 * @return string Generated description.
	 * @throws EC_AI_Exception On API failure or timeout.
	 */
	public function generate_description( array $product_data ): string;

	/**
	 * Score an order for fraud risk.
	 *
	 * @param array<string, mixed> $order_data Sanitised order attributes.
	 * @return float Risk score, 0.0 (clean) to 1.0 (high risk).
	 * @throws EC_AI_Exception On API failure or timeout.
	 */
	public function score_order_fraud( array $order_data ): float;
}

Each provider implements this interface. The active provider resolves via a filter, which lets hosting environments or enterprise clients override the default without a plugin fork.
/**
 * Resolve the active AI provider instance.
 *
 * @return EC_AI_Provider
 */
function ec_ai_provider(): EC_AI_Provider {
	$provider_class = apply_filters( 'ec_ai_provider_class', EC_Claude_Provider::class );

	static $instances = [];
	if ( ! isset( $instances[ $provider_class ] ) ) {
		$instances[ $provider_class ] = new $provider_class();
	}
	return $instances[ $provider_class ];
}

The static cache prevents re-instantiation across multiple calls within the same Action Scheduler job run.
Fraud detection at order placement
Fraud scoring runs at woocommerce_checkout_order_processed — after payment authorisation, before fulfilment. The result determines whether the order moves to processing or enters an on-hold state for manual review.
/**
 * Score a newly placed order and hold it if fraud risk is elevated.
 *
 * @param int $order_id Order ID.
 */
function ec_score_order_on_placement( int $order_id ): void {
	$order = wc_get_order( $order_id );
	if ( ! $order instanceof WC_Order ) {
		return;
	}

	$order_data = [
		'total'            => $order->get_total(),
		'email'            => sanitize_email( $order->get_billing_email() ),
		'ip'               => $order->get_customer_ip_address(),
		'item_count'       => $order->get_item_count(),
		'shipping_country' => $order->get_shipping_country(),
		'billing_country'  => $order->get_billing_country(),
	];

	try {
		$score = ec_ai_provider()->score_order_fraud( $order_data );
	} catch ( EC_AI_Exception $e ) {
		// Fail open — an AI outage must never block a legitimate order.
		ec_log_ai_error( 'fraud_score', $order_id, $e->getMessage() );
		return;
	}

	$order->update_meta_data( '_ec_fraud_score', (float) $score );
	if ( $score >= 0.75 ) {
		$order->update_status(
			'on-hold',
			esc_html__( 'Flagged for fraud review by EasyCommerce AI.', 'easycommerce' )
		);
	}
	$order->save();
}
add_action( 'woocommerce_checkout_order_processed', 'ec_score_order_on_placement', 20 );

The try/catch with fail-open behaviour is non-negotiable. A provider outage at 2am cannot block a store's checkout. The AI layer is additive intelligence, not a critical dependency. When an exception is caught, a retroactive scoring job is queued via Action Scheduler so the order still gets reviewed once the provider recovers.
This is one of only two places in EasyCommerce's AI layer where we call the provider synchronously. The HTTP client is configured with a 4-second timeout. If the call exceeds that, the exception propagates to the catch block. We accept that 4 seconds is visible to the customer — the alternative, missing fraud entirely on slow connections, is worse.
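The retroactive path described above can be sketched as a small helper called from the catch block. The ec_rescore_order_fraud hook name and the 15-minute delay are assumptions for illustration; MINUTE_IN_SECONDS is a standard WordPress constant.

```php
/**
 * Queue a retroactive fraud score when the synchronous call fails.
 *
 * Sketch only — the ec_rescore_order_fraud hook name is hypothetical; a
 * handler registered on it would call score_order_fraud() again in the
 * background and apply the same >= 0.75 hold threshold.
 *
 * @param int $order_id Order that could not be scored inline.
 */
function ec_queue_fraud_rescore( int $order_id ): void {
	as_schedule_single_action(
		time() + 15 * MINUTE_IN_SECONDS,
		'ec_rescore_order_fraud',
		[ 'order_id' => $order_id ],
		'easycommerce-ai'
	);
}
```

The background rescore trades immediacy for coverage: the order may already be in fulfilment by the time the score lands, but a flagged order can still be held before shipping.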
Inventory forecasting
Forecasting is different in character from the other features: it is not event-driven, it is scheduled. A daily Action Scheduler job pulls 90 days of sales data per product, sends it to the provider, and writes the forecast back as post meta. The merchant sees a "predicted stock needed by end of month" figure in the product editor.
The interesting design decision is batch size. One product per API call is accurate but expensive at scale. Sending 500 products in a single prompt is cheaper but loses granularity and blows past context limits. We settled on 25 products per call, grouped by category. Category-level batching means the model sees each product alongside its siblings — seasonal patterns in outerwear do not contaminate electronics forecasts.
The batch job uses Action Scheduler's group feature (easycommerce-forecast) to cap concurrency to two parallel jobs. Without that cap, a large catalogue triggers enough concurrent API calls to hit rate limits and produce cascading retries that delay the entire queue.
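The category-grouped batching can be sketched in a few lines. Everything here is illustrative: ec_get_products_grouped_by_category() is a hypothetical helper returning a map of category slug to product IDs, and ec_forecast_batch is an assumed hook name.

```php
/**
 * Build and schedule forecast batches: group product IDs by category,
 * then chunk each group to 25 so batches never mix categories.
 *
 * Sketch — ec_get_products_grouped_by_category() and the
 * ec_forecast_batch hook are hypothetical names for illustration.
 */
function ec_schedule_forecast_batches(): void {
	$index = 0;
	foreach ( ec_get_products_grouped_by_category() as $category => $product_ids ) {
		foreach ( array_chunk( $product_ids, 25 ) as $chunk ) {
			as_schedule_single_action(
				time() + $index,
				'ec_forecast_batch',
				[ 'product_ids' => $chunk ],
				'easycommerce-forecast'
			);
			$index++;
		}
	}
}
```

array_chunk() inside the per-category loop is what guarantees the "siblings only" property: the last batch of one category is padded short rather than topped up with products from the next.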
Measured outcomes and what we would change
Across 12 beta stores, description generation lifted product listing completion rates by 35% and cut average time-to-publish from 11 minutes to under 2. The fraud hold rate sits at 2.3% of orders, with a false-positive rate we are still tuning down by adjusting the score threshold per store category.
The architecture has held, but one decision I would revisit: using as_schedule_single_action with a fixed 3-second delay rather than a staggered schedule on bulk operations. When a merchant imports 400 products via CSV, all 400 description jobs land in the same 3-second window. Action Scheduler processes them up to the concurrency limit, but the admin UI degrades noticeably. The fix — time() + ( $index * 2 ) on bulk import — ships in the next release.
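The planned fix is a one-line change in the bulk-import loop. A sketch, assuming $imported_product_ids is the array of IDs produced by the CSV importer:

```php
// Bulk import: stagger description jobs two seconds apart instead of
// landing all of them in the same three-second window.
foreach ( $imported_product_ids as $index => $product_id ) {
	as_schedule_single_action(
		time() + ( $index * 2 ),
		'ec_generate_product_description',
		[ 'product_id' => $product_id ],
		'easycommerce-ai'
	);
}
```

A 400-product import then spreads its jobs over roughly 13 minutes, which keeps the queue and the admin UI responsive at the cost of slower completion for the tail of the import.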
The question I have not resolved: at what catalogue size do per-product LLM calls become economically unviable, and what does the architecture look like when the right answer is a fine-tuned local model rather than a hosted API? That is the problem EasyCommerce's AI layer will hit in Q3, and I do not have a clean answer yet.
Al Amin Ahamed
Senior software engineer & AI practitioner. 5+ years shipping Laravel platforms, WordPress plugins, WooCommerce extensions, and AI-driven products.