
The Data Supply Chain Most Australian Marketers Don’t Know They’re In

When a marketing agency purchases a list of business contacts for a client campaign, someone compiled that data. Someone verified it. Someone packaged it and set a price. Most buyers never ask who those “someones” were or how many of them there were between original compilation and final purchase. The data arrived, the campaign ran, and nobody interrogated the supply chain that made it possible. This matters more than it might seem. Every layer between compilation and purchase adds cost. Every handoff introduces delay. And delay, in data terms, means decay.
Industry | By nandor | March 16, 2026 | 6 min read

What the Supply Chain Actually Looks Like

The Australian marketing data market operates through a layered structure that typically includes two to three intermediaries between the original source and the end buyer.

At the base sits a compiler — the entity that actually builds the dataset from primary sources. Above them, a wholesaler licenses data from multiple compilers and aggregates it into larger products. Resellers purchase from wholesalers and repackage data for specific verticals or use cases. Agencies buy from resellers. Clients pay the agency.

Each layer serves a function. Wholesalers provide scale and coverage. Resellers offer market specialisation and sometimes additional services like formatting or integration support. Agencies manage the client relationship and campaign execution.

Each layer also adds margin. A dataset that costs $X at the compiler level might cost $2X or $3X by the time it reaches the agency, depending on how many intermediaries are involved. The buyer rarely sees this markup broken down; it is embedded in the final price with no itemisation of supply chain economics, which itself can lead to unforeseen problems.
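To make the stacking concrete, here is a minimal sketch of how margins compound through the chain. The markup percentages are assumptions chosen for illustration, not observed market rates.

```python
# Illustrative only: shows how intermediary margins stack multiplicatively.
# The markup percentages are assumptions for the example, not market data.

compiler_price = 1.0  # the "$X" baseline at the compiler level

# Layers between the compiler and the agency, with assumed markups
markups = {"wholesaler": 0.55, "reseller": 0.60}

price = compiler_price
for layer, markup in markups.items():
    price *= 1 + markup
    print(f"After {layer}: {price:.2f}x the compiler price")

# With these assumed margins, the dataset lands at the agency at roughly
# 2.5x the source price, within the 2x-3x range described above.
```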

What Each Layer Removes

Cost is visible, even if the breakdown is not. What is harder to see is what gets removed at each handoff: freshness.

B2B contact data decays at a rate of 20–30% annually. People change jobs, companies restructure, phone numbers are reassigned, email addresses are deactivated. A record that was accurate when compiled becomes progressively less reliable over time.

In a single-layer transaction — buyer purchases directly from compiler — the age of the data is knowable. The compiler can tell you when the record was created or last verified.

In a multi-layer supply chain, that visibility disappears. The reseller may not know when the wholesaler last refreshed their dataset. The wholesaler may not know the compilation date for individual records within their aggregated product. By the time data reaches an agency, it may be three to six months old — and nobody in the transaction has visibility into the actual age.

The decay rate compounds across this lag. If data decays at 25% annually and sits in the supply chain for six months before use, roughly 12–15% of records may already be degraded before the campaign starts. The agency buyer has no mechanism to verify this. They trust the supply chain, or they do not buy.
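As a rough check on those numbers, here is a minimal sketch that estimates the degraded share for a given annual decay rate and supply chain lag, assuming decay compounds continuously rather than arriving all at once.

```python
# Rough estimate of how many records degrade while data sits in the chain.
# Assumes decay compounds over the year rather than accruing linearly.

def degraded_share(annual_decay_rate: float, months_in_chain: float) -> float:
    """Fraction of records expected to be degraded after the given lag."""
    years = months_in_chain / 12
    still_accurate = (1 - annual_decay_rate) ** years
    return 1 - still_accurate

print(f"{degraded_share(0.25, 6):.1%}")  # ~13.4%, inside the 12-15% range above
print(f"{degraded_share(0.30, 6):.1%}")  # ~16.3% at the top of the 20-30% decay range
```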

How Australian Data Is Actually Compiled

Original compilation draws from several primary sources. Government registers — business registrations, director filings, professional licence databases — provide verified legal entity information. Financial datasets capture transaction-level signals that indicate business activity, size, and sector. Public records, industry directories, and professional associations fill specific verticals.

The compilation process is ongoing, not static. Source data changes constantly, and compilers who maintain quality run continuous refresh cycles against their primary sources. The competitive advantage of a compiler is not just coverage but recency — how quickly changes in the source data flow through to the compiled dataset.

This is where the supply chain position matters. A compiler working directly against primary sources can detect changes in real time or near-real time. A reseller working from a wholesaler’s quarterly refresh inherits the lag built into their supplier’s update cycle. The further from source, the older the data — regardless of what the sales conversation suggests.

What the Compliance Language Actually Means

Data products often carry compliance labels: “opt-in verified,” “DNCR-screened,” “ACMA compliant.” These terms have specific meanings, but their value depends entirely on when in the supply chain the verification occurred.

The Do Not Call Register requires that any data used for telemarketing be washed against the register no more than 30 days before use. This is a legal requirement, and reputable providers at any level of the supply chain will perform or facilitate this wash.
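As an illustration of that 30-day window, here is a minimal sketch that checks whether a wash is still current for a planned contact date. The function and variable names are mine for illustration, not part of any provider's or regulator's tooling.

```python
from datetime import date

# Minimal sketch of the 30-day wash window described above.
# Names are illustrative; this is not any provider's or regulator's API.

WASH_VALIDITY_DAYS = 30

def wash_still_valid(wash_date: date, planned_contact_date: date) -> bool:
    """True if the wash happened no more than 30 days before the contact date."""
    days_since_wash = (planned_contact_date - wash_date).days
    return 0 <= days_since_wash <= WASH_VALIDITY_DAYS

print(wash_still_valid(date(2026, 3, 1), date(2026, 3, 28)))   # True: 27 days
print(wash_still_valid(date(2026, 1, 15), date(2026, 3, 28)))  # False: 72 days
```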

But the DNCR wash only checks current registration status. It does not validate whether the underlying data is still accurate. A phone number can pass the wash simply because whoever holds it has not registered, yet that holder may no longer be the contact named in the record. If the data is six months old, the wash occurs against a record that may no longer belong to the intended contact.

“Opt-in verified” has similar timing considerations. A contact may have opted in to communications at the point of original compilation. Whether that consent remains valid after the contact has changed roles, left the company, or moved to a different email address is a separate question. The verification happened; it just happened at a point in the supply chain that may no longer reflect current reality.

What Buying From Source Actually Means

The phrase “buying from source” gets used loosely, sometimes to describe any transaction that skips a reseller. More precisely, it means purchasing data directly from the entity that compiled it from primary sources.

This changes three things.

First, freshness is verifiable. The compiler knows when each record was created and last validated. They can provide compilation dates at the order level, and buyers can specify recency requirements as part of their data brief.

Second, accountability is direct. If data quality issues emerge, the buyer has a single-point relationship with the entity responsible for the underlying records. There is no ambiguity about where the problem originated or who should address it.

Third, pricing reflects actual value. Source pricing excludes reseller margins. The buyer pays for compilation, verification, and delivery — not for the cost of data passing through intermediary balance sheets. The savings vary depending on how many layers are bypassed, but they are typically significant enough to change campaign economics.

Why This Matters for Agencies

Agencies sit at a specific point in the supply chain. They are close enough to the end client to feel performance pressure directly, but often far enough from the source to have limited visibility into data quality and age.

This creates a structural tension. The agency wants campaign performance. The data they purchase has been through multiple hands, each adding lag and cost. By the time the campaign runs, a meaningful percentage of records may be degraded — but the agency has no way to know this from the purchase transaction.

Agencies that build direct relationships with compilers change their position in the supply chain. They gain access to fresher data, clearer accountability, and pricing that reflects source economics rather than stacked margins.

Not every campaign requires this level of supply chain diligence. But for high-stakes work where performance pressure is real and data quality directly affects results, knowing where your data actually comes from is not optional. It is the difference between understanding your inputs and hoping for the best.

If you are curious what source pricing looks like for your next campaign — or want to compare what you are paying now against what direct access costs — we can send you an agency rate card within one business day.