M11 gives the protocol structure. E6(R3) gives it the quality discipline. The readiness question is whether your organization can operate both as one system.
Introduction
Many clinical development leaders and emerging biotech teams have now read ICH E6(R3). Many are tracking ICH M11. Far fewer have asked the more operational question: do our people, processes, vendors, and systems actually behave as if protocol design, structured content, and quality planning are part of one operating model?
That distinction matters. E6(R3) strengthens the expectation for quality by design, proportionate controls, and critical-to-quality (CtQ) thinking. M11 introduces a harmonized, structured protocol model that supports more consistent review, exchange, and downstream execution. Each is important on its own. Together, they create the foundation for AI-enabled regulatory and quality workflows, including audit acceleration and structured document review, because structured content is what algorithms can evaluate, compare, and reuse more reliably.
In Europe, the E6(R3) Principles and Annex 1 came into effect on 23 July 2025. ICH M11 was adopted at Step 4 on 19 November 2025 and is now moving through regional implementation. Sponsors should confirm applicable effective dates and implementation expectations directly with regional health authorities. The strategic direction, however, is unambiguous: the protocol is becoming both a quality-design instrument and a structured data asset.
I have previously written about ICH M11 as a structural shift in protocol standardization and AI-enabled clinical development (see my March 2026 white paper on kushdhody.com). This post takes the next step: operational readiness, specifically how M11 and E6(R3) converge in day-to-day protocol development, governance, vendor oversight, and quality planning.
Why readiness does not equal awareness
Awareness is a training milestone. Readiness is an operating capability.
E6(R3) is not a template refresh. It is built around a quality-by-design mindset, proportionate controls, and identification of critical-to-quality factors before the protocol is locked. In practical terms, this means quality planning must influence protocol design rather than being documented after the fact.
M11 is not a Word replacement. It is structured, machine-readable protocol content governed by defined sections, data elements, controlled terminology, and a technical specification that supports exchange and reuse across stakeholders and systems.
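To make "structured, machine-readable protocol content" concrete, here is a deliberately simplified sketch. The field names below are invented for illustration and are not the actual ICH M11 or CDISC USDM element names; the point is what a machine can do with structure that it cannot do with narrative text, such as automatically verifying that every objective points at a defined endpoint.

```python
# Hypothetical illustration only: a simplified, USDM-inspired structure.
# Field names are invented for this sketch, NOT actual M11/USDM elements.
protocol = {
    "protocol_id": "ABC-123",
    "version": "1.0",
    "objectives": [
        {
            "id": "OBJ-1",
            "level": "primary",  # controlled term, e.g. primary/secondary
            "text": "Evaluate efficacy of Drug X vs placebo at Week 12",
            "endpoint_ids": ["EP-1"],
        }
    ],
    "endpoints": [
        {
            "id": "EP-1",
            "text": "Change from baseline in score Y at Week 12",
            "estimand": {  # attributes per the ICH E9(R1) framework
                "population": "ITT",
                "treatment": "Drug X 10 mg vs placebo",
                "variable": "Change from baseline in score Y",
                "intercurrent_event_strategy": "treatment policy",
                "population_level_summary": "difference in means",
            },
        }
    ],
}

def validate(p: dict) -> list[str]:
    """Cheap structural checks a downstream system could run automatically.

    This is the practical difference from a Word document: a machine can
    confirm every objective references a defined endpoint."""
    errors = []
    endpoint_ids = {e["id"] for e in p.get("endpoints", [])}
    for obj in p.get("objectives", []):
        for ep_id in obj.get("endpoint_ids", []):
            if ep_id not in endpoint_ids:
                errors.append(f"{obj['id']} references missing endpoint {ep_id}")
    return errors

print(validate(protocol))  # [] -> cross-references are consistent
```

Nothing about this sketch is vendor-specific; it simply shows why "paste the narrative into the M11 template" misses the point. Structure that a validator can walk is what enables downstream reuse.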
What I observe in governance reviews is that awareness of both rarely translates into the capability to deliver either. The value is only realized when the two are run as one operating model: quality design shapes the protocol, and structured content carries that design into downstream execution.
Three failure modes I see in 2026
1. Template-first teams. They download the M11 template, paste legacy narrative content into it, and declare progress. The protocol is structurally shaped but not structurally meaningful. Downstream systems still cannot consume it without manual interpretation and rework.
2. Quality-later teams. They add a critical-to-quality section after the design is locked. That turns E6(R3) into a documentation artifact rather than a design discipline. The protocol appears quality-oriented on the surface but does not use quality thinking to make earlier decisions.
3. Two-track teams. M11 is owned by medical writing. E6(R3) is owned by quality. Data management, biostatistics, clinical operations, and vendors join later. The workstreams do not truly meet until the first amendment, when the cost of misalignment becomes visible.
The common denominator is not lack of intelligence or effort. It is the absence of a shared operating model. Protocol modernization fails when structure, quality, statistics, operations, and systems are treated as separate workstreams.
The 10-point readiness test
Score your program honestly: 1 point for each "Yes." This is the diagnostic I find most useful in governance reviews. It is intentionally practical because readiness should be observable in process, roles, vendor evidence, and downstream reuse.
- CtQ workshop before authoring. Have we run a critical-to-quality workshop before protocol authoring and produced a CtQ register mapped to specific protocol sections?
- Objectives, endpoints, and estimands. Have we modeled objectives, endpoints, and estimands as repeatable protocol components, applying the ICH E9(R1) estimand framework to each primary and key secondary endpoint?
- QTL linkage. Have we pre-specified Quality Tolerance Limits, linked to CtQ factors, and paired with escalation pathways consistent with the E6(R3) quality management approach?
- Structured authoring environment. Are we authoring in a CeSHarP-conformant, USDM-compatible environment rather than treating Word as the system of record?
- Controlled terminology ownership. Do we have a terminology steward and a structured-content editor on the protocol team alongside the medical writer?
- Protocol as dataset. Do we version-control the protocol as a structured data asset, not just a document, with a named owner for the structured-content lifecycle?
- Downstream reuse. Does our protocol data populate EDC build, IRB/ethics submission, CTMS setup, and other downstream workflows without re-keying the same information?
- Amendment impact. Do we understand the structured-data delta of a typical amendment — not just the tracked-changes count — so downstream rework can be forecast?
- Vendor evidence. Have CROs and technology partners demonstrated M11/USDM readiness using a real protocol, not only a roadmap slide or sales demo?
- Cross-functional governance. Do we operate a joint M11 + E6(R3) governance forum across medical writing, biostatistics, clinical operations, data management, regulatory, quality, and technology?
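On the "amendment impact" point, the idea of a structured-data delta can be sketched in a few lines. This is a hypothetical illustration, not a description of any vendor's tool: if two protocol versions are held as nested structures, a recursive diff by path reports which elements changed, which is what downstream rework forecasting actually needs, rather than a tracked-changes count.

```python
def protocol_delta(v1: dict, v2: dict, path: str = "") -> list[str]:
    """Recursively list the paths whose values changed between versions.

    A structured delta like this lets teams forecast which downstream
    builds (EDC, CTMS, ethics packets) an amendment will touch."""
    changes = []
    for key in sorted(set(v1) | set(v2)):
        p = f"{path}/{key}"
        if key not in v1:
            changes.append(f"added:   {p}")
        elif key not in v2:
            changes.append(f"removed: {p}")
        elif isinstance(v1[key], dict) and isinstance(v2[key], dict):
            changes.extend(protocol_delta(v1[key], v2[key], p))
        elif v1[key] != v2[key]:
            changes.append(f"changed: {p}")
    return changes

# Hypothetical amendment: visit window widened, one endpoint added.
v1 = {"eligibility": {"age_min": 18},
      "visits": {"week12_window_days": 3},
      "endpoints": {"EP-1": "Change from baseline at Week 12"}}
v2 = {"eligibility": {"age_min": 18},
      "visits": {"week12_window_days": 7},
      "endpoints": {"EP-1": "Change from baseline at Week 12",
                    "EP-2": "Responder rate at Week 12"}}

for change in protocol_delta(v1, v2):
    print(change)
# added:   /endpoints/EP-2
# changed: /visits/week12_window_days
```

Two changed paths, each mappable to specific downstream systems: that is a forecastable rework estimate. Twenty pages of tracked changes is not.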
Five moves for the next 90 days
A low score should not trigger a multi-year transformation program. It should trigger a focused operating-model pilot. The goal is to prove the model in one study, with one team, before scaling across the portfolio.
- Run one CtQ-first pilot on a protocol currently in design. Do not attempt enterprise-wide transformation first — prove the operating model on a single study.
- Stand up a joint M11 + E6(R3) steering group across medical writing, biostatistics, clinical operations, data management, regulatory, quality, and technology. Single forum, single backlog.
- Rewrite the protocol SOP as a lifecycle procedure, not a document-production procedure. The protocol is now a managed data asset.
- Insist on vendor evidence. Ask CROs and eClinical vendors for examples of structured protocol authoring, protocol data exchange, controlled terminology handling, and downstream reuse. A roadmap slide is not enough.
- Measure what the new model actually improves. Track amendment drivers, time from protocol finalization to EDC go-live, downstream setup rework, and protocol-to-submission cycle time.
The payoff
Organizations that operationalize M11 and E6(R3) together should be better positioned to reduce avoidable amendments, shorten downstream setup cycles, and support more consistent regulatory review. That value does not come from adopting a new template. It comes from designing quality into the protocol and making protocol content exchangeable by default.
The organizations that struggle will not necessarily be those that ignore the guidelines. They will be the organizations that read them, train on them, and still leave the underlying operating model unchanged. That is the invisible risk: modern language layered on top of legacy process.
For emerging biotech teams, this is also an opportunity. They often have less legacy infrastructure to unwind. If they build the protocol operating model correctly now, they can avoid years of technical debt that larger organizations are still trying to resolve.
Closing
This is the convergence I am thinking about most in 2026. Protocols are no longer just documents that describe trials; increasingly, they are structured operating assets that connect scientific intent, quality planning, regulatory review, and downstream execution.
If your team has not yet run a readiness test, this quarter is the right time to start. The practical question is simple: can your organization design quality into the protocol and carry that design forward as structured content across the study lifecycle?
If you would like the 10-point scorecard as an editable worksheet — along with the reference set I use in governance reviews — reach out through the contact form at kushdhody.com. I am happy to share it.
Further reading on kushdhody.com
- Implementing ICH M11: A CRO Leadership Perspective on Protocol Standardization and AI Integration (March 2026)
- AI-Enabled Clinical Research Operations: From Concept to Implementation (June 2025)
- Risk-Based Quality Management in a Post-Pandemic CRO Environment (September 2023)
Selected sources
- EMA — ICH E6(R3) scientific guideline
- ICH — E6(R3) Step 4 final guideline (PDF, Jan 2025)
- ICH — M11 Step 4 final guideline (PDF, Nov 2025)
- EMA — ICH M11 Step 5 guideline, template & technical specification
- FDA — M11 Template: CeSHarP
About the author
Kush Dhody, M.D., M.S. is a physician-scientist and clinical development executive with more than 20 years of experience leading global clinical programs, regulatory strategy, and CRO operations across multiple therapeutic areas. He currently serves as President of Amarex Clinical Research, LLC, An NSF Company, and is involved in AI-enabled regulatory and quality workflow innovation, including the NSF/Microsoft Azure initiative featured as a Microsoft customer story.
DISCLAIMER: The views expressed in this blog are those of the author and do not necessarily represent the official position of Amarex Clinical Research, LLC, An NSF Company, or any regulatory authority. This post reflects emerging developments in protocol standardization, quality-by-design practices, regulatory data exchange, and clinical development operating models. Adoption of any approach discussed here should be evaluated in the context of the specific product, study design, therapeutic area, regulatory jurisdiction, organizational capabilities, and applicable health authority expectations. It is intended for informational and educational purposes and should not be construed as regulatory, legal, or compliance advice.