Best Practices for Detecting Data Anomalies Suggestive of Fabrication or Misconduct

By Alan Kott, MUDr., Clinical Vice President and Practice Lead at Bracket

Data quality concerns are frequent in clinical trial data. At Bracket, we have always been focused on improving data quality in our programs. Many of our Data Quality Assurance programs, such as our Blinded Data Analytics tool, are designed to address problematic data before it spreads. As Risk-Based Monitoring and Centralized Statistical Monitoring (CSM) have become more common, much more attention is being paid to how these programs are implemented.

A recent article from TransCelerate explored the utility of statistical monitoring in identifying data fabrication in a chronic obstructive pulmonary disease (COPD) clinical trial. The paper is meant to validate some of the Centralized Statistical Monitoring recommendations already made in earlier TransCelerate white papers.

Bracket has been employing iterations of CSM for many years, in most cases in studies with difficult, subjective endpoints. Especially in Central Nervous System (CNS) studies, variability, poor inter-rater reliability, and high placebo response can negatively impact a study outcome. Vigilant, ongoing data monitoring can be an essential tool for ensuring your clinical trial is measuring what you set out to measure. For example, in our dataset of 14 double-blind phase 2 and 3 schizophrenia clinical trials, data quality concerns affect between 11.7% and 31.2% of visits, with a mean of 24.2%. Many of these data quality concerns (e.g., lack of variability, discordance, hyperconcordance) have the potential to seriously alter study results, especially if they are not randomly distributed in the dataset.

Figure 1: Any Data Quality Concern
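To make one of these concerns concrete, here is a minimal, hypothetical sketch of a "lack of variability" check: the function name, threshold, and data below are invented for illustration and are not Bracket's actual method. The idea is simply to flag visits whose rating-scale item scores are suspiciously uniform, since identical or near-identical scores across items can suggest careless or fabricated entry.

```python
# Hypothetical centralized check: flag visits whose rating-scale items
# show almost no variability (all items scored the same or nearly so).
import statistics

def flag_low_variability(visits, min_stdev=0.5):
    """Return visit ids whose item scores vary less than min_stdev.

    visits: dict mapping visit id -> list of item scores for that visit.
    """
    flagged = []
    for visit_id, item_scores in visits.items():
        if len(item_scores) > 1 and statistics.pstdev(item_scores) < min_stdev:
            flagged.append(visit_id)
    return flagged

# Example: three visits, each with six 0-6 item scores from a symptom scale.
visits = {
    "V1": [2, 4, 1, 5, 3, 2],   # normal spread -> not flagged
    "V2": [3, 3, 3, 3, 3, 3],   # identical scores -> flagged
    "V3": [4, 4, 3, 4, 4, 4],   # near-identical -> flagged
}
print(flag_low_variability(visits))  # ['V2', 'V3']
```

In practice, flags like this would be one input among many; as discussed below, a clinician reviews each finding before anything is reported out.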

Many of the recommendations in this new paper are straightforward and consistent with what Bracket and many others working on these programs are already implementing. But as the authors point out, purely statistical approaches may not be sufficient to identify data fabrication, or for that matter other data quality concerns, in subjective outcome measures. At Bracket, we take a multi-faceted approach to the problem, combining targeted statistical analysis with technologies such as audio/video recordings of assessments or computer-generated scores, and with comprehensive clinical review. Bracket has successfully implemented this methodology in a large number of global studies, closely monitoring clinical outcome data, in most cases with subjective endpoints.

Another important approach to risk mitigation is prevention. Using intelligent Electronic Clinical Outcome Assessment (eCOA) tools and predictive analytics to identify potentially problematic data before the subject is randomized into the trial appears to be a viable approach (Kott & Daniel, 2016). And when an intelligent eCOA is implemented carefully in a difficult study, data quality can be significantly improved (Miller and Feaster, 2015).

TransCelerate has earlier recommended the use of a Risk Assessment & Categorization Tool (RACT) when implementing CSM. In addition to using clinical oversight and eCOA tools to ensure good data, Bracket has partnered with CluePoints to utilize their Central Monitoring Platform and RACT for sponsors who would like to be as vigilant as possible in protecting their study data.

Many of the findings of the TransCelerate initiative are cautious. The authors write:

Statistical monitoring can be used to identify data anomalies suggestive of fabrication, noncompliance, or other nonrandom errors that need further investigation or monitoring, and should balance sensitivity against the cost of investigating false-positive sites

False positive findings can be worrisome. At Bracket, this is managed through both careful system design, which allows for these to be tracked, and through careful clinical oversight. All of our monitoring findings are reviewed by clinicians for veracity and impact before they are disseminated to a sponsor, a CRA, or an investigator for remediation. This allows for the “human touch” on every finding, and hopefully minimizes the burden of any possible false positives, especially on the sites who are working so closely with patients.

Many of the approaches to addressing these issues have evolved quickly in recent years. Bracket, along with all of the stakeholders involved in improving these processes, will continue to be flexible in how we design these programs, and consistent in our constant evaluation of the outcomes. Understanding how these interventions are working, and continuing to publish our findings from these programs, should lead to more success in future programs.

FULL CITATION: Statistical Monitoring in Clinical Trials: Best Practices for Detecting Data Anomalies Suggestive of Fabrication or Misconduct. 2016 Feb 4.
JOURNAL: Therapeutic Innovation & Regulatory Science. 2016;Vol. 50(2) 144-154
AUTHORS: Min Lin, PhD; Shiowjen Lee, PhD; Boguang Zhen, PhD; John Scott, PhD; Amelia Horne, DrPH; Ghideon Solomon, PhD; Estelle Russek-Cohen, PhD
YEAR: 2016

Bracket Announces New Mobile Suite of Applications for Clinical Trials

Wayne, PA - Bracket, a leading clinical trial technology company and specialty services provider, has released a new suite of mobile applications to support pharmaceutical and biotechnology companies running clinical trials. Bracket Patient Diary™, Bracket RTSM™, and Bracket Analytics™ are now available for download from iTunes and Google Play.


Bracket Patient Diary™ is the Company’s latest version of a native, downloadable app for patients participating in clinical trials. Bracket Patient Diary™ allows patients to report symptoms and complete diaries using a secure app installed on their own smartphone. It helps pharmaceutical companies who would like to use Bring Your Own Device, or BYOD, for patient-reported outcomes in clinical trials.

“As the complexity and data requirements of clinical trials increase, it’s essential to have the appropriate tools available for stakeholders who participate in these programs,” said Jeff Kinell, Chief Executive Officer for Bracket. “We developed these new tools with the intention of making clinical trials more efficient for pharmaceutical companies, investigator sites and patients.”

In conjunction with the new version of Bracket Patient Diary™, Bracket is also releasing platform updates to Bracket RTSM™, its industry-first native app for managing clinical supplies and randomization in clinical trials, and the first release of Bracket Analytics™, a stand-alone tool that gives study managers and sponsors a dashboard of their clinical trial progress. Bracket Patient Diary™, Bracket RTSM™, and Bracket Analytics™ are all available for download for iOS and Android smartphones.

Download Bracket apps on iTunes or on Google Play.

About Bracket

Bracket, with seven offices and more than 500 employees worldwide, is a clinical trial technology and specialty services provider dedicated to helping biopharmaceutical sponsors and contract research organizations increase the power of their clinical research data by leveraging core competencies in Science, Technology, and Service. Bracket eCOA™ is a flexible platform for electronic clinical outcomes assessments. Bracket RTSM™ is a best-in-breed, scalable and configurable clinical IRT solution for the life sciences industry. Bracket Rater Training and Quality Assurance improve outcomes through customized training and quality assurance programs.

ISPOR 21st Annual International Meeting


This year Bracket will be exhibiting and presenting at ISPOR’s 21st Annual International Meeting, May 21-25 in Washington, DC. The meeting covers the science of health economics and outcomes research. Bracket will be presenting new research that focuses on user experience design in electronic clinical outcome assessments, establishing equivalence of electronic clinician-reported outcome measures, and utilizing a BYOD mobile app to collect patient diaries and dosing information in a phase II clinical trial.

  • Utilizing a BYOD Mobile App To Collect Patient Diaries and Dosing Information In a Phase II Clinical Trial (PRM11)
  • Incorporating User Experience Design into Electronic Clinical Outcome Assessment System Design (PRM16)
  • Establishing Equivalence of Electronic Clinician-Reported Outcome Measures (PRM22)

Bracket is presenting at Booth 106.

Bracket Attends Two Outsourcing in Clinical Trials Conferences in May

Bracket is excited to announce that the company will be attending two Outsourcing in Clinical Trials conferences this May: OCT Europe and OCT East Coast. Information regarding these conferences can be found at the links below.

Bracket will be exhibiting at Outsourcing in Clinical Trials - Europe, which takes place May 17-18 at the New York Hotel & Conference Centre at Disneyland Paris. This two-day conference gives senior-level pharmaceutical and biotech professionals from across Europe the opportunity to network and enhance their clinical trial strategies. If you are attending, please stop by booth 71!

More information on the Europe conference here.

Bracket will also be exhibiting at Outsourcing in Clinical Trials - East Coast, which takes place May 25-26 at the Radisson Valley Forge Casino in King of Prussia. This well-established two-day event will explore in depth the specific challenges faced in company trials. Additionally, discussions will cover the latest in clinical innovation and patient and site engagement, with presentations from leading companies in the area providing their expertise on these topical issues. Please stop by booth 69 to see what we have to offer!

More information on the East Coast conference here.

Reflecting on the Use of Adaptive Design in Clinical Trials

By Scott Hamilton, PhD, Principal Biostatistician at Bracket

The use of Adaptive Design in clinical trials has received a lot of attention over the past few years, with many advocates evangelizing for its potential benefits to both patients and drug developers in identifying new cures in a faster and more efficient manner. But it hasn’t seen the uptake many expected.

The FDA has been proactive in working with pharmaceutical and biotech companies to provide background on how to consider adaptive design, and a draft guidance was issued in 2010. In a recent article, several representatives from the FDA’s Center for Biologics Evaluation and Research (CBER) gathered data on their experience with adaptive design. In “CBER’s Experience with Adaptive Design Clinical Trials,” Lin et al. present a summary of the types of adaptive design proposals before and after the 2010 FDA draft guidance on adaptive design in clinical trials. They scoured the CBER electronic document room for all proposals submitted between 2008 and 2013 that had any adaptive components.

Among the 12,095 submissions, 1,225 required formal statistical review, and of those, 140 had adaptive elements. The authors break down those submissions by adaptive design subtype: dose-finding, adaptive randomization, group sequential, sample size re-estimation, and two-stage (seamless and non-seamless). They also provide other breakdowns by study phase, therapeutic area, outcome type, etc. A plot of the number of adaptive design proposals by year shows the count increasing until 2011 and then declining by 2013 back to its 2008 level.

My own experience and communication with colleagues suggest that the 2010 draft guidance did a great deal to clarify the FDA’s level of comfort with adaptive designs. In the draft guidance, some of the newer and possibly more aggressive types of adaptive designs were described as “less well understood.” This probably caused industry statisticians, who are generally a conservative bunch, to shy away from such designs. Another consideration in the decline in the number of adaptive design submissions could be the increased scrutiny they receive at the agency, leading to longer review times and more back and forth in sponsor/FDA interactions. For example, some of the seamless phase II/III Bayesian designs don’t always provide a transparent explanation of how the overall Type-I error rate is preserved. Thus, additional calculations may be requested, increasing the time from submission of a proposal to the point at which the FDA is comfortable with it.
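To illustrate why agency reviewers ask for those calculations, here is a small Monte Carlo sketch (my own illustration, not taken from the article) showing how a naive two-stage design that tests at an unadjusted alpha of 0.05 at both an interim look and the final analysis inflates the overall Type-I error above the nominal level:

```python
# Illustrative simulation: overall Type-I error of a naive two-stage
# design with an unadjusted interim look (data simulated under H0).
import math
import random

def z_test_p(mean, n, sigma=1.0):
    """One-sided p-value for H0: mu <= 0, given a sample mean of n observations."""
    z = mean * math.sqrt(n) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

def simulate_naive_two_stage(n_interim=50, n_final=100, alpha=0.05,
                             reps=20000, seed=1):
    random.seed(seed)
    rejections = 0
    for _ in range(reps):
        data = [random.gauss(0, 1) for _ in range(n_final)]  # H0 is true
        interim_mean = sum(data[:n_interim]) / n_interim
        final_mean = sum(data) / n_final
        # Reject if EITHER look is significant -- no multiplicity adjustment.
        if (z_test_p(interim_mean, n_interim) < alpha
                or z_test_p(final_mean, n_final) < alpha):
            rejections += 1
    return rejections / reps

rate = simulate_naive_two_stage()
print(round(rate, 3))  # noticeably above the nominal 0.05
```

A proper group sequential design would instead spend alpha across the two looks (e.g., O'Brien-Fleming or Pocock boundaries) so the simulated rejection rate under H0 comes back to 0.05; demonstrating that, analytically or by simulation, is exactly the kind of transparency reviewers look for.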

Nevertheless, the authors provide excellent suggestions for how to address the questions the FDA will have about adaptive designs. This article is an important survey of the impact that the 2010 draft guidance on adaptive design in clinical trials has had on industry’s enthusiasm for adaptive designs. Incorporating their suggestions into adaptive design proposals should lead to a greater level of comfort at the agency with some of the newer types of adaptive designs.

Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics

FULL CITATION: CBER’s Experience With Adaptive Design Clinical Trials. Therapeutic Innovation & Regulatory Science. 2016 Feb 22.
JOURNAL: Therapeutic Innovation & Regulatory Science. 2016;Vol. 50(2) 195-203
AUTHORS: Min Lin, PhD; Shiowjen Lee, PhD; Boguang Zhen, PhD; John Scott, PhD; Amelia Horne, DrPH; Ghideon Solomon, PhD; Estelle Russek-Cohen, PhD
YEAR: 2016