Clinical Trial Performance Analytics Explained


Clinical Trial Performance Analytics: Data Is the Core of Research

Big data is the core of medical research. Because clinical studies face strict protocols, unpredictable delays, and constant cost pressure, sponsors and contract research organizations (CROs) need robust data to succeed. Abundant, well-synthesized medical information can help researchers manage the complexity and demands of today’s medical research.

Clinical trials rely on multifaceted datasets and clinical trial performance analytics to improve drug development, health outcomes, and revenue. Clinical trial performance metrics provide information across systems to track execution, manage logistics, and detect risks across multiple sites and regions. More precisely, performance metrics are discrete units of research information that can be used to improve internal and operational performance, as well as interoperability. Performance analytics can reshape the future of research.

Usage and Benefits of Clinical Trial Performance Analytics

As clinical trials increasingly leverage health technologies, they need robust data and sophisticated analytics to flourish (Simpao et al., 2014). It’s not surprising that researchers have started to adopt analytics and performance indicators to analyze and visualize clinical trial data. Note that in research, the employment of analytics is defined as the systematic use of medical information, together with its quantitative and qualitative analysis. Analytics can boost quality performance, risk assessment, decision-making, resource allocation, and relationships with sponsors. Visual analytics, for instance, is an essential part of clinical research and site selection.

Interestingly, experts state that today’s performance analytics differ from traditional operational metrics and standard key performance indicators. While standard indicators show what factors need to be assessed (e.g., low enrollment), performance analytics reveal the actual cause of a problem (e.g., strict inclusion criteria). Valuable clinical trial performance metrics, in particular, are data points which incorporate quality assessment and predictive modeling techniques. They provide insights into operational performance and quality. Note that other essential factors that boost internal operations and relationships with sponsors include benchmarking, high-quality management, and competitor research. It’s no surprise that sophisticated clinical trial management system (CTMS) platforms and heat maps have started to adopt clinical trial performance metrics to improve medical research and site performance.

Types of Clinical Trial Performance Metrics

Clinical trial performance metrics, as explained earlier, reveal numerous benefits over traditional indicators. Performance metrics are vital in quality assessment, site selection, and risk management. Such metrics can be used not only to assess a clinical trial but also to improve outcomes and support action plans. Therefore, sponsors must decide on a strict set of metrics and assessment goals. Generally speaking, there are two types of metrics which can help researchers measure risks and performance: standardized metrics (e.g., site quality performance) and study-specific indicators. Clinical trial performance metrics can also be categorized into three groups: start-up, maintenance, and close-out metrics.

Start-up metrics: Start-up metrics may include investigator recruitment, regulatory protocols, the cycle time for received and finalized budget, vendor selection, first patient first visit (FPFV), cycle time to site activated, and approval rate.

Maintenance metrics: During the actual trial, vital maintenance metrics involve site quality, consent forms, timelines/deadlines for reports, meeting criteria, randomization methods, privacy, and trial master file/documentation.

Close-out metrics: The close-out metrics tackle aspects, such as last patient last visit (LPLV), analysis and errors, and time to closure of sites.

Although each study has its peculiarities, short cycle times across different metrics (e.g., budget, protocols) usually indicate site and staff professionalism and effectiveness. For example, since institutional review board (IRB) approval is vital, a site with a strong record of timely IRB approvals for new trials may be preferred by sponsors and external organizations. After all, clinical trial performance metrics should support good clinical practice, internal performance, and relationships with sponsors.
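To make the cycle-time idea concrete, below is a minimal Python sketch that computes start-up cycle times from milestone dates. The record fields and dates are hypothetical assumptions, not taken from any particular CTMS product.

```python
from datetime import date

# Hypothetical site record: milestone dates as they might appear in a
# CTMS export. Field names are illustrative assumptions.
site_record = {
    "site_id": "US-014",
    "budget_received": date(2023, 1, 10),
    "budget_finalized": date(2023, 2, 21),
    "irb_submitted": date(2023, 2, 1),
    "irb_approved": date(2023, 3, 3),
}

def cycle_time(record, start_key, end_key):
    """Return the number of days between two milestones."""
    return (record[end_key] - record[start_key]).days

print("Budget cycle time:", cycle_time(site_record, "budget_received", "budget_finalized"), "days")  # 42
print("IRB cycle time:", cycle_time(site_record, "irb_submitted", "irb_approved"), "days")           # 30
```

Sponsors could compare such cycle times across candidate sites; consistently short times support the professionalism signal described above.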

High-Quality Clinical Sites and Performance Metrics

Prior to the commencement of a clinical trial, sponsors and researchers must consider multiple factors, possibilities, and obstacles. Site selection is one of the most crucial aspects of digital health research. Sites vary from universities to independent medical institutions. Note that clinical trials with multiple sites and across various regions are more likely to face errors and bias. Site performance is fundamental (Whitham et al., 2018). Sites which show short cycle times, enroll subjects on a regular basis, and document high compliance in accordance with protocols usually deliver high-quality data and success.

Interestingly, following a systematic literature review, a Delphi study identified 117 performance metrics across 21 studies (Whitham et al., 2018). Thirty metrics were excluded because they were unclear or too study-specific. Thirty-two experts in three focus groups then identified 28 meaningful metrics and categorized them into four domains (recruitment and retention, data quality, protocol compliance, and staff). These metrics were used to create an online Delphi survey, to which 211 respondents were invited. The final round of the study was a consensus meeting, at which the research team settled on the following eight metrics, divided into three domains:

Recruitment and retention: Recruitment of participants is one of the most challenging aspects of research, so recruitment and retention metrics are vital. Actual recruitment versus target recruitment is an important indicator: a high number of participants recruited into the study by the site of interest suggests an effective site. Another vital metric is the percentage of participants who have consented. The third indicator in this category is the number of randomized subjects who have withdrawn their consent (to any further participation or follow-up procedures).

Data quality: Multiple aspects, such as sample size and study design, affect data quality, which is essential for patient well-being and market success. A vital metric is the percentage of randomized subjects (at the site of interest) with a query concerning primary outcome data. The percentage of randomized participants with complete data for primary and important secondary outcomes should be analyzed as well. The third metric in this group is the number of participants with at least one adverse event per number of randomized subjects (at the particular research site).

Protocol compliance: Regulations and ethical considerations are vital in research, with patients being at the core of digital health. A crucial metric related to protocol compliance is therefore the percentage of participants with at least one protocol violation. Last but not least, researchers must assess the percentage of randomized participants who started the allocated intervention as defined in the protocol.
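The eight consensus metrics above are, in essence, simple ratios over routine site counts. As an illustration only, the following Python sketch computes them from a hypothetical per-site tally; Whitham et al. (2018) define the metrics themselves, not this data format, and all field names and counts are assumptions.

```python
# Hypothetical counts for one site; field names are assumptions.
site = {
    "target_recruitment": 50,
    "recruited": 42,
    "approached": 120,
    "consented": 48,
    "randomised": 40,
    "withdrew_consent": 2,
    "primary_outcome_queries": 3,
    "complete_outcome_data": 36,
    "with_adverse_event": 5,
    "protocol_violations": 4,
    "started_intervention": 38,
}

def pct(numerator, denominator):
    """Percentage, guarding against an empty denominator."""
    return 100.0 * numerator / denominator if denominator else 0.0

metrics = {
    "recruitment vs. target (%)": pct(site["recruited"], site["target_recruitment"]),
    "consent rate (%)": pct(site["consented"], site["approached"]),
    "consent withdrawals (%)": pct(site["withdrew_consent"], site["randomised"]),
    "primary-outcome queries (%)": pct(site["primary_outcome_queries"], site["randomised"]),
    "complete outcome data (%)": pct(site["complete_outcome_data"], site["randomised"]),
    "participants with >=1 adverse event (%)": pct(site["with_adverse_event"], site["randomised"]),
    "protocol violations (%)": pct(site["protocol_violations"], site["randomised"]),
    "started allocated intervention (%)": pct(site["started_intervention"], site["randomised"]),
}
for name, value in metrics.items():
    print(f"{name}: {value:.1f}")
```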

In research, it is generally accepted that sites with a good historical track record will continue to deliver high-quality performance, and sites that enroll and retain subjects are considered reliable. Nevertheless, factors such as protocol complexity and excessive lab tests can also affect interpretations and future results. Interpretations, predictions, and generalizations should be based on robust data. For instance, if research teams report positive outcomes in their cohorts, but not in the general population, sponsors should not draw conclusions about site quality.

A recent study showed that operational data from central lab services can be a powerful indicator of site performance, and that such data can be used to improve operational planning and quality (Yang et al., 2018). Yang and colleagues used metadata about laboratory kit shipments to clinical sites (e.g., shipment dates) to reconstruct past performance and provide insights into the operational performance of other sites. As a result, the research team created a set of metrics from over 14,000 protocols, 1,400 indications, 230,000 investigators, and 23 million patient visits. Note that Yang and colleagues (2018) concluded that country and regional trends are vital, particularly in the study of Alzheimer’s disease.
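The intuition behind this approach can be sketched in a few lines: if each laboratory-kit shipment roughly tracks patient-visit activity, counting shipments per month yields a crude activity profile for a site, and gaps in that profile may flag stalled enrollment. The data below are hypothetical; Yang et al. (2018) worked from real central-lab metadata at far larger scale.

```python
from collections import Counter
from datetime import date

# Hypothetical lab-kit shipment dates for a single site.
shipments = [
    date(2023, 1, 5), date(2023, 1, 19), date(2023, 2, 2),
    date(2023, 2, 16), date(2023, 2, 27), date(2023, 4, 11),
]

# Monthly shipment counts as a rough proxy for patient-visit activity.
monthly_activity = Counter((d.year, d.month) for d in shipments)
for (year, month), count in sorted(monthly_activity.items()):
    print(f"{year}-{month:02d}: {count} shipment(s)")
# The missing month (2023-03) would stand out as a possible enrollment gap.
```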

Clinical Trial Performance Analytics and Interactive Data Visualization Methods

Performance analytics and visualization methods go hand in hand. Visualization techniques can help experts understand performance indicators and optimize site networks. Visualizations that represent historical enrollment data may include tree-maps, color-coded charts, and enrollment targets. One of the main benefits of data visualization is the fact that findings are clear and easy to analyze. Thus, medical specialists and personnel without IT and data mining knowledge can easily find patterns and mitigate risks.

What’s more, visuals improve communication and interoperability: complex ideas can be communicated in real time and with precision. Simulation techniques, dimensional modeling, and interactive maps help researchers present large volumes of data – something that standard tabular models cannot accomplish. Visual analytics in health care can support data exploration, hypothesis generation, recruitment, data quality, and study management. Such methods apply not only to clinical trials but to a wide range of fields, such as genomics, disease surveillance, and clinical decision support. In addition, dynamic user interfaces can present data in detail and allow comparisons of different designs and protocols. Interactive dashboards and single plots allow analysis of historical data, predictions of future performance, and trend spotting (Yang et al., 2018).
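As a simple illustration of a color-coded enrollment chart, the matplotlib sketch below shades each site by how close it is to its enrollment target. Site names, counts, and color thresholds are assumptions for demonstration, not values from any cited study.

```python
import matplotlib.pyplot as plt

# Hypothetical enrollment data: actual vs. target per site.
sites = ["Site A", "Site B", "Site C", "Site D"]
enrolled = [48, 22, 35, 9]
targets = [50, 40, 30, 30]

# Color-code each bar by progress toward the enrollment target.
ratios = [e / t for e, t in zip(enrolled, targets)]
colors = ["green" if r >= 0.9 else "orange" if r >= 0.5 else "red" for r in ratios]

fig, ax = plt.subplots()
ax.bar(sites, enrolled, color=colors)
ax.scatter(sites, targets, color="black", zorder=3, label="Target")
ax.set_ylabel("Participants enrolled")
ax.set_title("Enrollment vs. target by site")
ax.legend()
plt.show()
```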

Performance Analytics in Practice: The Future of Research

In medical research, planning is the key to success. From literature reviews to protocols, planning underpins the effective use of big data and performance metrics. For instance, one of the main goals of digital health research and drug development is to cut costs and accelerate drug authorization. While well-defined, standardized performance metrics can benefit clinical trials, each study has its own peculiarities and complexity, so experts should set meaningful data points and goals. To assess the progress of a clinical trial, meaningful metrics must measure variance, bring new insights, and lead to change.

Most of all, performance analytics and analysis should lead to action plans. As explained above, data vary between sites, between clinical trials, and over time. If a metric shows that a given action has not been beneficial, experts should reassess their methods and change them. Regular reassessment of the metrics themselves is also crucial to identify weaknesses. Another factor which affects performance analytics is reproducibility: reproducibility helps experts verify results for different operations and across different time periods. At the same time, clinical trial performance metrics should be compared against universal benchmarks in order to assess areas of competitiveness and marketing advantages. Since drug development is a costly business, with a huge percentage of drugs failing to reach the market, planning and performance analytics become crucial. Above all, performance analytics must be employed to improve patient outcomes. As patients are the focus of digital health research, sponsors, researchers, and regulatory bodies must find a balance between costs and benefits – with the sole purpose of improving drug development and patient well-being.

Clinical Trial Performance Analytics: Conclusion

Clinical research is a complex field, with health technologies (e.g., bring-your-own-device (BYOD) approaches) influencing clinical trials and business strategies. Performance analytics have the potential to improve clinical trials, internal and operational performance, and relationships with sponsors. Analytics can be used to track operations, boost outcomes, and mitigate risks. Clinical trial performance metrics, in particular, are vital data points which can provide information about research sites and performance. With standardized and site-specific metrics, researchers can assess historical site records and predict future outcomes. By implementing quality assessment, predictive modeling, and visualization methods, researchers can reduce costs and speed up market authorization. In fact, visuals can provide valuable insights into the connections between sites, regions, and hypotheses. Perhaps one of the most valuable aspects of clinical trial performance analytics is their potential to trigger real changes and improve health outcomes.

By planning and establishing useful metrics, sponsors can evaluate sites and improve research. In the end, big data, performance analytics methods, and health technologies are reshaping the future of drug development and clinical trials. Medical data can bring clinical research a step closer to success.

References

  1. Simpao, A., Ahumada, L., & Galvez, J. (2014). A review of analytics and clinical informatics in health care. Journal of Medical Systems, 38 (4), p. 45.
  2. Whitham, D., Turzanski, J., Bradshaw, L., Clarke, M., Culliford, L., Duley, L., Shaw, L., Skea, Z., Treweek, S., Walker, K., Williamson, P., & Montgomery, A. (2018). Development of a standardised set of metrics for monitoring site performance in multicentre randomised trials: a Delphi study. Trials.
  3. Yang, E., O’Donovan, C., Phillips, J., Atkinson, L., Ghosh, K., & Agrafiotis, D. (2018). Quantifying and visualizing site performance in clinical trials. Contemporary Clinical Trials Communications, 9, p. 108-114.

Risk-based Monitoring Tools in Drug Development and Research


Risk-based Monitoring and Risk-based Monitoring Tools in Research

Risk-based monitoring (RBM) tools are essential in drug development and digital health research. Due to the complexity of medical research (with strict regulatory practices, documentation requirements, and ethical considerations), clinical trial monitoring is needed to ensure protocol compliance, data quality, and participant safety. To help sponsors and researchers overcome possible challenges and financial burdens, regulatory bodies worldwide agree that a risk-based approach to monitoring is crucial. The ICH-GCP guidelines, for instance, state that clinical trial observations and risk-based monitoring tools are mandatory in health research and drug development (Hurley et al., 2016).

Unlike burdensome standard procedures, such as on-site visits and source data verification, risk-based monitoring is highly efficient. It is an innovative practice which can help researchers reduce monitoring frequency and combine different assessment types (e.g., on-site and remote monitoring). What’s more, it tackles various factors of research, such as high-risk sites, triggered events, data quality, and patients’ well-being.

Risk-based monitoring tools, in particular, allow experts to implement risk-based assessment, tailor their monitoring practices, reduce risks, and optimize clinical outcomes. Note that critical data and risks are often associated with data integrity, novel products, research bodies, study design, and participants’ safety. Risk-based monitoring tools can also support ongoing supervision in real time and improve long-term health benefits. They can help reshape the nature of pharmaceutical drug discovery and digital health.

Risk-based Monitoring Tools in Detail

Researchers worldwide agree that risk-based monitoring can improve drug development and patient outcomes. Risk-based monitoring tools can help researchers implement risk-based monitoring in practice and overcome challenges in software technology. They can improve the conduct, management, and audit of clinical trials at all phases of research. A recent literature review revealed that although there is no existing set of regulations regarding risk-based practices, risk categories are normally defined as low, medium, and high (Hurley et al., 2016). Note that low-risk studies require less data monitoring. Having clear categories and a clear taxonomy can help ensure trial accuracy, patient safety, and sound data collection. It’s interesting to mention that Phase I studies are usually flagged as high-risk.

The implementation of risk-based monitoring tools should also account for the different types of monitoring in research. Note that centralized monitoring, statistical assessment, reduced monitoring, remote observations, and on-site procedures are among the most effective monitoring techniques (Molloy & Henley, 2016). Interestingly, centralized monitoring is a major component of risk-based monitoring and reveals numerous advantages over standard assessments.

In addition, Hurley and colleagues (2016) concluded that risk-based monitoring tools could be paper-based, operated as Software as a Service (SaaS), or supported by Excel. Note that with the recent shift in digital health practices, electronic data and digital solutions are becoming more and more popular in medical research. Most of all, experts should embrace the fact that risk assessment is a continuous process and requires real-time analysis to improve drug development and health outcomes.

Taxonomy of Critical Data and Risks in Clinical Trials

For the successful implementation of risk-based monitoring tools, the detection of critical data (which can be flagged as a risk) is crucial. The most common risks in medical research are linked to data integrity and ethical regulations. Interestingly, some experts suggest using a ‘traffic light system’ during risk assessments to identify and visualize low, medium, and high risks (see the code sketch following the taxonomy below). One of the most popular taxonomies identifies 12 risks (Jongen et al., 2016), categorized into three groups:

What – Investigational medicinal products (IMP): When it comes to novel products, numerous risks can arise. One of the major unknowns in medical research is the extent of existing knowledge about a new treatment in humans. Adverse effects and toxic compounds may be lethal. Since Phase I and Phase II trials pose high risks for participants, new indications and toxicological effects must be tested thoroughly. Another major risk concerns the nature of the actual treatment: in trials where no medications for potential side effects are available, or where interventions continue without any benefit for participants, risks are high. Let’s not forget that patient safety and well-being come first. The third concern about an experimental product is its potentially large target population. This is highly relevant for data integrity and marketing authorization approval, as a drug designed for large populations may harm a large number of people. The last challenge in the investigational medicinal products category concerns high-risk drugs. Note that there are three types of risk characteristics which a product may hold: pharmacology, route of administration, and stability. As stated above, new indications, possible side effects, and storage errors must be considered in order to reduce adverse effects and negative outcomes.

By whom – Investigators, site, sponsors: Research is a complex process which involves researchers, sponsors, participants, practitioners, and committees. When it comes to staff and sponsors, professionalism, experience, and reputation become crucial. In fact, these three characteristics may affect data integrity and ethical risks. Normally, commercial organizations and experienced researchers tend to identify risks and ensure data integrity. The reputation of an organization also affects risks and outcomes; it can affect approval decisions and market authorization. Thus, training of staff, good documentation practices, and protocols become vital.

How – Design: A safe drug and professional staff are not enough to guarantee success; the study design is also crucial. Participant characteristics, for example, may correlate with risks. Trials with a large number of subjects (e.g., Phase III studies) pose both ethical concerns and risks to data integrity. When vulnerable people or participants in emergency settings are enrolled, risks also increase. Another aspect associated with high risk is a potentially high burden on participants: invasive procedures and time-consuming methods, for example, often lead to high risks. The duration of the actual treatment also needs to be considered, as longer treatments usually correlate with higher occurrences of adverse effects. The actual study design is also a crucial factor. Studies with multiple phases, sites, and research bodies – as well as those that employ new assessment tools and complex procedures – pose a high risk to data integrity. Last but not least, the conduct of the study is essential. Any breaches of the protocol must be tackled to prevent possible failures and adverse effects. Good management and documentation practices become fundamental.
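As promised above, here is the traffic-light idea as a minimal Python sketch. The 0-10 scores and thresholds are illustrative assumptions; Jongen et al. (2016) define the risk taxonomy, not a scoring scheme.

```python
# Thresholds (on an assumed 0-10 scale) mapping scores to categories.
RISK_LEVELS = [(7, "RED: high risk"), (4, "AMBER: medium risk"), (0, "GREEN: low risk")]

def traffic_light(score):
    """Map a 0-10 risk score to a traffic-light category."""
    for threshold, label in RISK_LEVELS:
        if score >= threshold:
            return label

# Hypothetical scores for a few indicators from the taxonomy.
indicators = {
    "first-in-human IMP": 9,
    "vulnerable population enrolled": 5,
    "experienced sponsor and staff": 1,
}
for indicator, score in indicators.items():
    print(f"{indicator}: {traffic_light(score)}")
```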

Benefits of Risk-based Monitoring Tools

Risk-based monitoring tools are essential in clinical research. They reveal numerous benefits in practice, such as reduced costs, decreased time to approval of a new product, and lower risks. It’s not a secret that drug development is a complex process. It starts with research on a molecular level to support the understanding of the disease. Once a possible target and compounds have been identified, preclinical testing in non-humans can begin, followed by clinical trials in humans, marketing, and patenting (Torjesen, 2015). Unfortunately, statistics show that only about five in 5,000 candidate drugs enter human testing, with only one of these five products being approved. The entire process is long (up to 12 years) and costly (more than $2.6 billion). Therefore, risk-based monitoring tools are needed to improve drug research:

High-quality data and sources: Risk-based monitoring tools can ensure high-quality data and benefit patients’ well-being. It’s not a secret that standard methods, such as source data verification, can be burdensome and ineffective. Due to the complexity of clinical research, risk-based approaches and centralized monitoring become essential. Novel assessments and tools can help experts focus on critical data and problematic sites in order to improve outcomes.

Reduced costs and delays: As explained above, pharmaceutical research is a costly business. Data shows that one-third of a study’s budget goes to on-site monitoring. Consequently, more and more regulatory bodies, including the FDA, endorse risk-based monitoring tools to reduce costs, fraudulent data, and delays. Such tools can reduce the frequency of monitoring, decrease financial burdens, and improve data integrity.

Better outcomes and digital solutions: Risk-based monitoring tools can reduce the risks of errors. The use of automated reviews and a central risk dashboard can lead to better analysis and results. It also allows for cross-site comparison and interoperability. After all, digital solutions can improve health outcomes and reshape the entire nature of digital health research.

Implementation of Risk-based Monitoring: Challenges in Practice

Although risk-based monitoring is fundamental in drug development, the lack of consensus between regulatory bodies and research organizations may become problematic (Agrafiotis et al., 2018). For instance, the interchangeable use of terms such as risk-based monitoring, centralized monitoring, and quality by design is a clear example of the need for integrity and harmonization. Note that risk-based monitoring has been defined as an approach that “directs sponsor oversight activities on preventing or mitigating important and likely risks to data quality and to processes critical to human subject protection and trial integrity by appropriate use of centralized monitoring and reliance on technological advances.” Therefore, it requires the implementation of both centralized monitoring and quality by design. To overcome research ambiguities and possible challenges, regulatory bodies state that any risk-based monitoring program should include:

Data collection: Any risk-based approach requires a robust flow of information (both automatic and manual) that can be transferred from each study site to a central monitoring system. Data collection in real time, 24/7, can provide essential medical information about patient outcomes, well-being, and financial burdens. Note that novel technologies, real-time data entry, and sophisticated monitoring capabilities allow remote monitoring based on probability and simulations. This approach can help researchers and sponsors spot critical data or potential risks before an incident actually occurs.

Dashboard monitoring: Experts agree that dashboard monitoring has become an essential risk-based monitoring tool. It integrates medical data into a single platform, which allows automatic data flow and faster decision-making. A central dashboard also facilitates data collection and analysis. What’s more, dashboard monitoring can provide visuals of thresholds for given risks and solutions. Experts can customize (e.g., color-code) their system to categorize risks (e.g., by site). After all, visuals help researchers make sense of their rich data.

Statistical analysis and digital solutions: Risk-based monitoring tools help experts perform additional statistical analyses and create histograms and charts. Analyses depend on risk-based thinking and predictive analytics, which can reduce risks and triggers. Note that depending on the identified risks, a targeted on-site investigation and traditional source data verification may be required as an additional technique. As explained above, this can reduce costs, errors, and delays.
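To illustrate the kind of statistical check a central dashboard might run, the sketch below flags sites whose data-query rates deviate strongly from the cross-site distribution, marking them for targeted on-site review. The rates and the z-score threshold are illustrative assumptions, not a prescribed method.

```python
import statistics

# Hypothetical per-site indicator: data queries per 100 data points.
query_rates = {"Site A": 2.1, "Site B": 2.4, "Site C": 8.7,
               "Site D": 1.9, "Site E": 2.6, "Site F": 2.2}

mean = statistics.mean(query_rates.values())
stdev = statistics.stdev(query_rates.values())

for site, rate in query_rates.items():
    z = (rate - mean) / stdev
    status = "FLAG for targeted on-site review" if abs(z) > 1.5 else "OK"
    print(f"{site}: rate={rate}, z={z:+.2f} -> {status}")
```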

Risk-based Monitoring Tools: Conclusion

Since medical research and drug development are complex scientific endeavors, novel research methods are required. Risk-based monitoring, for instance, has numerous advantages over standard monitoring practices. Risk-based monitoring is defined as the process of ensuring data integrity and ethical principles. It moves away from traditional practices, such as source data verification, and employs sophisticated tools in order to assess clinical trials and novel interventions. New predictive analytics can help researchers identify risks and improve outcomes.

Risk-based monitoring tools help researchers implement risk-based monitoring, conduct efficient clinical trials, reduce on-site visits, cut costs, and ensure data integrity. As the FDA defines three crucial steps in risk-based monitoring (identifying critical data, performing a risk assessment, and developing a monitoring plan), risk-based monitoring tools can support research teams at each step. Note that technological advancements also benefit risk-based monitoring, centralized monitoring, predictive analytics, and visualization techniques. Innovative dashboards and online platforms such as Qolty can enhance data flow, risk-based monitoring, cross-site comparison, and research communication.

In the end, risk-based monitoring tools can reshape the nature of traditional clinical trials, improve patient safety, and ensure data quality.

References

  1. Agrafiotis, D., Lobanov, V., Farnum, M., Yang, E., Ciervo, J., Walega, M., Baumgart, A., & Mackey, A. (2018). Risk-based monitoring of clinical trials: An Integrative approach. Clinical Therapeutics, 40 (7), p. 1204-1212.
  2. Hurley, C., Shiely, F., Power, J., Clarke, M., Eustace, J., Flanagan, E., & Kearney, P. (2016). Risk based monitoring (RBM) tools for clinical trials: A systematic review. Contemporary Clinical Trials, 51, p. 15-27.
  3. Jongen, P., Bogert, C., Laar, C., Notenboom, K., Hille, E., & Hegger, I. (2016). Risk indicator taxonomy for supervision of clinical trials on medicinal products. Current Medical Research and Opinion, 32 (7), p. 1269-1276. DOI: 10.1185/03007995.2016.1170671
  4. Molloy, S., & Henley, P. (2016). Monitoring clinical trials: A practical guide. Tropical Medicine and International Health, 21 (12), p. 1602-1611.
  5. Torjesen, I. (2015, May 12). Drug development: The journey of a medicine from lab to shelf. The Pharmaceutical Journal. Retrieved from https://www.pharmaceutical-journal.com/publications/tomorrows-pharmacist/drug-development-the-journey-of-a-medicine-from-lab-to-shelf/20068196.article?firstPass=false

Ecological Momentary Assessments & Food Diaries


Dietary Intake, Ecological Momentary Assessment, and Food Diaries: We Are What We Eat

Whether it’s a nutritious meat dish or a bowl of high-fat nuts, food intake is a key determinant of physical growth and cognitive development. The assimilation of nutrients plays a crucial role in human metabolism and the prevention of chronic illnesses. Given the prevalence of nutrition-related diseases, such as obesity, diabetes, cardiovascular disease, and dental problems, food intake assessments become fundamental in health care research and practice. Such evaluations can include dietary consumption data on foods, beverages, and supplements. Note that as food intake is not a static quality, eating assessments must be conducted in real-life settings to reflect the variety of food consumption, both in the short and the long term.

Hence, ecological momentary assessments (EMAs), which are among the most reliable methods to assess a patient’s daily activities, can be employed as comprehensive and structured food diaries with high ecological validity and minimal recall bias. Interestingly, food diaries are defined as records of foods, portion sizes, eating schedules, nutrition statistics, environmental factors, and feelings. Digital ecological momentary assessment tools, in particular, can reduce methodological heterogeneity in research and help users explore the pathways between dietary intake and health outcomes.

Ecological Momentary Assessments as Structured Food Diaries: Usage and Benefits

Dietary intake is a multifaceted phenomenon which lacks constant qualities. As consumption varies between meals and populations, ecological momentary assessments can improve the monitoring of food intake and provide high-quality data in real-life settings. Ecological momentary assessment methods are empirically validated structured diary techniques. Depending on the study design, ecological momentary assessments can be additionally divided into interval-contingent schedules, signal-contingent reports, event-contingent captures, and continuous reporting. Note that other common terms used to define ecological momentary assessment include experience sampling methods, ambulatory assessment, daily diary studies, real-time data capture studies, intensive longitudinal methods, and beeper studies. With high ecological validity and minimal recall bias, ecological momentary assessments reveal numerous benefits and applications:

  • Momentary appetite and energy intake: Ecological momentary assessment can facilitate the understanding of eating behaviors, momentary appetite, and biopsychosocial factors affecting nutrition. Interestingly, Kikuchi and colleagues (2015) developed an ecological momentary assessment scale (a watch-type computer and a personal digital assistant-based food diary) to evaluate participants’ (n=20) stress, mood, food intake, cravings, and appetite. The team found that momentary appetite was associated with high energy intake before meals, revealing the high ecological validity of the tool.

Note that appetite, like mood and pain experiences, is a vital patient-reported outcome, as it affects eating behaviors and varies from moment to moment. Appetite is also a sensory response to food presentation, smell, previous experience, and even social pressure.

  • Hedonic ratings and reward deficiency: Ecological momentary assessment tools can be implemented in the evaluation of food-reward perceptions, anticipation, food cravings, enjoyment, and eating behaviors. Interestingly, obesity rates are associated with people’s eating choices. Patients with higher body fat levels report fewer food wanting events per day and experience less food enjoyment after intake (Alabduljader et al., 2018). In other words, there’s a reward deficiency in obese individuals. Such findings should be implemented in dietary interventions to reduce dietary relapse and prevent nutrition-related diseases, especially in children and adolescents.

Interestingly, evidence shows that dietary lapses are associated with reduced coping strategies, both cognitive and behavioral. Carels and colleagues (2004) assessed 37 postmenopausal women via ecological momentary assessment diaries and found that coping strategies, mood, and abstinence-violation effects play a crucial role in weight-loss programs and relapse crises. Thus, ecological momentary assessment methods can be used to improve self-awareness and adaptive coping regarding temptations and dietary lapses.

  • Daily activities and affect: Understanding the connection between eating behaviors and affect, as well as their effect on daily activities (e.g., sleep), is important in both healthy and clinical populations. Ecological momentary assessment can be used to explore the psychological factors behind obesity and eating disorders and provide findings with high ecological validity. Although stress and negative emotions reveal a strong association with eating behaviors, evidence shows that positive emotions also play a crucial role in one’s diet. Macht et al. (2004) sampled 485 situations in a 7-day study and found that 37% of eating events occurred in emotionally positive situations, compared to 30% of eating situations classified as negative.

Additionally, Liao and colleagues (2018) examined high-fat/high-sugar food intake and fruit and vegetable consumption in 202 women via an electronic ecological momentary assessment. The team found that vegetable and fruit intake was associated with positive feelings. Interestingly, stress was a strong predictor of high-fat/high-sugar food consumption, particularly in obese patients.

  • Social desirability and social support: Ecological momentary assessment techniques can improve food intake assessment; such methods reduce recall and social desirability biases, which often lead to under- or over-reporting. In fact, real-life mobile-based ecological momentary assessments can improve the evaluation of weight-related behaviors and support conclusions about the effects of social networks and support (Bruening et al., 2016).

Note that perceived social support can promote healthy behaviors. Given the fact that cultural differences also impact dietary intake, interventions must focus on traditional foods, family support, and religious beliefs.

Dietary Intake Assessments: Patient-reported Measures and Biomarkers

Given the active role of patients in health care decision-making, patient-reported outcomes and ecological momentary assessments, in particular, are becoming more and more popular in practice. Subjective measures provide vital information known only to the patients, improving nutrition surveillance studies, dietary guidance, and self-awareness. When it comes to self-reported methods, two categories exist: methods of real-time recording and methods of recall (Naska et al., 2017). Real-time records consist of food diaries (with or without weighing of foods) and the duplicate portion method (which includes chemical analysis of portions). Methods of recall include dietary histories, food frequency questionnaires (FFQs), and single or multiple daily recalls (24-hour dietary recalls).

Interestingly, subjective data can be supported by the analysis of biomarkers or biological specimens, which may correlate with dietary intake, metabolism, and personal characteristics (e.g., smoking habits). Note that there are four categories of biomarkers: recovery, concentration, replacement, and predictive biomarkers. In addition, biomarkers can be categorized into short-term (measured in urine, plasma, or serum), medium-term (measured in red blood cells or adipose tissue), and long-term biomarkers (measured in hair, nails, or teeth). The list of reliable nutrition biomarkers keeps growing, improving dietary pattern analyses and food composition tables.

When it comes to subjective measures, it’s crucial to mention that food intake ability is associated with oral health-related quality of life and mental health. Choi and colleagues (2016) found that reduced subjective food intake ability due to poor oral health can lead to depression, low self-esteem, and anxiety. Participants (n=72) completed subjective food intake ability tests, the Oral Health Impact Profile-14 scale, and three questionnaires about anxiety, self-esteem, and depression. Research findings on self-reported masticatory ability are thus essential to improve both orthodontic treatment and dietary intake.

Ecological Momentary Assessment in Digital Health

With the rapid implementation of digital health technologies, electronic assessments have become reliable instruments to reduce respondent burden and reporting bias. While paper diaries were originally used in combination with pagers or watches, now personal digital assistants, wearable devices, and health applications facilitate the evaluation and monitoring of sampled experiences and eating behaviors. The use of digital platforms allows the integration of databases and pictures of foods, packaging material, and seasonal variations in intake. Electronic assessments also support data management and interoperability.

In fact, with high ecological validity and attractive design, digital dietary assessments reveal numerous benefits in the field of foods and diet patterns research:

  • Digital self-reports increase ecological validity and compliance.
  • Electronic food records facilitate real-time data capture and provide time-stamps for the information collected.
  • Digital ecological momentary assessment methods eliminate the need for laborious data input and expand interface design options.

Designing an Ecological Momentary Assessment (EMA) Food Diary

Since effective strategies to measure and improve diet and chronic diseases are a major focus of research, ecological momentary assessment studies are growing in popularity. Ecological momentary assessment can be employed as an empirically validated, structured food diary to help researchers explore behaviors, symptoms, and triggers that vary between and within individuals. Ecological momentary assessments usually consist of short questionnaires and prompts. Questionnaires can include open-ended questions, visual analog scales, or checklists to assess energy intake, food consumption, physical activity, environmental factors, emotional well-being, and social support. Interestingly, rating scales such as the visual analog scale are valid indices of temporal sensations (e.g., satiety, hunger, appetite) (Merrill et al., 2002). Most of all, evidence shows that self-assessments and daily logs improve patient outcomes related to physical activity and weight management (Burke et al., 2011).

Food diaries can help patients track aspects such as the following (see the sketch after this list):

  • Type of food and drinks (e.g., high-calorie beverages)
  • Healthy meals (e.g., high-fiber choices)
  • Cravings (including snacking during the day)
  • Time and date (with comments on eating schedules and patterns)
  • Activities (e.g., eating while working)
  • Mood (as well as social and emotional support)
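As a minimal sketch, the aspects listed above could be captured in a simple record structure such as the following. The field names and types are illustrative assumptions, not a validated instrument.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class FoodDiaryEntry:
    timestamp: datetime      # time and date of the eating event
    foods: List[str]         # type of food and drinks
    portion: str             # free-text or coded portion size
    craving: bool            # snacking or craving episode?
    activity: str            # e.g., "eating while working"
    mood: str                # momentary affect
    company: str = "alone"   # social context of the meal
    notes: str = ""          # anything else the patient reports

entry = FoodDiaryEntry(
    timestamp=datetime(2024, 3, 4, 13, 15),
    foods=["chicken salad", "soda"],
    portion="medium",
    craving=False,
    activity="eating while working",
    mood="stressed",
)
print(entry)
```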

When designing an ecological momentary assessment (EMA) food diary, researchers must decide either on a single schedule (interval-contingent, signal-contingent, event-contingent, or continuous reporting) or a combined approach. Prompts and interface designs should be attractive and non-invasive. Interestingly, dietary self-monitoring can be influenced by daily and seasonal aspects (e.g., the beginning of the intervention, the day of the week, and the month of the year). Note that Pellegrini and colleagues (2018) examined within-person differences in self-monitoring during a 6-month technology-supported weight loss trial and found that adherence declined as the study progressed, as well as on weekends.
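For the signal-contingent option, a scheduler might draw one random prompt per block of the waking window, so prompts remain unpredictable but evenly spread across the day. The window hours, prompt count, and blocking scheme below are illustrative assumptions.

```python
import random
from datetime import date, datetime, timedelta

def daily_prompts(day, start_hour=9, end_hour=21, n_prompts=4, seed=None):
    """Return n_prompts random prompt times, one per equal block of the window."""
    rng = random.Random(seed)
    window_start = datetime(day.year, day.month, day.day, start_hour)
    minutes_per_block = (end_hour - start_hour) * 60 // n_prompts
    return [
        window_start + timedelta(minutes=i * minutes_per_block + rng.randrange(minutes_per_block))
        for i in range(n_prompts)
    ]

for prompt in daily_prompts(date(2024, 3, 4), seed=42):
    print(prompt.strftime("%H:%M"))
```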

Ecological Momentary Assessment as Food Diary: Perspectives and Food Policy

Measuring diet is a challenging task. Food intake depends on various factors, such as body mass, appetite, social support, age, sex, and education. Therefore, only self-reports and food diaries can help health professionals assess dietary intake and its underlying mechanisms. Ecological momentary assessments, in particular, can be employed as effective diary techniques. Such measurements have high ecological validity, sensitivity, and accuracy. As these repeated assessments are conducted in real-life situations, the memory strains and burden to participants are reduced. Food diaries provide rich data on emotions, quality of life, social support, physical activity, and energy intake. Most of all, subjective measures empower patients, which is a leading goal in digital health practices worldwide. By becoming active participants in decision-making, patients can grasp a better understanding of their diet and eating behaviors.

After all, diet is defined as the main pathway between food environments and health outcomes, including in children. Interestingly, Campbell et al. (2018) found that factors in the family environment (e.g., eating alone or in front of the TV) influence eating behaviors in toddlers. The research team asked low-income mothers (n=277) to assess their home environment and their toddlers’ eating habits through ecological momentary assessment, and performed multilevel logistic mixed-effects regression models to analyze the data. Note that in the US alone, rising obesity rates affect 13.7 million children (aged 2-18 years). Thus, the adoption of healthy eating habits early in life is vital to prevent chronic diseases and disabilities later on.

In fact, digital ecological momentary assessment methods can benefit both interventions and policy-making. Understanding dietary intake among individuals can improve nutrition initiatives (e.g., school lunch initiatives) on local and national levels. Note that demographic factors and cultural differences influence people’s perceptions of obesity and dietary intake, so interventions should consider the integration of traditional foods and beliefs. Research findings can also improve food retail access, calorie labeling, food restrictions, and healthy choices.

References

  1. Alabduljader, K., Cliffe, M., Sartor, F., Papini, G., Cox, W., & Kubis, H. (2018). Ecological momentary assessment of food perceptions and eating behavior using a novel phone application in adults with or without obesity. Eating Behaviors, 30, p. 35-41.
  2. Bruening, M., Woerden, I., Todd, M., Brennhofer, S., Laska, M., & Dunton, G. (2016). A Mobile Ecological Momentary Assessment Tool (devilSPARC) for Nutrition and Physical Activity Behaviors in College Students: A Validation Study. JMIR.
  3. Burke, L., Wang, J., & Sevick, M. (2011). Self-Monitoring in Weight Loss: A Systematic Review of the Literature. Journal of the American Dietetic Association, 111 (1), p. 92-102.
  4. Campbell, K., Babiarz, A., Wang, Y., Tilton, N., Black, M., & Hager, E. (2018). Factors in the home environment associated with toddler diet: an ecological momentary assessment study. Public Health Nutrition, 21 (10), p. 1855-1864.
  5. Carels, R., Douglass, O., Cacciapaglia, H., & O’Brien, W. (2004). An Ecological Momentary Assessment of Relapse Crises in Dieting. Journal of Consulting and Clinical Psychology, 72 (2), p. 341-348.
  6. Choi, S., Kim, J., Cha, J., Lee, K., Yu, H., & Hwang, C. (2016). Subjective food intake ability related to oral health-related quality of life and psychological health. Journal of Oral Rehabilitation, 43 (9), p. 670-677.
  7. Kikuchi, H., Yoshiuchi, K., Inada, S., Ando, T., & Yamamoto, Y. (2015). Development of an ecological momentary assessment scale for appetite. Biopsychosocial Medicine, 9 (2).
  8. Liao, Y., Schembre, S., O’Connor, S., Belcher, B., Maher, J., Dzubur, E., & Dunton, G. (2018). An Electronic Ecological Momentary Assessment Study to Examine the Consumption of High-Fat/High-Sugar Foods, Fruits/Vegetables, and Affective States Among Women. Journal of Nutrition Education and Behavior, 50 (6), p. 626-631.
  9. Macht, M., Haupt, C., & Salewsky, A. (2004). Emotions and eating in everyday life. Application of the experience sampling method. Ecology of Food and Nutrition, 43 (4), p. 11-21.
  10. Merrill, E., Kramer, F., Cardello, A., & Schutz, H. (2002). A comparison of satiety measures. Appetite, 39 (2).
  11. Naska, A., Lagiou, A., & Lagiou, P. (2017). Dietary assessment methods in epidemiological research: current state of the art and future prospects. F1000Research.
  12. Pellegrini, C., Conroy, D., Phillips, S., Pfammatter, A., McFadden, H., & Spring, B. (2018). Daily and Seasonal Influences on Dietary Self-monitoring Using a Smartphone Application. Journal of Nutrition Education and Behavior, 50 (1), p. 56-61.

Understanding NIH R-Series Grants


The National Institutes of Health (NIH) Research Grants (R-series) constitute the largest category of funding and support at the federal level. The R-series is also the most sought-after research grant program; it provides funds to new, early-career, and experienced investigators, as well as institutes and research centers, in the form of:

  • Direct research cost
  • Salary
  • Equipment & supplies
  • Travel & other allowable costs
  • Indirect costs (facilities and administrative expenses)

Each year, the NIH grants tens of thousands of R-awards to sustain biomedical research and development (R&D) at universities and medical research institutes. However, the grants are not easy to obtain because of the hypercompetitive biomedical ecosystem.

Each year, more and more investigators seek NIH awards. In 2016 alone, the NIH received over 40,000 applications, about double the number received in 1998 (fewer than 20,000). Applications for the R01, the NIH’s flagship research project grant, soared by 97% over the same period.

For an aspiring new or early-career scientist, it is important to understand the types of R-awards so that you can apply for funds confidently. Aiming for one type of fund but applying for another will result in administrative rejection without review, as is the rule of NIH Institutes and Centers (ICs).

The R-series includes the following grants:

  • Research Project Grant (R01)
  • NIH Small Grant Program (R03)
  • Scientific Meeting Grants (R13)
  • NIH Research Enhancement Award (R15)
  • Exploratory/Developmental Research Grant Program (R21)
  • Early Career Research (ECR) Award (R21)
  • NIH Planning Grant Program (R34)
  • Small Business Technology Transfer (STTR) Grants (R41/R42)
  • Small Business Innovation Research Grant (SBIR) (R43/R44)
  • NIH High Priority, Short-Term Project Award (R56)

A detailed look at each type will help you understand its nature and composition.

1.     Research Project Grant (R01)

The R01 is the oldest and the most prestigious grant awarded to independent investigators conducting biomedical research. It is a funding vehicle used to launch and promote an independent career and can be solicited through almost all Institutes and Centers (ICs) of the NIH.

The R01 is awarded for three to five years and provides up to $500k in direct costs per year without prior approval. Requests can be made for a larger amount, but the budget application needs to reflect and justify the actual needs accurately. The R01 is a renewable fund.

For a principal investigator (PI) or biomedical institute seeking an R01 award, the research has to be health-oriented, discrete, and specific. It can be lengthy; a project may span up to five years and still be covered by the grant. The uninterrupted flow of funds gives you plenty of time to pursue and complete the project, publish findings, and start crafting a new application should you require a secondary analysis.

The R01 is particularly beneficial if you are a new or early-career investigator because it requires little or no preliminary data. Your proposal, however, has to present a sound research approach and a plausible medical solution to increase its chances of success.

As you apply for an R01 grant, it is important to note that, contrary to common belief, R01 grants are highly competitive and not easy to come by. Each year, the NIH posts one-time requests for applications (RFAs) on specific research topics with set deadlines. More information on the research cycle and due dates can be found here.

2.     NIH Small Grant Program (R03)

The R03 supports small research projects planned for a short duration of time, i.e., <2 years. These projects can be:

  • Pilot studies
  • Small, short-term & self-contained research projects
  • Feasibility studies
  • Collection & secondary analysis of preliminary data
  • Development of new research technology

The R03 grant is ideal if you have limited resources and need just a little financial backup to broaden and round off a project. The R03 budget for direct costs is up to $50,000 per year. However, note that it does not cater to doctoral students pursuing dissertation studies. It is non-renewable.

Like the R01, the R03 grant does not require preliminary data, but data can be included if you, as the PI, deem it useful. However, you need to be mindful of the fact that not all ICs provide R03 grants. The R03 Parent Funding Opportunity Announcement (FOA) can be found here.

3.     Scientific Meeting Grants (R13)

The R13 grant supports scientific meetings, societies, and conferences that are in line with the NIH healthcare agenda. Each NIH IC has a specific, individual scientific mission that is different from the other ICs’ missions. Note, though, that R13 only supports domestic organizations; foreign institutions are not eligible to apply.

If you are seeking an R13 grant, your application must contain:

  • “Permission to submit” letter from the concerned IC
  • Conference plan
  • Logistical arrangements, and
  • Detailed budget request

4.     NIH Research Enhancement Award (R15)

The R15 grant is reserved for institutes that are not major recipients of NIH grants. This award is given to undergraduates and graduates, as well as faculty, at institutes conducting small-scale biomedical and behavioral studies. It provides up to $300k for projects spanning up to three years.

The goal of R15 is to:

  • Support meritorious and quality research
  • Provide exposure to students
  • Encourage and strengthen academic and research milieu at institutes

The R15 award was updated in 2019 and now comprises:

  • Academic Research Enhancement Award (AREA) – for Undergraduate Institutions
  • Research Enhancement Award Program (REAP) – for Graduate Schools and Health Professional Schools

To check your eligibility for the grant, click here.

5.     Exploratory/Developmental Research Grant Program (R21)

As the name suggests, the Exploratory/Developmental Research Grant Program (R21) is designed to support the early stages of a clinical project. It is an investigator-initiated grant and particularly caters to:

  • Pilot studies
  • Feasibility studies

It provides up to $275k for a period of two years, which is usually sufficient for the early stages of project development. It does not require preliminary data and can be obtained through most of the ICs.

For R21 grant, the NIH requires that your proposed studies have the potential to open new realms of clinical development. The reviewers will favor your application if it:

  • Proposes a high-risk study leading to a scientific breakthrough
  • Contains novel techniques, models and methodologies
  • Impacts clinical research and behavioral and biomedical science

The policy, application procedure, and forms for R21 grants can be found here.

6.     Early Career Research (ECR) Award (R21)

The R21 lays the foundation for an independent career for ambitious researchers investigating biomedical or behavioral aspects of communication and sensory disorders (hearing, taste, speech, smell, and balance, etc.).

The R21 grant also covers:

  • Secondary analysis of existing data
  • Small and limited research projects
  • Development of research methodology and technology
  • Translational and outcomes research

The R21 grant is ideal for a PI or program director who needs funds to obtain sufficient preliminary data and ultimately develop an independent scientific/biomedical career with the help of an R01 grant.

7.     NIH Planning Grant Program (R34)

R34 supports the initial stages of a clinical trial or a research project. The grant can either cover direct costs for the project or help:

  • Establish the research team
  • Provide research oversight
  • Collect and manage data
  • Design and develop research tools
  • Prepare operational manuals

Like the R01, R03, and R21, the R34 does not require preliminary data. However, unlike the R01, it is non-renewable. The grant is worth up to $450k over a total of three years.

The grant permits early peer review of a clinical trial. It also provides support for developing essential elements of the trial and, based on the elements planned, supports the move to a full-scale trial.

To increase your chances of securing an R34, make sure the proposal is conceptually sound, rational, and groundbreaking.

8.     Small Business Technology Transfer (STTR) Grants (R41/R42)

STTR R41/42 grants aim to promote scientific, medical and technological innovation between research institutes (RIs) and small business concerns (SBCs) at phase I and phase II levels of a study or trial.

  • Phase I – comprises a feasibility study to determine the scientific basis of the proposed research. R41/42 provides about $150k for phase I, which roughly lasts a year.
  • Phase II – comprises execution and expedition of the research and development efforts described in phase I. R41/42 provides up to $1,000k ($1 million) for phase II, which lasts for two years.

R41/42 also encourages technology transfer between RIs and SBCs. The SBC may employ a single PI or multiple PIs or program directors for the study.

The grant is exclusive to U.S. businesses and can be obtained from almost all ICs. More information on R41/42 can be found here.

9.     Small Business Innovation Research Grant (SBIR) (R43/R44)

SBIR R43/44 promotes scientific innovation and research and development in for-profit, private organizations. It sponsors biomedical and scientific ideas that have the potential to benefit healthcare and medical science as a whole. The grant also helps commercialize the innovative idea or medical technology which may be in the form of:

  • Tools
  • Devices
  • Products
  • Services

The R43/44 also intends to:

  • Encourage small businesses, particularly women-owned businesses, which lack social or economic means but have the potential to participate in technological innovation.
  • Enhance private-sector participation in federal-level technological development
  • Uplift small and private businesses by stimulating scientific development

The SBIR grant is investigator-initiated: the PI proposes, on behalf of the business, the rationale, plan, procedure, and potential impact of the technology in an application addressed to the NIH or the National Institute of General Medical Sciences (NIGMS).

The SBIR program has the following phases:

  • Phase I (R43) – to establish the feasibility and scientific merit of the research proposal. The potential for commercialization of the product/device is also determined in phase I. R43 grants sum up to $150k.
  • Phase II (R44) – to execute the research efforts proposed in phase I. Phase II is conducted once phase I is granted by the NIH and continues the research or R&D efforts initiated in phase I. The Phase IIB grant is worth $1,000k ($1 million) and is renewable.

10.     NIH High Priority, Short-Term Project Award (R56)

The R56 grant funds R01 applications that are highly meritorious but whose priority scores fall just outside the funding limits of the NIH ICs. These applications may be new or competing renewals.

The R56 fund provides short-term interim support, i.e., one or two years.

The interim support is sufficient for the PI (usually an early-career investigator seeking an independent career or a seasoned investigator seeking bridge funding) to gather additional data or meet other requirements for application renewal. To qualify for an R56 grant, the application has to be of superior quality.

Understanding Career Development Grants (K Awards)

By | Grant Writing, Health Sciences Research

The National Institutes of Health (NIH) career development grants (K awards) are awarded to physician-scientists who seek professional independence and career development. The aim of the grant is to prepare a generation of medical scientists to combat the growing challenges affecting human health as a whole.

K awards, fundamentally a platform for achieving career objectives, are a consolidated group of 14 awards that lay the foundation for a productive scientific climate in the biomedical, clinical, and behavioral sciences. These awards consist of a variety of programs, such as K01, K08, and K23, each requiring the awardee to dedicate a specified percentage (typically 75%) of professional effort and time to research and mentored career development activities over the award period of up to 5 years (Lindman et al., 2015). The award may be in the form of:

  • Research-related cost
  • Salary allowance
  • Fringe benefits

Apart from facilitating research training, educational programs, and innovative studies, a K award is transformative, i.e., it provides a career trajectory to the biomedical scientist. It is critical for mentoring and career development of clinical and translational research scholars. Without it, a physician-scientist may not be able to fulfill the dream of achieving research independence.

The K awards are of several types; discussing each in detail will help clarify the nature and obtainability of each.

  • Mentored research scientist development award (K01)
  • Independent scientist award (K02)
  • Established investigator award in cancer prevention and control (K05)
  • Academic career award in cancer prevention, control, behavioral, and population sciences (K07)
  • Mentored clinical scientist research career development award (K08)
  • Mentored clinical scientist development award for institutions (K12)
  • Career enhancement award for established investigators (K18)
  • Career transition award to support individual postdoctoral fellows' transition into faculty positions (K22)
  • Mentored patient-oriented research career development award (K23)
  • Midcareer investigator award in patient-oriented research (K24)
  • Mentored quantitative research development award (K25)
  • Clinical research curriculum development (K30)
  • Career/research transition award (K99)
  • Institutional support for mentored research career development award (KL2)

1.     Mentored Research Scientist Development Award (K01)

The K01 grant is awarded to postdoctoral fellows seeking research independence under the aegis of a mentor or sponsor. This award provides support and training for a sustained period of time so that the physician-scientist can engage in intensive research, such as a clinical trial or feasibility study, essential for career development.

The basic purpose of the K01 award, however, is to pave the way for the R01 grant, i.e., the research project grant, the most prestigious NIH award given to independent scientists conducting biomedical research. The R01 is the ultimate grant leading to an independent research career.

With the help of the mentored training received via the K01 award, doctors and postdoctoral research fellows gain valuable experience and the skills required for career development.

The grant provides a “protected time” of 3-5 years for intensive, supervised research in clinical, behavioral and biomedical sciences.

Almost all ICs provide K01 grants to investigators who:

  • Require intensively mentored and supervised training to develop research independence
  • Want to resume their research career after a hiatus (e.g., due to illness)
  • Want to train in a new or different field
  • Ultimately wish to obtain an R01 award

The grant provides:

  • Salary Fund: $75,000
  • Research Fund: $30,000

2.     Independent Scientist Awards (K02)

The K02 grant is awarded to scientists who want to take a hiatus from teaching and become full-time researchers. This award allows the awardee to dedicate uninterrupted time and concentration to research and investigation. In a way, a K02 award gives a physician “off time” from academic responsibilities or administrative duties and facilitates becoming a researcher with an advancing career.

The release from clinical duties allows the researcher to venture into other research sites, develop collaborations with other investigators/laboratories and learn new techniques.

The K02 fund is provided in the form of salary and research support. To secure the award, an applicant should have a track record of successfully obtaining the R01 grant. The purpose of the K02 award is to help a researcher move from one R01 grant to the multiple R01 grants required for full-time research. Unlike K01, the K02 grant does not require mentorship.

The grant provides:

  • Salary Fund: $100,000
  • Research Fund: $8,000

3.     Established Investigator Award in Cancer Prevention and Control (K05)

The K05 grant is given to an established investigator, preferably a professor, who wishes to:

  • Conduct independent research in cancer prevention, control, and treatment, or
  • Mentor junior investigators conducting the above-mentioned research

The applicant, at the time of the application, must be the principal investigator (PI) of an R01 or R01-like grant focused on cancer research. The award duration is 3-5 years.

The grant provides:

  • Salary Fund: up to $50,000
  • Research Fund: up to $25,000

4.     Academic Career Award in Cancer Prevention, Control, Behavioral, and Population Sciences (K07)

The K07 grant provides a unique opportunity to junior physicians who want to become cancer-focused scientists or investigators. This award focuses on cancer prevention, cure, and control. The purpose is to maintain and expand a diverse pool of ambitious and talented scientists, with academic and research expertise, to address a disease that affects approximately 13 million people every year (Jemal et al., 2011).

The award provides salary compensation and mentored research support for a protected time of up to 5 years.

  • Salary Fund: up to $100,000
  • Research Fund: up to $30,000

5.     Mentored Clinical Scientist Research Career Development Award (K08)

Like the K01 grant, K08 is a career development award that prepares capable doctoral fellows for independent research careers. This award allows qualified individuals to explore the biomedical, behavioral, and clinical sciences and improve the nation's overall health.

Like K01, the K08 grant provides mentored supervision and protected time (3-5 years) for intensive, dedicated clinical research. It provides:

  • Salary Fund: up to $100,000
  • Research Fund: up to $50,000

6.     Mentored Clinical Scientist Development Award for Institutions (K12)

The purpose of the K12 grant is to support early-career clinical scientists, such as physicians, pharmacists, epidemiologists, clinical psychologists, and behavioral scientists, who are committed to venturing into independent research careers and will ultimately seek other support mechanisms, e.g., K08 and K23, for further research.

K12 facilitates intensively supervised research training in the field of substance use and associated disorders for 3-5 years.

The grant provides:

  • Salary Fund: up to $100,000
  • Research Fund: up to $30,000

7.     Career Enhancement Awards for Established Investigators (K18)

K18 helps experienced scientists acquire new research skills to initiate or redirect their already-established research programs toward deafness and communication disorders. K18 is a mentored research career development award and lasts for 6 months to 2 years.

The grant provides:

  • Salary Fund: up to $100,000
  • Research Fund: up to $50,000

8.     Career Transition Awards (K22)

The K22 grant facilitates the transition of physician-scientists, who may be intramural or extramural postdoctoral fellows, to independent researchers. It is a two-phase award where:

  • Phase I: includes mentored research and technical support for up to 18 months
  • Phase II: includes independent research at an extramural institution. The NIH ICs provide up to 3 years of support for Phase II

Intramural awardees receiving the K22 grant continue their biomedical careers in the extramural community or secure academic faculty positions at U.S. institutions.

The grant provides:

  • Salary Fund: up to $100,000
  • Research Fund: up to $50,000

9.     Mentored Patient‐Oriented Research Career Development Awards (K23)

K23 is awarded to physician-scientists and postdoctoral fellows who seek career development through supervised patient-oriented research and studies. The grant support ultimately leads to research independence in translational, behavioral and biomedical fields.

The K23 grant is provided, for a duration of 3-5 years, in the form of:

  • Salary Fund: up to $100,000
  • Research Fund: up to $50,000

10.     Midcareer Investigator Award in Patient-Oriented Research (K24)

K24 is a patient-oriented award that promotes biomedical, clinical, behavioral, and translational research on human subjects. It aims to create a richly experienced and diverse pool of researchers in patient-oriented research, particularly on aging. Clinical researchers at the level of associate professor are usually eligible for the K24 award. The awardee is required to recruit and mentor like-minded researchers to conduct patient-oriented studies and translate research findings.

The grant is renewable and provides:

  • Salary Fund: up to the maximum allowable salary
  • Research Fund: up to $50,000

11.     Mentored Quantitative Research Development Award (K25)

The K25 award is a special grant reserved for postdoctoral fellows whose research background or degree is nonmedical (engineering, quantitative science, etc.) but who want to venture into the biomedical and clinical field. The purpose is to attract more qualified and experienced researchers to NIH-relevant research.

The grant, issued for 3-5 years, provides:

  • Salary Fund: up to $90,000
  • Research Fund: up to $40,000

12.     Clinical Research Curriculum Development (K30)

K30 aims to build didactic and ethical training into the career development of research scientists. It also bridges the gap between researcher and patient by providing supervised training to the former.

The grant provides:

  • Salary Fund: $275,000
  • Research Fund: $50,000

13.     Career/Research Transition Award (K99)

The K99 award facilitates an early and timely transition from a postdoctoral position to an independent research position. K99, lasting 1-2 years, is a two-stage award: the K99 phase supports the last stages of postdoctoral training and is coupled with the R00 phase, which supports up to 3 years of independent research.

The grant provides:

  • Salary Fund: $75,000 & fringe benefits
  • Research Fund: $25,000

14.     Institutional Support for Mentored Research Career Development Awards (KL2)

KL2 selects faculty-level clinicians for successful career development in clinical and translational research.

The grant, lasting up to 3 years, provides:

  • Salary Fund: $85,000
  • Research Fund: $25,000

Brief Summary of K Awards

Here is a brief summary of the type of K award you should apply for (a simple, illustrative decision-helper sketch follows the list):

  • K01: If you are an M.D. or a Ph.D. seeking research independence.
  • K02: If you are an academic or a clinician and want to take a break to focus on a research career.
  • K05: If you are an established investigator planning to conduct or mentor cancer-focused research.
  • K07: If you are a junior investigator wishing to develop a career focused on cancer research.
  • K08: If you are a physician or postdoctoral fellow planning to conduct basic research.
  • K12: If you are a clinician, epidemiologist, pharmacist, or psychologist and wish to conduct research on substance use.
  • K18: If you are a seasoned investigator and want to redirect your research toward answering questions about communication disorders involving speech, language, voice, smell, and balance.
  • K22: If you are a postdoctoral fellow seeking research independence through mentored research.
  • K23: If you are an M.D. or a Ph.D. conducting supervised research on human subjects and ultimately seeking research independence.
  • K24: If you are an associate professor with a history of mentored research who aims to conduct patient-oriented research on aging.
  • K25: If you are transitioning from a nonmedical Ph.D. (such as chemistry or engineering) to biomedical research.
  • K30: If you are an M.D. seeking didactic training to diversify your scientific background.
  • K99/R00: If you are an outstanding postdoctoral fellow aiming to shorten the path to research independence.
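
For readers who prefer to see the decision logic at a glance, here is a minimal, illustrative Python sketch of the summary above. It is not an official eligibility tool; the profile strings and mappings are simplifications of the bullet points in this list, not NIH criteria.

```python
# Illustrative only: a toy lookup mirroring the summary list above.
# Profiles and mappings are simplified from this article, not from
# official NIH eligibility rules.

K_AWARD_GUIDE = {
    "M.D./Ph.D. seeking research independence": "K01",
    "academic or clinician taking a break to focus on research": "K02",
    "established investigator in cancer prevention and control": "K05",
    "physician or postdoc planning basic research": "K08",
    "associate professor doing patient-oriented research on aging": "K24",
    "nonmedical Ph.D. moving into biomedical research": "K25",
    "outstanding postdoc shortening the path to independence": "K99/R00",
    # ... remaining profiles from the list above omitted for brevity
}

def suggest_k_award(profile: str) -> str:
    """Return the K award this article suggests for a given profile."""
    return K_AWARD_GUIDE.get(profile, "no match in this summary")

print(suggest_k_award("nonmedical Ph.D. moving into biomedical research"))  # K25
```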

More information on Career Development K-awards can be found here.

References

  1. Balke, W., Carlson, D.E., Jackson, E.A., Lindman, B.R., Madhur, M.S., … Tong, C.W. (2015, October 20). National Institutes of Health Career Development Awards for Cardiovascular Physician-Scientists: Recent Trends and Strategies for Success. J Am Coll Cardiol, 66(16), 1816–1827. http://dx.doi.org/10.1016/j.jacc.2015.08.858
  2. Bray, F., Center, M.M., Ferlay, J., Forman, D., Jemal, A., & Ward, E. (2011, April 4). Global cancer statistics. CA Cancer J Clin, 61(2), 69-90. http://dx.doi.org/10.3322/caac.20107

Types of NIH Grants & How to Know Which One is the Right Fit

By | Grant Writing, Health Sciences Research

Before you start writing a grant application, it is important to understand the types of funding agencies, such as the National Institutes of Health (NIH), the National Science Foundation (NSF), and the Centers for Disease Control and Prevention (CDC). Each funding agency has its own approach, goals, and priorities; it is important to understand their missions because funds are research-specific, and what works for another researcher may not be the right fit for your project.

The NIH, for instance, has a mission to support exploratory and translational research encompassing health, medicine, and other life sciences in order to find cures, reduce disease burden, and lengthen life.

Understanding the mission of funders will help you craft the most accurate and specific grant application. This knowledge allows you to draft and view your work through the lens of the reviewers. It also increases your chances of securing the grant and furthering your research, because most research institutions are financially stressed and incapable of giving your career its early thrust.

Gathering detailed information about the types of NIH grants is all the more important for an early-career investigator because the NIH is the largest federal funding agency, dedicating millions of dollars to health research every year (Hendriks and Viergever, 2016). It is the most sought-after funding agency in the United States (U.S.) because it:

  • Surpasses all other grant institutes,
  • Supports a myriad of scientific experimentations, and
  • Provides the most befitting professional portfolio to medical researchers.

However, the NIH award success rate is low, which means your project and application have to excel in all aspects to seal the deal. Each year, thousands of medical researchers submit their applications to the NIH, but only a few succeed. In 2010, the NIH received 2097 new proposals, of which only 355 managed to receive funding (McGovern, 2012).

Nonetheless, there are numerous grant opportunities at the NIH. It is tempting to chase the one that offers a large sum or has a convenient deadline, but submitting the wrong application can be a lethal career mistake. This is called “overreaching,” and it comes across as foolish and dishonest to the reviewer.

As a researcher, you need to understand the types of NIH grants to know the category your project falls into. This article demystifies the types of NIH grants and ways to obtain them.

Demystifying NIH Grants

The NIH is made up of 27 institutes and centers (ICs), of which 24 can award grants. Each year, the NIH announces grant opportunities for new and seasoned investigators at more than 25,000 universities, medical schools, and research centers all over the world. These include awards of multiple sizes: some small, to encourage novice investigators; others large, to cover collaborative efforts. With advance knowledge and a thorough search, you can identify the funding opportunity specific to your project.

The NIH has categorized grant types into series:

  • Research Grants (R series)
  • Career Development Awards (K series)
  • Research Training and Fellowships (T & F series)
  • Program Project/Center Grants (P series)
  • Resource Grants (various series)
  • Trans-NIH Programs
  • Inactive Programs (Archive)

Each series, represented by an activity code, has a theme. The activity codes differentiate between the grant types; for instance, the R series denotes research project grants, while the K series is intended for researchers seeking independent careers. Both R and K grants support independent investigators.
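
As a rough illustration of this naming convention, the sketch below (in Python, and deliberately simplified; real NIH activity codes carry more nuance than a one-letter prefix) treats the series as a prefix lookup:

```python
# Simplified sketch: map an NIH activity code (e.g., "R01", "K23", "F32")
# to the grant series described in this article. Prefix-based and
# illustrative only; it does not capture every NIH activity code.

SERIES_BY_PREFIX = {
    "R": "Research Grants (R series)",
    "K": "Career Development Awards (K series)",
    "T": "Research Training Grants (T series)",
    "F": "Fellowships (F series)",
    "P": "Program Project/Center Grants (P series)",
}

def grant_series(activity_code: str) -> str:
    """Return the series theme for an activity code like 'R01' or 'K99'."""
    prefix = activity_code.strip().upper()[:1]
    return SERIES_BY_PREFIX.get(prefix, "Other (resource or Trans-NIH program)")

print(grant_series("R01"))  # Research Grants (R series)
print(grant_series("K99"))  # Career Development Awards (K series)
```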

Let us have a detailed look at each grant type.

1. Research Grants (R Series)

Research Grants fund independent health investigators and professional institutes. The funds can take the form of direct research costs, sponsorships, salaries, or equipment and supplies. The R series is the largest category of NIH funding; it includes:

  • Research Project Grant (R01): This is an investigator-initiated traditional grant that is awarded to organizations on behalf of the principal investigator (PI). R01 is the oldest grant and is funded by all NIH ICs. It is renewable and has an open budget for each project.
  • NIH Small Grant Program (R03): a small grant for small-budgeted research projects (pilot studies, secondary analysis, research methodology, etc.) with a short duration (up to two years). It is non-renewable and does not require preliminary data.
  • Scientific Meeting Grants (R13): support national or international scientific conferences, workshops, and meetings. Applications must be submitted via Grants.gov.
  • NIH Research Enhancement Award (R15): is dedicated to small-scale projects conducted at educational institutes that are not major recipients of NIH grants. The purpose of this grant is to propel students of these institutes toward mainstream research, as well as to expose scientists and their meritorious studies nationally and internationally. The grant duration is ≤36 months.
  • Exploratory/Developmental Research Grant Program (R21): is dedicated to either new studies and projects or extensions of earlier discoveries, particularly high-risk studies that may lead to medical breakthroughs. The fund is non-renewable and does not require preliminary data.
  • Early Career Research (ECR) Award (R21): is a fund for scientists seeking an independent career with research focused on one of the scientific missions of The National Institute on Deafness and Other Communication Disorders (NIDCD), i.e., speech, hearing, taste, smell, etc. The research could be a small, self-contained new project or secondary analysis of existing data.
  • NIH Planning Grant Program (R34): This grant supports the initial stages of a clinical trial and comes in handy for establishing the research team and developing tools for data management, trial design, etc. The fund is non-renewable and is available for projects limited to three years only.
  • Small Business Technology Transfer (STTR) Grants (R41/R42): are awarded to non-profit research centers and small businesses collaborating on research and development (R&D) projects for commercial health products and services.
  • Small Business Innovation Research Grant (SBIR) (R43/R44): serves to encourage small businesses to participate in technological innovation and supplement federal R&D needs. It also serves to increase private-sector commercialization and assimilate it with federal R&D to establish technical merit. The grant provides monetary support for 6 months to 1 year.
  • NIH High Priority, Short-Term Project Award (R56): is reserved for new or competing renewal R01 applications. This high-priority award is given to applications with strong priority scores that fall just outside the funding limits of the NIH ICs. This limited, short-term award helps the PI gather additional data in order to complete or revise the current application. The fund lasts for 1-2 years.

2. Career Development Awards (K series)

Career Development Awards (K series) support senior postdoctoral or faculty-level scientists. The purpose is to pave the way for scientists to conduct independent research. The funds are exclusive to U.S. nationals and permanent residents. The K series includes:

  • Mentored Research Scientist Career Development Award (K01): a supervised career-development fund awarded to scientists who apply for training in a new field or are resuming their research after a break.
  • Independent Research Scientist Development Award (K02): is reserved for outstanding scientists in need of funding to expand or deepen their research project that bears a potential to contribute to science and medicine.
  • Senior Research Scientist Award (K05): is for independent and established scientists with peer-reviewed research who want to devote efforts to research as well as to mentor novice investigators.
  • Academic Career Development Award (K07): offers development awards for junior scientists and leadership awards for senior scientists. It provides support to both the researcher and the sponsoring institute.
  • Mentored Clinical Scientist Research Career Development Award (K08): supports researchers working on clinical, behavioral and biomedical projects that are expected to bring significant improvement for national healthcare.
  • Clinical Scientist Institutional Career Development Program Award (K12): is awarded to institutes that run a career development plan to facilitate the transition of institution-dependent scientists to an independent career.
  • Research Career Enhancement Award for Established Investigators (K18): is for seasoned scientists who seek fresh and updated research skills to further their already-established careers.
  • Career Transition Award (K22): supports investigators in becoming independent researchers.
  • Mentored Patient-Oriented Research Career Development Award (K23): for researchers conducting patient-oriented studies.
  • Midcareer Investigator Award in Patient-Oriented Research (K24): is for associate professors dedicated to patient-oriented research as well as mentoring clinical residents.
  • Mentored Quantitative Research Career Development Award (K25): is to attract researchers to medicine and science who have thus far not focused on health and disease.
  • Midcareer Investigator Award in Biomedical and Behavioral Research (K26): is for biomedical and behavioral scientists.
  • Emerging Global Leader Award (K43): is for the junior-faculty scientist from a low- or middle-income country (LMIC) who is seeking research support.
  • Emerging Leaders Career Development Award (K76): is for a cohort of scientists working on a project destined to improve healthcare.
  • Pathway to Independence Award (K99/R00): aims to help the cohort of researchers move forward from mentored research position to independent faculty positions and launch competitive careers.

3. Research Training and Fellowships (T & F series)

F awards are individual fellowship awards for undergraduate, graduate, and postdoctoral scientists seeking institutional research training opportunities. The F series includes:

  • International Research Fellowships (F05): for foreign postdoctoral investigators.
  • Ruth L. Kirschstein Individual Predoctoral NRSA for MD/Ph.D. and Other Dual Degree Fellowships (F30): for predoctoral students pursuing dual doctoral degrees who seek a physician-scientist or clinician-scientist career.
  • Ruth L. Kirschstein Predoctoral Individual National Research Service Award (F31): provides mentored research training to promising predoctoral students.
  • Ruth L. Kirschstein Postdoctoral Individual National Research Service Award (F32): for aspiring and promising postdoctoral candidates.
  • Ruth L. Kirschstein National Research Service Awards for Senior Fellows (F33): for veteran researchers to redirect their careers or broaden their scientific backgrounds.
  • Individual Predoctoral to Postdoctoral Fellow Transition Award (F99/K00): facilitates the transition of brilliant graduate students into successful postdoctoral candidates.

T grants, or Training Grants, provide institutional support to groom predoctoral and postdoctoral research candidates. The purpose of these grants is to develop and enhance research at institutes. They provide trainees an opportunity to join forces with a research team and gain experience and expertise before venturing into an independent career. Normally, senior candidates at research institutes apply for these grants.

4. Program Project/Center Grants (P series)

These are large, multi-project grants covering a spectrum of research activities, such as collaborative efforts and multi-institutional research projects investigating a disease entity or biomedical tools. Through the P grant series, the NIH ICs support broadly based studies involving numerous independent investigators who share a common research theme and work toward a well-defined goal. P grants, together with the R series, are the most frequently used NIH awards.

P-series includes:

  • (P01) Research Program Projects: for multi-project research plans involving many researchers
  • (P20) Exploratory Grant: for exploratory research investigating new clinical paradigms
  • (P30) Center Core Grants: support shared research projects by investigators from different disciplines
  • (P50) Specialized Center: provides monetary and supportive services to any part of the full-range R&D project

5. Resource Grants (various series)

Resource grants provide research-related support to investigators or institutes. These are used frequently and include:

  • Resource-Related Research Projects (R24): to enhance research infrastructure
  • Education Projects (R25): for biomedical research education programs
  • Resource Access Program (X01): invites institutes to seek access to NIH resources

6. Trans-NIH Programs

These are broad-reaching research grants that support various clinical and biomedical studies, such as neuroscience research (Blueprint), stem cell information (Stem Cells), social sciences (OppNet), and countermeasures against chemical threats (CounterACT).

7. Inactive Programs (Archive)

NIH has inactive grant programs that serve to provide information and background only. These include:

  • Clinical Research Curriculum Award (CRCA) (K30)
  • First Independent Research Support and Transition (FIRST) (R29)
  • Short-Term Courses in Research Ethics (T15)

The funding schedule of all grants, including application due dates, project start, and end dates, and cycles, can be found here.

References

  1. Hendriks, T.C., & Viergever, R.F. (2016, February 18). The 10 largest public and philanthropic funders of health research in the world: what they fund and how they distribute their funds. Health Res Policy Syst, 14, 12. http://dx.doi.org/10.1186/s12961-015-0074-z
  2. McGovern, V. (2012, January 1). Getting grants. Virulence, 3(1), 1–11. http://dx.doi.org/10.4161/viru.3.1.18844

Rating Scales: A Complete Guide

By | Health Sciences Research, Real World Evidence

Rating Scales and Digital Health: Introduction

With the increasing role of patient-reported outcome measures in today’s digital health industry, rating scales are among the most effective tools used to assess subjective experiences, such as pain, mood, appetite, and comfort with knowledge. Rating scales facilitate the collection of qualitative and quantitative data in research, which can be used to track the progression of a condition or response to treatment. In fact, subjective scales improve the diagnostic process in practice, as well as data interoperability.

Thus, rating scales are fundamental to data collection and to patients’ health-related quality of life. Some of the most popular and reliable rating scales include the visual analog scale, the verbal rating scale, the faces rating scale, and the numeric rating scale. Moreover, the use of electronic rating scales (delivered via a web-based platform, mobile devices, or interactive voice response technology) results in high compliance and a positive user experience.

Rating Scales: Types, Benefits, and Applications

Rating scales are valuable tools in digital health and routine clinical care, used to assess a variety of subjective phenomena, such as fatigue, user satisfaction, asthma, and even provider performance. Interestingly, such measurements are most often applied to pain assessment and pain management. Common rating scales with good psychometric properties and a wide range of applications include the visual analog scale, the verbal rating scale, the faces rating scale, and the numeric rating scale, as well as their electronic versions (a brief illustrative sketch of these response formats follows the list):

  • Visual analog scales: The visual analog scale is one of the most popular tools, allowing patients to assess various internal events with high precision. The scale can be utilized to assess asthma, user satisfaction, cancer pain, headaches, labor pain, fatigue, appetite, and health-related quality of life. The visual analog scale is simple to administer, and it can identify cut-off scores for patients with clinically significant symptoms (Safikhani et al., 2018). Note that mechanical, pen-and-paper, and electronic formats of the instrument exist. Patients express preferences for this scale because it doesn’t restrict their ratings to a fixed number of categories. For instance, when applied to pain assessment, visual analog scores are marked on a 10-cm line (either vertical or horizontal) that stretches between “no pain” and “worst pain” (Delgado et al., 2018).
  • Verbal rating scales: The verbal rating scale is a valid tool to assess abstract concepts, such as patients’ perceptions and provider performance. Furthermore, verbal scales are highly beneficial to assess pain experiences. Data shows that patients who reported higher verbal rating scores were more likely to receive pain treatment in a timely and effective manner. Note that such verbal rating scales are also known as verbal pain scores and verbal descriptor scales. Verbal rating scales consist of a number of descriptors or statements designed to describe pain intensity and quality (Karcioglu et al., 2018). One of the most popular sets of descriptors includes the following rankings: “None,” “mild,” “moderate,” “severe,” “very severe,” and “not at all.” Because participants have to read the descriptors to rate their pain, verbal rating scales reveal high compliance. Consequently, these rankings have wide applications, including across geriatric populations with high levels of cognitive impairment.
  • Faces rating scales: The faces rating scales are another well-accepted category of self-report tools applicable to different settings, such as user experience, mood, and patient satisfaction with the hospital stay. Faces scales are defined as graphical tools that employ pictures or photographs of facial expressions to help patients rate their subjective experiences (Safikhani et al., 2018). Because of their attractive interface, graphical tools are most often used in pain assessment. The faces pain scale facilitates pain assessment and treatment in patients with low verbal skills and pediatric populations. In fact, the Faces Pain Scale-Revised (FPS-R) is one of the most valuable instruments designed specifically for children. Note that another popular pictorial scale is the Pain Thermometer scale.
  • Numeric rating scales: The numeric rating scale is among the most popular rating scales, revealing good psychometric properties and high compliance. The scale typically uses an 11-point format (0-10) to help patients assess events, such as response to treatment, hedonic qualities of a product, and migraine. In fact, research shows that the scale has high discriminative power for cancer and chronic pain. The 11-point numeric rating scale requires minimal language skills and translation, which makes it a popular tool across cultures (Hjermstad et al., 2011). Additionally, numeric assessments can consist of 21 points (0-20) or 101 points (0-100). The scale can benefit patients with limited English proficiency, and it can erase discrepancies in pain management based on ethnicity and gender.
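
As flagged before the list, here is a minimal Python sketch of the four response formats as simple validators. The ranges and descriptors follow this article; the FPS-R scoring convention (six faces scored 0, 2, 4, 6, 8, 10) is an added assumption, and none of this is a validated clinical implementation.

```python
# Illustrative sketch of the response formats described above.
# Ranges/descriptors follow this article; not a clinical tool.

VRS_DESCRIPTORS = ["none", "mild", "moderate", "severe", "very severe"]

def validate_vas(mm: float) -> float:
    """Visual analog scale: a mark on a 10-cm (100-mm) line."""
    if not 0 <= mm <= 100:
        raise ValueError("VAS score must fall on the 0-100 mm line")
    return mm

def validate_nrs(score: int, points: int = 11) -> int:
    """Numeric rating scale: 11-point (0-10), 21-point (0-20), or 101-point (0-100)."""
    if points not in (11, 21, 101):
        raise ValueError("NRS variants described here use 11, 21, or 101 points")
    if not 0 <= score <= points - 1:
        raise ValueError(f"NRS score must be between 0 and {points - 1}")
    return score

def validate_vrs(descriptor: str) -> str:
    """Verbal rating scale: one descriptor from an ordered list."""
    if descriptor.lower() not in VRS_DESCRIPTORS:
        raise ValueError(f"Descriptor must be one of {VRS_DESCRIPTORS}")
    return descriptor.lower()

def validate_fps_r(face_index: int) -> int:
    """Faces Pain Scale-Revised: pick one of six faces, conventionally
    scored 0, 2, 4, 6, 8, 10 (an assumption, not stated in this article)."""
    if face_index not in range(6):
        raise ValueError("FPS-R presents six faces, indexed 0-5")
    return face_index * 2

print(validate_vas(48))          # 48 mm toward "worst pain"
print(validate_nrs(4))           # 4 on the 0-10 scale
print(validate_vrs("moderate"))  # "moderate"
print(validate_fps_r(2))         # face 2 scores 4
```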

Comparison of Rating Scales in Digital Health Care

Rating scales, such as the verbal rating scale, the numeric rating scale, and the visual analog scale, reveal good psychometric properties and meet fundamental regulatory requirements for pain assessment. As rating scales cannot be replaced or used interchangeably, choosing the right measurement can be challenging. The selection of a tool depends on study conditions, such as demographic factors, methods of administration, instructions, specific circumstances, and interpretation of clinical significance (Williamson and Hoggart, 2005). That said, although clear comparisons between rating scales cannot be made, evidence suggests:

  • Research in the field of pain assessment shows that the numeric rating scale correlates well with other pain instruments. The scale shows good sensitivity and provides data which can be used for audit purposes. The verbal numeric rating scale, in particular, reveals high convergent validity, known-groups validity, responsivity, and reliability in children (6-17 years) (Tsze et al., 2018).
  • Evidence shows that an 11-point numeric scale for assessing migraine is 55% more sensitive than a 4-point verbal rating scale (Kwong and Pathak, 2007). Yet, there’s a strong correlation between verbal rating scales and numeric rating scales, as well as an agreement regarding pain reduction in patients.
  • When it comes to vulnerable subjects, recent studies and review articles claim that the 11-point numeric rating scale is perhaps the optimal response scale to evaluate pain among adult patients without cognitive impairment (Safikhani et al., 2018). On the other hand, the faces rating scale-revised is a preferred scale in cognitively impaired individuals and children. As explained above, pictorial scales and emoji-based tools are highly beneficial in children.
  • Demographic factors influence the understanding and interpretation of ratings. Cultural differences, for instance, affect the interpretation of verbal rating scales and descriptors. The orientation of the visual analog scale, on the other hand, influences the statistical distribution of the data. Interestingly, evidence shows that the reading tradition of the population affects visual analog ratings (Williamson and Hoggart, 2005).
  • Hjermstad and colleagues conducted a systematic review and found that, in 15 of 19 studies, the numeric rating scale showed better compliance than the visual analog scale and the verbal rating scale (Hjermstad et al., 2011). Furthermore, to increase compliance, digital rating scales are highly recommended.

While research shows that all rating scales work well, the interpretation of patients’ scores is crucial. For instance, using raw scores to assess pain reduction may lead to error: a change from 51 to 48 mm on a 100-mm visual analog scale is a change of 6%, while on a 0-10 numeric scale this can be represented as a change from 5 to 4, which is a change of 20%. Therefore, multidimensional assessments are also needed to evaluate a patient’s subjective experience with all its psychological, social, and financial aspects.
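
To make the arithmetic behind this pitfall explicit, here is a minimal Python sketch using the example values from the paragraph above (percentage change computed relative to the initial score):

```python
def relative_change(before: float, after: float) -> float:
    """Percentage change relative to the initial score."""
    return (before - after) / before * 100

# The same clinical improvement on two scales yields very different percentages:
print(round(relative_change(51, 48)))  # 6  -> ~6% change on a 100-mm VAS
print(round(relative_change(5, 4)))    # 20 -> 20% change on a 0-10 NRS
```

The discrepancy comes from the denominator: the coarser 0-10 scale rounds both scores, so a small absolute change looks proportionally much larger.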

Rating Scales and Pain Assessment: The Key to Effective Pain Treatment

With their numerous applications in practice, rating scales are among the most popular subjective measures employed in pain assessment. Note that a comprehensive pain assessment includes both the unidimensional evaluation of pain intensity and the multidimensional assessment of a patient’s pain perception. Rating scales, in particular, provide a quick and reliable way to assess unidimensional pain intensity for a given area of pain or specific circumstance (e.g., hip pain when sitting). As pain is a complex and internal phenomenon, the vast majority of pain experiences cannot be detected by standard observations or laboratory tests. Thus, only self-reports can help experts understand, evaluate, and treat both acute and chronic pain.

Given the multidimensional aspect of pain, each rating scale has various benefits in pain assessment, which is the main key to effective treatment. Acute pain, for instance, can be caused by an injury or disease and serves a biological purpose. Evidence shows that, to be reduced effectively, acute pain should be documented within the first 20-25 minutes of the initial assessment in the emergency department. Note that treatment should tackle the underlying cause of the patient’s sensory experience (Karcioglu et al., 2018). Chronic pain, on the other hand, is defined as pain lasting more than 12 weeks with no recognizable end-point. As chronic pain is perceived as a disease in itself, treatment must embrace a multidisciplinary approach and multidimensional measures. Rating scales facilitate both the evaluation of pain and its comprehensive assessment.

Rating scales are popular instruments in pain assessment, with verbal rating scales, visual analog scales, and numeric rating scales being valid and reliable tools in health care. Subjective measures can help experts evaluate factors, such as location and nature of pain. Note that pain is a complex experience and may be influenced by demographic factors and social pressure. In fact, beliefs, cultural differences, and expectations have an impact on patients’ experiences. Fear and anxiety, for instance, may lead to an increase in pain intensity. When assessing pain, there are several major factors to consider:

  • Pain intensity and severity (e.g., moderate pain): Pain is a subjective experience, and as such, patient reports are the most valuable tools to obtain a complete understanding of patients’ sensory experiences. Note that pain intensity may be influenced by the meaning of the pain, previous experience, and expected duration. To assess pain intensity and severity, rating scales can be applied across different settings and populations, including pediatric populations.
  • Pain duration: While acute pain may be severe, chronic pain is often considered a disease itself. As chronic pain may lead to numerous social and emotional problems, a multidisciplinary approach is needed. The McGill Pain Questionnaire, for instance, is one of the most powerful tests used in research and practice.
  • Pain behavior (e.g., facial expressions): Although pain is a subjective experience, pain can lead to various behavioral and social changes. Note that nonverbal rating scales can be highly beneficial in vulnerable and nonverbal patients (e.g., intubated patients). Behavioral scales can assess factors, such as movement of limbs, physiological signs, and facial expressions.
  • Pain quality: Pain is a complex phenomenon which consists of a wide range of qualities. Research shows that there are six main categories of pain quality: numbness, pulling pain, sharp pain, pulsing pain, dull pain, and affective pain. The effective evaluation of pain quality can improve pain treatment, with the sole purpose of increasing patients’ health-related quality of life.
  • Pain location: Pain location can also improve pain treatment. Note that pain drawings are valuable tools in pain assessment. They can be used to differentiate between organic and functional pain and reach a correct diagnosis in the presence of comorbidity. Interestingly, research shows that people who report multiple areas of pain reveal a high psychological factor in their sensory experiences, which requires complex pain management.
  • Affective qualities of pain: Pain experience is a mixture of psychological, cultural, social, and physiological factors. Thus, the affective qualities of pain influence pain assessment and treatment. As explained above, personal, cultural, and demographic differences may influence pain ratings. That said, research shows there are discrepancies in pain treatment based on race and gender. As poorly managed pain has a detrimental effect on one’s quality of life, health care providers must implement rating scales and reassessments as a key element in the effective treatment of both acute and chronic pain.

Rating Scales and Digital Health: Conclusion

Rating scales are popular subjective instruments used to assess abstract events and internal experiences, such as food intake, emotional arousal to daily activities, asthma, and service satisfaction. While there’s a potential for error within ratings, rating scales (such as the verbal rating scale, the numeric rating scale, the visual analog scale, and the faces rating scale) are valid and reliable tools in medical research and routine clinical care. Such measurements reveal numerous benefits and applications across a wide variety of settings and populations; particularly in pain assessment. As pain, described as the fifth vital sign, is one of the biggest health concerns worldwide, rating scales are commonly employed alongside other multifaceted patient-reported measurements.

What’s more, with the growing role of health technologies in today’s health care industry, electronic rating scales are becoming more and more popular. Digital tools improve data collection and analysis, as well as interoperability. After all, patients are active participants in today’s digital health world, with the right to voice their subjective experiences and opt for a high health-related quality of life.

References

  1. Delgado, D., Lambert, B., Boutris, N., McCulloch, P., Robbins, A., Moreno, M., & Harris, J. (2018). Validation of Digital Visual Analog Scale Pain Scoring With a Traditional Paper-based Visual Analog Scale in Adults. Journal AAOS, 2 (3).
  2. Haefeli, M., & Elfering, A. (2006). Pain assessment. European Spine Journal, 15 (1).
  3. Hjermstad, M., Fayers, P., Haugen, D., Caraceni, A., Hanks, G., Loge, J., Fainsinger, R., Aass, N., & Kaasa, S. (2011). Studies comparing Numerical Rating Scales, Verbal Rating Scales, and Visual Analogue Scales for assessment of pain intensity in adults: a systematic literature review. Journal of Pain and Symptom Management, 41 (6), p. 1073-1093.
  4. Karcioglu, O., Topacoglu, H., Dikme, O., & Dikme, O. (2018). A systematic review of the pain scales in adults: Which to use? The American Journal of Emergency Medicine, 36 (4), p. 707-714.
  5. Kwong, W., & Pathak, D. (2007). Validation of the Eleven-Point Pain Scale in the Measurement of Migraine Headache Pain.
  6. Safikhani, S., Gries, K., Trudeau, J., Reasner, D., Rudell, K., Coons, S., Bush, E., Hanlon, J., Abraham, L., & Vernon, M. (2018). Response scale selection in adult pain measures: results from a literature review. Journal of Patient-Reported Outcomes.
  7. Tsze, D., von Baeyer, C., Pahalyants, V., & Dayan, P. (2018). Validity and Reliability of the Verbal Numerical Rating Scale for Children Aged 4 to 17 Years With Acute Pain. Annals of Emergency Medicine, 71 (6), p. 691-702.
  8. Williamson, A., & Hoggart, B. (2005). Pain: a review of three commonly used pain rating scales. Journal of Clinical Nursing, 14 (7), p. 798-804.

Grant Proposal: Avoiding Common Errors

By | Grant Writing, Health Sciences Research

Over the years, obtaining research funding has become increasingly tough and competitive, given the limited grant availability and the flood of manuscripts and proposal applications. Nonetheless, it is important for an investigative scientist to secure grant support in order to establish a successful research career.

In today’s competitive research environment, however, a novel or creative idea alone will not secure funding for a research project. Most funding agencies, such as the National Institutes of Health (NIH), require researchers and scientists to submit grant proposals describing the aims, goals, data, and detailed methods of their project, as well as its potential impact on healthcare (Chung and Shauver, 2008). An investigator’s proposal, therefore, must stand out from the rest and convince the reviewers it will produce meaningful results benefiting medicinal, clinical, and biomedical research as a whole.

Writing a flawless grant application is fundamental to making an impact because certain types of deficiencies and mistakes negatively influence grant scores and can seriously hamper your odds of being funded. Evidence shows that grant reviewers do not want to struggle with poorly written grants because they are time-consuming and, simply put, annoying; the reviewer will simply move on to the next, better application (Chung, Giladi, and Hume, 2015).

Common Grant Writing Errors You Need to Be Mindful Of:

1.     Formatting Errors – When You Neglect the Formatting Guidelines of the Funding Agency

This is one of the most recurring errors. At times, a grant applicant may write the application haphazardly, ignoring the formatting, organizational, and aesthetic guidelines stipulated clearly by the funding agencies.

The NIH, for instance, has set basic rules on page margins, font sizes, and figures, which you should review before writing the proposal.

These instructions are simple and legible, yet many grant writers fail on the basics; as a result, such lapses account for a large share of application rejections. The reviewer will not even bother reading your proposal if it fails to meet the basic format rules.

Some of the most common organizational errors are:

  • Irregular Font: such as small or compressed font to save space
  • Long, Unbroken Paragraphs: with information overload and no breathing spaces
  • Illegible Figures or Tables: where illustrations are so reduced in size that it becomes difficult to review them
  • Use of Jargon: such as complicated technical terms, unexplained acronyms, vague claims, and trendy vocabulary. This is an unpardonable sin in grant writing and definitely does not impress the reviewer.

The NIH has specified the:

  • Font to be at least 12 point
  • Page margins to be at least half an inch
  • Figures to be large enough to be legible and decipherable
  • Information to be organized into several sections
  • Prose to be simple and story-like

The NIH has even set page limits and word counts for Fellowship (F-series), Individual Career (K-series), Training (T-series), and Research (R-series) awards to make things simpler and more understandable.

All in all, the application should not be an eyesore. It should look neat, organized, and aesthetic. Furthermore, it ought to have an outline, where information is presented in a logical train of thought. This will make it easier for the reviewer to navigate through the application, find the key information, and absorb the crux of the proposal.
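
As a rough illustration, a pre-submission self-check could be scripted along the lines of the sketch below (a minimal Python example; the limit values simply mirror the rules quoted above and should be replaced with the requirements in the actual NIH application guide and funding announcement you are targeting):

```python
# Minimal pre-submission format check. Limits mirror the rules quoted in
# this article (font >= 12 pt, margins >= 0.5 in); always confirm the exact
# requirements in the relevant NIH application guide before relying on them.

def check_format(font_pt: float, margin_in: float,
                 page_count: int, page_limit: int) -> list:
    """Return a list of formatting problems (an empty list means none found)."""
    problems = []
    if font_pt < 12:
        problems.append(f"Font is {font_pt} pt; use at least 12 pt")
    if margin_in < 0.5:
        problems.append(f"Margins are {margin_in} in; use at least 0.5 in")
    if page_count > page_limit:
        problems.append(f"{page_count} pages exceeds the {page_limit}-page limit")
    return problems

print(check_format(font_pt=11, margin_in=0.5, page_count=13, page_limit=12))
# ['Font is 11 pt; use at least 12 pt', '13 pages exceeds the 12-page limit']
```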

2.     Uncertainty, Vagueness & Dullness – Your Application Being Too Bland & Insipid & Lacking Clarity

A well-written grant proposal tells a bewitching tale that captivates the reader. The reviewer, being the reader, should be curious about what happens next. The “Background/Significance” section of the proposal should reveal the plot of the story, making the reviewer dig deeper to find out more. The questions you propose in the “Specific Aims” should give away parts of the story and reveal the context.

Bare the details of the story in the heart of the application: “Research Design and Methods.” By the time the reviewer reaches this section, they will be prepared for the most technical part of the proposal, appropriately and neatly structured and assembled for them.

Your application is your story; tell it interestingly.

String it like beads in a rosary so that it has a natural rhythm and cadence to it. The ultimate goal of your application is to convince the reviewer that you have what it takes to conduct fruitful and meaningful research with the potential to contribute to healthcare. You should be able to sell and market your story. Work on it.

Some of the common mistakes grant applicants make are:

  • Providing detailed and exhaustive information in the “Background & Significance” section. The reviewer is not impressed by the depth of your knowledge, but by your thinking and prioritization.
  • Arranging parts of the application irregularly. If parts of the story do not fit together to make a great tale, it loses the audience.
  • Cluttering the application with details. Too much information distracts and annoys the reader.
  • Making “Specific Aims” section unnecessarily long. The NIH states clearly this section should not be longer than a page.
  • Being too dull with the “Statement of Need.” If there is one place for earnestness and passion in the grant application, it is the Statement of Need. The reviewer wants to see how passionate you are about your research and career. Show your true colors and vigor here. If you do not sell your story convincingly here, chances are you will not be funded, no matter how articulate or factual the rest of your proposal is.

On its website, the NIH provides a clear roadmap for developing a strong, high-quality grant application that catches the reviewer’s eye. Following these form-by-form and field-by-field instructions step by step will surely increase your overall impact score.

3.     Not Defining the Problem Clearly or Talking Mainly about the Problem, Not the Solution

Despite application outlines stipulated clearly by the funding agencies, many grant seekers miss out on one thing or another and get disqualified. A little research reveals that the mistakes are usually minor and can easily be avoided in the first place by getting a peer or editorial review (Buckeridge, de Oliveira, and dos Santos, 2017).

One obvious problem is ambiguity: many applications are unfocused, ill-defined, or non-specific, with the problem not well articulated or explained. Such an application fails to provide a clear picture. This lack of coherence does not sit well with the reviewer, who is looking for a strategic, well-thought-out plan and a solution to an exasperating clinical condition.

Another problem with many applications is that applicants start educating the reviewers. This is a fallacy you need to be mindful of. The reviewers are highly educated, practicing professionals whose knowledge far exceeds yours. Your proposal should show that while you are aware of the problem, you are more focused on the solution. It is your plausible solution that will ultimately win you the award.

At times, applicants get too emotional or lengthy about the problem rather than the solution. The success of the application depends more on a clear-cut solution than on a detailed problem.

4.     Application Lacking Preliminary Data (PD)

Save for a few (such as R03 and R21), most grant applications require preliminary data. Preliminary data demonstrate the feasibility of your research and make your case strong, particularly for R01 grant seekers. They serve two purposes:

  • To support the scientific basis of a project
  • To demonstrate that the researcher possesses the capability of conducting the research

Problems arise when applicants:

  • Fail to provide a clear source of data. Applicants sometimes confuse data in the “Preliminary Data” section with data in the “Background and Significance” section. For the R series, research data should be presented in the PD section, and the data should come directly from your own laboratory. For the K and F series, the source of data should be the laboratory of the mentor.
  • Include Complicated Figures/Tables: As a general rule, the more tables and figures, the better the application. However, problems arise when the applicant inundates the application with complicated, multiple-armed figures, tables, and illustrations making multiple points. This frustrates the reviewer, who extracts most of the information from the figures. The figures and tables may be many, but they should be simple and legible.

5.     Miscellaneous Problems with Different Sections of Grant Application

The NIH has listed some of the most common grant application mistakes linked to specific parts and sections of applications. A quick look will help you avoid them altogether.

The majority of rejections occur when specific data are missing in the following sections:

  • Specific Aims: lack clarity and goals or are overambitious, unfocused, and uncertain
  • Significance: study or proposal does not offer a valuable addition to science or lacks impact
  • Investigator(s): lack expertise, a history of publications, or collaborators
  • Innovations: study or project not innovative
  • Approach: impractical approach; cluttered information; application lacking an alternative hypothesis, preliminary data, or interpretation of data
  • Environment: lack of necessary equipment, institutional support or collaborators. Institutional support is paramount to securing grants. It demonstrates that your project will be sustained and supported in the future.
  • Budget: use of incorrect modular budget form, inadequate budget plan, unjustified budget shifts

Furthermore, proofreading is absolutely necessary and should be the final step of writing the grant application. Making a proofreading checklist will help you catch even the minutest error. The list should verify:

  • Grammatical or typing mistakes
  • Whether facts support the statement of need
  • Whether the project plan syncs with your goals
  • Whether titles and headings go with the scope
  • Whether references and citations are accurate

Finally, all investigators planning or submitting their applications to the NIH or other funding agencies should go through online forums discussing causes of grant rejection. By thoroughly addressing the common mistakes, they will be able to craft an immaculate application and increase their chances of obtaining funding.

References

  1. Chung, K.C., & Shauver, M.J. (2008, April). Fundamental principles of writing a successful grant proposal. J Hand Surg Am, 33(4), 566-72.
  2. Chung, K.C., Giladi, A.M., & Hume, K.M. (2015, February). Factors Impacting Successfully Competing for Research Funding: An Analysis of Applications Submitted to The Plastic Surgery Foundation. Plast Reconstr Surg, 135(2), 429e–435e. http://dx.doi.org/10.1097/PRS.0000000000000904
  3. Buckeridge, M.S., de Oliveira, D.M., & dos Santos, W.D. (2017, February 2). Ten Simple Rules for Developing a Successful Research Proposal in Brazil. PLoS Comput Biol, 13(2), e1005289. https://doi.org/10.1371/journal.pcbi.1005289

Top 7 Tips for Writing a Successful Grant Proposal

By | Grant Writing, Health Sciences Research

Funding has never been easy to obtain. Each year, the National Institutes of Health (NIH) receives thousands of applications for competing and non-competing Research Project Grants (RPGs). The NIH is part of the United States Department of Health and Human Services and supports over 25,000 organizations worldwide. In 2017, the NIH received over 50,000 RPG applications, of which only about 640, involving 11,000 principal investigators, secured funding (Lauer, 2018).

What happened to the rest?

Rejection!

Thousands of applications do not qualify for funding due to formatting and structural errors that can easily be avoided with guidance and preparation.

Grant applications are a window of opportunity for the researcher; they present your work to the people who matter and who decide whether to invest in the project. The manner in which you present your application is absolutely crucial. There are steps a potential grant writer can take to ensure their proposal does not end up in the bin but, in fact, makes it to the top of the stack.

1.     Always Follow Instructions

Before you start typing your grant application, read the instructions – all of them. This may seem like a daunting task, since NIH grant instruction booklets comprise 200+ pages. It may be tempting to skim through them; however, it is critical for a grant application to follow specifics such as file names, font size, citations, and data limits. Your application can be returned if the format does not follow the instructions. Inouye and Fiellin (2005) reported that, of all the grants submitted to the NIH, more than 20% of the rejected applications had avoidable formatting errors for which instructions were clearly given.

The NIH provides clear guidelines on what should and should not be included in the grant proposal. They may take a while to read, but they are worth the time.

2.     Start Small but Early

Grant writing is an arduous process; writing a proposal is often harder than conducting the research itself. It is therefore imperative to start early, particularly because competition among biomedical researchers is fierce. In the last 20 years, applications to the NIH have quadrupled (Couzin & Miller, 2007). However, certain grant mechanisms give graduate students and early-career scientists an edge over senior investigators and seasoned veterans. NIH early-career awards require little or no preliminary data, which puts the young workforce, despite its limited experience, in a favorable position. Funding decisions for early-career awards depend on the potential of the candidate, which in turn depends on:

  • Education and credentials
  • Mentors, sponsors, consultants, and collaborators
  • Public health value of the proposal

While crafting a winning grant proposal, here are a few things to keep in mind:

  • Make a checklist of all the materials and documents that need to be submitted, including forms to be filled out by the institution’s research office, letters of support, etc.
  • Construct a plan and course of action, and find alternative solutions for unobtainable items and documents.

The average time for writing a grant proposal ranges from three months to one year (Inouye and Fiellin, 2005). For early-career scientists, starting early is doubly beneficial: in the absence of preliminary data, three months may be sufficient, whereas proposals that hinge on preliminary work can take longer than a year. Experts advise keeping at least a one-month gap between completing and submitting the application so that you always have time to revisit, retouch, and perfect it. A detailed, polished grant has a pleasant and professional appeal.

3.     Familiarize Yourself with the Grant Structure

To write a high-quality grant proposal, it is important to familiarize yourself with the grant’s structure, which is as follows:

  • Specific Aims: The first section of the grant is the statement of the project’s specific aims. This is the cornerstone of the application and addresses the five NIH review criteria. Here, you should state the purpose and methods of the proposed research. All specific aims should be interrelated, focused, and achievable within the proposed time and budget. If the aims are not interrelated, odds are your application will be rejected (Davidson, 2005).
  • Background & Significance: This section presents the literature search and provides an in-depth analysis of the history and current status of the field. As a researcher, you will highlight the deficiencies and discrepancies in the existing body of evidence and show how your research will fill those gaps.
  • Preliminary Studies: Here, you will include previous research and studies that are directly related to your project. This section serves to support and promote your investigation, and it is your opportunity to impress the reviewers by demonstrating your expertise in the field. The data you provide here will show that your hypothesis has merit. Market yourself here, but avoid redundant and irrelevant paragraphs. Use charts, graphs, tables, and figures to present and enhance the study.
  • Research Design & Methods: This is the most extensive section of the grant, containing approximately 50% of the content. You will describe the proposed methods and data collection here, e.g., study design, demographics, population recruitment, and the analysis plan. Use algorithms (flow diagrams) to add clarity and visual appeal to the study.
  • Limitations & Conclusions: This section may comprise only half a page but is important nonetheless. Here, you will acknowledge potential problems, confounders, and biases. Conclude the section with a definitive statement highlighting the impact of the study on public health.
  • Abstract: This is the summation of the proposal and your chance to impress the reviewers by summarizing the study concisely and eloquently. Spend sufficient time on it. This section should be comprehensive, dense, and distinct from the introduction. Remember that only a few reviewers will read the entire proposal; the rest will rely on the abstract (Bordage and Dawson, 2003). Therefore, this section has to be extremely polished and sophisticated, standing out from the rest of the grant. Seek expert help, if possible.

4.     Seek Expert Advice

The grant submission process can seem confusing and daunting to the first-timer. To tackle the confusion, every institution has administrative offices and trained personnel with a wealth of knowledge to assist applicants in filling out the forms, collecting required documents and signatures, and double-checking the proposal. They can even provide technical assistance and logistics should the research be multicenter.

Sponsored research offices receive numerous applications and have deadlines of their own. You should contact them at least a month before the submission deadline so that they have ample time to review the application in detail. Normally, they take 7–10 days to review an application and obtain institutional approval.

5.     Be Careful About Language, Organization, and Aesthetics of the Grant

Given shrinking research budgets and declining application success rates, it has become extremely important to craft an articulate, outstanding grant proposal to secure adequate funding and improve the odds of success. The application process has become highly detail-oriented, so writing a stellar proposal matters more than ever. Your proposal should be well written, immaculate, and thoroughly proofread. Organization and neatness make an impression on the reviewer: the application should be well organized and give the impression that the author conducted a thorough review before submission.

The grant should also meet the five NIH review criteria. This can be achieved by getting feedback early in the drafting process. The reviewer, preferably an experienced grant writer or an editor who specializes in the health sciences, can point out stylistic, organizational, grammatical, and formatting errors, as well as highlight gaps in the structure and flow of the content.

Furthermore, use figures, charts, and tables liberally in the application – the more, the better. As a general rule, tables and figures are relevant to every section of the grant application. They help crystallize the aims, study design, methods, and calculations. They not only demonstrate your organizational skills but also save time for reviewers, who are bombarded with applications.

Other tips to maximize your chances of scoring the award include:

  • Providing supporting documents wherever needed, i.e., cover letters, a table of contents, abstracts, etc.
  • Presenting your organization’s background and services in a clear, well-organized manner
  • Building a strong study design, with clear aims and goals, that sparks the reviewer’s interest

6.     Be Smart with Time

Old or new, grants follow basically the same structure. If you have written a grant application before, there is no need to recreate the new application from scratch. You can judiciously reuse sections in the new proposal, which saves substantial time. However, take care to ensure that the integrated paragraphs are harmonious with the content and structure of the new application. At the same time, update your literature search to include any new information. The grant should appear scholarly and up to date.

7.     Think like a Reviewer

Finally, you ought to look at your grant through the reviewer’s critical eye to identify faults and shortcomings. This will help you fix the remaining glitches in the application. Remember, reviewers are busy people with lives and careers beyond your application. Their time is of the essence, and you do not want them to work harder than they have to.

As mentioned earlier, most reviewers focus mainly on the abstract of the grant, which gives them a quick overview of the whole proposal. The first paragraph of every section should likewise serve as its abstract, and the last as its conclusion, restating the main idea.

The ability to craft an articulate technical grant is an art. The language should be elegant and appealing yet strictly focused, and the grant should follow the prescribed format. Use plenty of white space, e.g., by inserting flow charts, timelines, bullet points, randomization schemes, and preliminary data. A cascade of redundant text is tiring to the eyes, and reviewers do not like to be overwhelmed by an endless flow of text.


References

  1. Lauer, M. (2018, March 7). FY 2017 By the Numbers [Blog post]. Retrieved from https://nexus.od.nih.gov/all/2018/03/07/fy-2017-by-the-numbers/
  2. Inouye, S.K., & Fiellin, D.A. (2005, February 15). An evidence-based guide to writing grant proposals for clinical research. Ann Intern Med, 142(4), 274-282.
  3. Davidson, N.O. (2005, May). Grant writing and academic survival: what the fellow needs to know. Gastrointest Endosc, 61(6), 726-7
  4. Couzin, J., & Miller, G. (2007, April 20). NIH budget: Boom and bust. Science, 316 (5823), 356-61. http://dx.doi.org/10.1126/science.316.5823.356
  5. Bordage, G., & Dawson, B. (2003, April). Experimental study design and grant writing in eight steps and 28 questions. Med Educ, 37(4), 376-385.

BYOD in Healthcare

By | Health Sciences Research

BYOD: Definition and IT Consumerization

Definition: Bring your own device (BYOD) has become a leading approach in healthcare settings. BYOD refers to the use of patients’ own mobile devices in clinical trials and healthcare practice. Note that in the US alone, 95% of people own a cell phone, while 75% own a computer. From smartphones to laptops, the BYOD movement embraces the latest innovations in mobile solutions and technological services to engage participants and clinicians. With increased familiarity and reduced costs, BYOD can facilitate patient-doctor communication and interoperability.

Transition from paper-based to electronic data: There’s no doubt that mobile technology facilitates doctor-patient communication, data collection, and statistical analysis. In fact, survey data suggest that 80% of healthcare workers use tablets in practice, followed by smartphones (42%). With the inevitable transition from paper-based to electronic data, it’s no surprise that a vast majority of experts and sponsors are turning to electronic clinical outcome assessments (eCOA) (“9 Key Factors to Evaluate When Considering BYOD”). Although provisioned devices (devices provided to the subjects) are still widely used to collect electronic data, BYOD is increasingly popular in healthcare research and practice.

History of BYOD and IT consumerization: Interestingly, BYOD in healthcare settings is not an isolated phenomenon. Since mobile technologies have become an integral part of people’s lives, the consumerization of information technology (IT) is more than logical. The term BYOD was introduced in 2004, when a voice over Internet Protocol (VoIP) service allowed businesses to use their own devices. In 2009, companies started allowing employees to bring and connect their own mobile devices at work. Two years later, BYOD became a leading term in marketing strategies, marking the new consumer enterprise. Note that BYOD can lead to an increase in productivity and a decrease in hardware costs. In addition, research shows that BYOD has numerous benefits across educational settings, as well as other industries.

BYOD in Healthcare Settings: Benefits, Challenges, and Risks

Benefits of BYOD: BYOD is an effective approach in healthcare settings and clinical trials, as it allows subjects to provide medical information via their own internet-enabled devices. It’s no secret that recruiting participants and collecting data are among the most challenging aspects of research. With its numerous benefits, BYOD is often preferred over traditional methods: for instance, users can either access an online platform or download a medical app. The BYOD approach has been implemented even in Phase II and Phase III clinical trials. Some of the major benefits include:

  • Access to data in real-time: Just like with any eCOA and provisioned devices, BYOD ensures access to high-quality medical information. As data collection occurs in real time, 24/7, errors and bias are minimized. As a result, clinicians have access to accurate and valid data.
  • High engagement: Studies show that BYOD boosts engagement and improves compliance. Via SMS, notifications, and emails, doctors can establish a good relationship with their patients and monitor non-compliance. In addition, up-to-date images and visuals can help people track their condition and progress over time.
  • Usability: BYOD means accessibility. Since the vast majority of people own and use a mobile device daily, experts can reach a wide range of participants. There are approximately 2.5 billion smartphone users today, and the number is growing. Because participants carry no additional device and are already familiar with their own, training costs can only decrease.
  • Better user experience: Customized options improve the user experience. mHealth apps, in particular, are gaining more and more popularity. Interestingly, statistics show that 105,912 apps in the Google Play store and 95,851 apps on the iTunes app store are marketed as health apps (Bol et al., 2018).
  • Productivity: The implementation of BYOD in practice improves clinicians’ productivity. By having access to real-time data, experts can access reports 24/7, which benefits decision-making. In fact, electronic health records (EHRs) that contain data about patient health, demographics, medications, and lab tests, can improve medical workflow.
  • Cost-effective: By implementing the BYOD approach in research, experts can reduce training costs and improve resource efficiency. When a patient brings their own device, there’s no need to store tech gadgets on-site or deal with logistics.
  • Limited site involvement: Automatic updates and online platforms eliminate the need for site involvement and the burden of commuting. What’s more, with the integration of Help buttons, patients can find online support, which can boost participation and outcomes.
  • Advanced features: Mobile devices are equipped with numerous advanced features (such as GPS, barcode scanning, etc.). For instance, GPS options can help researchers monitor a patient’s location and activities in a study in which activity levels are used as an endpoint. BYOD gives access to reports which are available in different formats (e.g., PDF) across different devices (e.g., Android). While clinicians can access biomedical research to provide support, users can connect their devices with other wellness and fitness wearables.
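
As a rough illustration of how such a feature might be wired up, the sketch below captures a single location fix with the standard browser Geolocation API and posts it to a study backend. This is only a sketch: the endpoint (/api/activity) and the payload shape are hypothetical placeholders, and any real deployment would require explicit participant consent and regulatory review.

```typescript
// Sketch: capture one coarse location fix in a web-based ePRO app
// using the standard browser Geolocation API.
// The endpoint and payload below are hypothetical placeholders.
function captureActivityFix(): void {
  if (!("geolocation" in navigator)) {
    console.warn("Geolocation is not supported on this device");
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (pos: GeolocationPosition) => {
      const reading = {
        lat: pos.coords.latitude,
        lon: pos.coords.longitude,
        accuracyMeters: pos.coords.accuracy,
        timestamp: pos.timestamp,
      };
      // Post the reading to the study backend (hypothetical endpoint).
      fetch("/api/activity", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(reading),
      });
    },
    (err: GeolocationPositionError) =>
      console.error("Location fix failed:", err.message),
    { enableHighAccuracy: false, maximumAge: 60_000, timeout: 10_000 }
  );
}
```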

Challenges in the implementation of BYOD: Although BYOD is increasing in popularity, there are a few challenges researchers need to overcome in order to implement it in clinical trials (Marshall, 2014). Researchers need to create a good study design, taking into account patient rapport, data accuracy, and technical aspects (e.g., screen size). Factors such as lack of a mobile device, demographics, reimbursement, and IT support should also be considered. Note that one of the major concerns is data security and HIPAA compliance.

Risks associated with BYOD: The possible risks in clinical research must be taken seriously, and data security and patient privacy are among the major concerns. Since medical data are sensitive, networks must be protected. Virtual sandboxes can be installed on a device to isolate apps that handle medical data. That way, clinicians won’t breach HIPAA policies, support staff won’t access restricted services, and patients will access a hospital’s patient portal only for relevant information.
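
As a minimal sketch of one such protection, the snippet below derives an encryption key from a patient’s PIN and encrypts a locally cached record with the standard Web Crypto API (PBKDF2 key derivation plus AES-GCM). The function names and parameter choices are illustrative assumptions, not a real sandboxing product; production systems would typically rely on vetted mobile device management or containerization tools.

```typescript
// Sketch: PIN-derived key and local record encryption via Web Crypto.
// Function names and parameter choices are illustrative assumptions.
async function derivePinKey(pin: string, salt: Uint8Array): Promise<CryptoKey> {
  // Import the PIN as raw key material for PBKDF2.
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(pin), "PBKDF2", false, ["deriveKey"]
  );
  // Stretch it into a 256-bit AES-GCM key.
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 310_000, hash: "SHA-256" },
    material,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"]
  );
}

async function encryptRecord(
  key: CryptoKey, recordJson: string
): Promise<{ iv: Uint8Array; ciphertext: ArrayBuffer }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV per record
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, key, new TextEncoder().encode(recordJson)
  );
  return { iv, ciphertext };
}
```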

BYOD in Healthcare Practice

BYOD in healthcare practice and aspects to consider: With its numerous benefits, BYOD is becoming one of the most effective approaches in research. When implementing BYOD in practice, clinicians and IT specialists should consider the following aspects in order to overcome challenges and possible risks (“9 Key Factors to Evaluate When Considering BYOD”):

  • App-based and web-based BYOD: Experts must decide between an app-based and a web-based BYOD model. mHealth apps, as stated above, are increasing in popularity. They allow patients to complete a wide variety of PROs, including diaries, reports, and reminders. App-based BYOD suits populations that use smartphones daily, as well as the administration of simple questionnaires; note that unexpected events (e.g., a patient changing phones) should be considered. Web-based BYOD, on the other hand, allows patients to enter data through a web browser (e.g., Chrome) on their own devices (e.g., a PC). It is effective in Phase IV studies and in studies with large numbers of patients. Note that web-based questionnaires can automatically resize to the screen of any patient’s mobile device (a minimal layout sketch follows this list).
  • Usability and availability: BYOD can improve usability. Nevertheless, although some patients prefer BYOD over provisioned devices, experts should consider patients who need additional training or have privacy concerns. Note that with the increasing range of mobile devices on the market, staff may also need additional training to provide support regarding operating systems, brands, and study schedules. Also, although more and more people use technology on a daily basis, researchers need to make sure that enrollment is not biased by mobile device ownership (e.g., by age or location). In fact, experts can employ a combined approach and provide a provisioned device to subjects without a personal one. Note that to ensure accuracy and safety, provisioned devices have their calling and browsing options disabled and run only study software.
  • Compliance: In research, eCOA reveal high levels of compliance (over 90%). Yet, compared with other eCOA, BYOD can further improve the user experience. For example, a study with 89% BYOD usage reported compliance rates of 91.5% (“9 Key Factors to Evaluate When Considering BYOD”). Another study, on the use of a probiotic supplement, examined differences between provisioned devices and BYOD in compliance and engagement. Participants (n=87) were assigned to use a mobile application or no intervention, and the mobile application subjects were further randomized into two groups: BYOD and a provided smartphone. The results showed that BYOD is feasible in healthcare: the BYOD subgroup showed higher engagement and more frequent application use. Nevertheless, when designing a clinical study, experts should remember that they can’t control or lock down personal devices (Pugliese et al., 2016).
  • Costs and training: BYOD can lower research costs. Although provisioned devices are widely used in research, experts agree that their training, maintenance, and delivery can be costly. BYOD, on the other hand, can eliminate these additional costs and logistical obstacles, as well as improve the user experience. Note that although BYOD reduces costs associated with delivery and provisioned devices, sponsors still need to consider factors such as staff training, availability of support (e.g., a support desk), and the burden on participants. Since participants use their own devices, sponsors should also reimburse data transmission costs.
  • Regulations and privacy risks: Before implementing BYOD in research, sponsors must ensure data consistency, quality, and transparency (Marshall, 2014). Note that even screen size may introduce bias, and the phase of the study can also affect the implementation of BYOD. Therefore, experts must always consult regulatory bodies, follow existing regulations, and maintain clear documentation. Researchers should also establish a good relationship with the owners of the original questionnaires and obtain permission to migrate each tool onto an electronic platform. Most of all, privacy concerns must be addressed; with protection features (e.g., a unique PIN) and encrypted data, the security of the software can be maintained.
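
As the minimal layout sketch promised above, and assuming a plain web questionnaire, the snippet below uses the standard matchMedia API to switch between two hypothetical layouts (“single-column” and “two-column”) as the participant’s screen width changes, for example on device rotation.

```typescript
// Sketch: adapt a web questionnaire to the participant's own screen.
// The layout names are hypothetical; a real form would also adjust
// font sizes and touch targets.
const narrowScreen = window.matchMedia("(max-width: 600px)");

function applyLayout(isNarrow: boolean): void {
  document.body.dataset.layout = isNarrow ? "single-column" : "two-column";
}

applyLayout(narrowScreen.matches); // initial layout on page load
narrowScreen.addEventListener("change", (e) => applyLayout(e.matches));
```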

BYOD checklist: Creating a good study design is one of the major factors in scientific success. Apart from the aspects described above, researchers should consider the following checklist (“9 Key Factors to Evaluate When Considering BYOD”); a configuration sketch based on this checklist appears after the list:

  • The phase of the study
  • Type of questionnaires (including items and images)
  • Type of data (e.g., symptoms, data reported by patients)
  • Frequency and duration of data collection
  • Data collection setting (e.g., on-the-go)
  • Characteristics of the population and the geographical situation (e.g., access to technology, shipping costs)
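
One way to make these decisions explicit is to encode the checklist as a typed study configuration, as in the sketch below. Every field name and option value here is a hypothetical assumption for illustration, not part of any real eCOA platform.

```typescript
// Sketch: the BYOD design checklist as a typed configuration object.
// All names and option values are illustrative assumptions.
interface ByodStudyConfig {
  phase: "I" | "II" | "III" | "IV";
  questionnaireTypes: Array<"diary" | "report" | "reminder">;
  dataTypes: Array<"symptoms" | "patient-reported">;
  collectionFrequency: "daily" | "weekly" | "event-driven";
  durationWeeks: number;
  onTheGoCollection: boolean;
  population: {
    expectedDeviceOwnershipRate: number; // 0..1; gauges enrollment bias
    regionsNeedingProvisionedBackups: string[]; // where devices must ship
  };
}

// Example: a hypothetical Phase IV daily-diary study.
const config: ByodStudyConfig = {
  phase: "IV",
  questionnaireTypes: ["diary", "reminder"],
  dataTypes: ["patient-reported"],
  collectionFrequency: "daily",
  durationWeeks: 12,
  onTheGoCollection: true,
  population: {
    expectedDeviceOwnershipRate: 0.9,
    regionsNeedingProvisionedBackups: ["remote sites"],
  },
};
```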

Clinical trials with BYOD: Research suggests that BYOD is an effective approach in practice with respect to patient management, compliance, and satisfaction. Based on Google Analytics data, the randomized trial described earlier found that the BYOD subgroup showed higher engagement with the intervention, more application sessions per day, and longer sessions. Interestingly, the BYOD subgroup also showed a significant effect of engagement on drug compliance in the endline period of the trial. In fact, interviews revealed that BYOD users found it easy to integrate the mobile application into their daily routines (Pugliese et al., 2016).

BYOD: Conclusion

With IT consumerization reshaping everyday life, there’s no doubt that technology facilitates entire industries. Mobile devices have become an integral part of people’s lives and business strategies. In healthcare in particular, BYOD can increase productivity, engagement, and compliance while delivering high-quality data, real-time communication, and cost-effective research. By considering data types, BYOD-related training, demographics, and privacy risks, experts can implement BYOD in clinical trials successfully and in accordance with safety and ethical regulations.

Most of all, BYOD can improve user experience and well-being. Patients bring their own devices, which eliminates the need to carry an additional device or undergo extra training. Real-world data, smooth doctor-patient communication, and online support can only improve patient outcomes and quality of life. In the end, patients are the core of research and digital health.

References

  1. Bol, N., Helberger, N., & van Weert, J.C.M. (2018). Differences in mobile health app use: A source of new digital inequalities? The Information Society, 34(3).
  2. Pugliese, L., Woodriff, M., Crowley, O., Lam, V., Sohn, J., & Bradley, S. (2016). Feasibility of the “Bring Your Own Device” Model in Clinical Research: Results from a Randomized Controlled Pilot Study of a Mobile Patient Engagement Tool. Cureus, 8(3).
  3. Marshall, S. (2014). IT Consumerization: A Case Study of BYOD in a Healthcare Setting. Technology Innovation Management Review. Retrieved from https://timreview.ca/article/771
  4. 9 Key Factors to Evaluate When Considering BYOD. (n.d.). Retrieved from https://resources.crfhealth.com/ebooks/9-key-factors-to-evaluate-when-considering-byod