How MyProgress Can Help Support Programmatic Assessment in Clinical Education
- Tess

- Jul 10
A Mosaic of Competence: How Diverse Assessments Create a Complete Picture
Imagine trying to recognise someone from a single pixel of an image, a dot of colour with no shape or meaning. But as more pixels accumulate, each from a different angle or context, the image begins to take form. Eventually, with enough varied inputs, a clear and complete portrait emerges.
This is what programmatic assessment aims to achieve. Each data point, whether it’s a mini clinical observation, a supervisor’s feedback, or a formal exam result, represents one pixel in a learner’s evolving portrait. Alone, these assessments can only offer fragments of insight. But together, they form a rich, multidimensional mosaic of competence.
Just as sharper images are created from more, and more diverse, pixels, a robust judgment of a healthcare student’s readiness depends on pulling in varied evidence: not only clinical observations and low-stakes feedback, but also summative exams, OSCEs, reflective entries, and patient surveys. Structured assessments like written papers and knowledge tests bring clarity to the foundational areas, giving the image shape, while practical, real-world encounters fill in the colour, detail, and texture.
MyProgress provides the digital canvas for this mosaic. It unifies all forms of assessment, practical and theoretical, formative and summative, to give educators and students a single, coherent view of development. No matter the format, each piece of evidence contributes to a more accurate, reliable, and fair picture of performance over time.

Medical and nursing education is increasingly embracing programmatic assessment, a holistic approach in which every clinical assessment and feedback point contributes to a learner’s development over time. In this model, students undergo frequent low-stakes assessments designed to yield rich formative feedback, and these are combined with structured assessments such as exams or OSCEs so that high-stakes decisions (such as promotion or certification) are made on an aggregate of evidence collected longitudinally across various contexts and assessors. The goal is to shift away from one-off high-stakes exams toward a continuous picture of competence, fostering learning and ensuring rigorous decisions about competency. However, implementing programmatic assessment in real clinical training environments comes with significant challenges. This article examines those challenges, from tracking numerous assessments to engaging learners and ensuring consistency, and discusses how the MyProgress ePortfolio platform helps overcome them. We also consider how MyProgress aligns with regulatory expectations in healthcare education and how such technology can evolve further.
Challenges in Programmatic Assessment Implementation
Programmatic assessment promises better learning and more valid decisions, but educators face practical hurdles turning this ideal into reality. Key challenges include managing the volume of low-stakes assessments, integrating feedback into learning, keeping learners engaged, and dealing with variability in assessor judgments. Below, we unpack each challenge:
Frequent Low-Stakes Assessments, High Volume
In a programmatic approach, assessments happen continually; each clinical encounter or skill observation can generate an assessment record. While this frequency provides rich data, it can overwhelm traditional tracking methods. Both trainees and faculty may perceive an overwhelming volume of assessments, which can create stress and logistical headaches. Manually collating countless forms or checklists (often on paper) is burdensome and prone to error. Indeed, prior to ePortfolio solutions, workplace-based assessments (WBAs) were often viewed as administratively heavy. A core challenge is thus ensuring that every mini-assessment is captured, stored, and easily retrievable to build a longitudinal picture, without overloading faculty or learners.
Programmatic assessment includes both real-time workplace assessments and structured inputs like progress tests or end-of-placement OSCEs, all of which must be captured and aligned to give a cumulative picture of development.
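To make the capture-and-retrieve requirement concrete, here is a minimal sketch of what one such assessment record might look like and how records could be grouped into a longitudinal timeline. The field names and TypeScript shape are illustrative assumptions, not MyProgress’s actual data model.

```ts
// Illustrative only: a hypothetical shape for one low-stakes
// assessment record, not MyProgress's actual schema.
interface AssessmentRecord {
  studentId: string;
  assessorId: string;
  tool: "mini-CEX" | "DOPS" | "OSCE-station" | "exam" | "reflection";
  competency: string; // e.g. "medication administration"
  rating: number;     // e.g. a 1-5 scale
  feedback: string;   // narrative comment from the assessor
  observedAt: Date;
  context: string;    // placement or rotation, e.g. "paediatrics"
}

// Group records by competency and sort chronologically, so each
// competency has its own longitudinal timeline of evidence.
function timelineByCompetency(
  records: AssessmentRecord[]
): Map<string, AssessmentRecord[]> {
  const timeline = new Map<string, AssessmentRecord[]>();
  for (const r of records) {
    const entries = timeline.get(r.competency) ?? [];
    entries.push(r);
    timeline.set(r.competency, entries);
  }
  for (const entries of timeline.values()) {
    entries.sort((a, b) => a.observedAt.getTime() - b.observedAt.getTime());
  }
  return timeline;
}
```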
Integrating Feedback and Closing the Loop
Programmatic assessment is only effective if the myriad observations come with high-quality feedback that students actually use for improvement. It’s not enough to collect forms; each assessment should facilitate timely, specific feedback and prompt learner reflection. In practice, however, making sure feedback is given in the moment and integrated into learning is difficult. Busy clinical preceptors may skip detailed comments, or feedback might be delivered but never revisited by the student. The challenge is providing a mechanism for capturing feedback at each assessment point and ensuring learners engage with it, essentially closing the feedback loop. Without this, the wealth of data risks becoming information that is recorded but never acted upon, and that therefore does not inform growth. An effective programmatic assessment system must therefore embed feedback into the process (for example, prompting assessors to comment on performance and enabling learners to review and reflect), rather than treating feedback as an afterthought.
Whether feedback is generated through direct clinical supervision or via a post-exam review, it must be structured so that students can act on it. MyProgress enables integration of narrative exam feedback and scores into the student’s ongoing development plan.
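One simple way to model “closing the loop” in software is to give each feedback entry an explicit status that only advances when the learner engages with it. The sketch below illustrates that idea under an assumed three-state lifecycle; the type and function names are hypothetical, not MyProgress’s API.

```ts
// Hypothetical three-state lifecycle for a feedback entry: the loop
// counts as closed only once the learner has recorded a reflection.
type FeedbackStatus = "given" | "read" | "reflected";

interface FeedbackEntry {
  assessmentId: string;
  comment: string;     // assessor's narrative feedback
  status: FeedbackStatus;
  reflection?: string; // learner's response; required to close the loop
}

// Record the learner's reflection and mark the loop closed.
function closeLoop(entry: FeedbackEntry, reflection: string): FeedbackEntry {
  return { ...entry, status: "reflected", reflection };
}

// Surface entries the learner has not yet engaged with.
function openLoops(entries: FeedbackEntry[]): FeedbackEntry[] {
  return entries.filter((e) => e.status !== "reflected");
}
```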
Learner Engagement and Agency
Another concern is maintaining learner engagement with an ongoing assessment process. When every clinical encounter is an assessment opportunity, students can become cynical or fatigued, viewing the process as incessant scrutiny. If not handled well, continuous low-stakes assessments may feel like an endless series of hoops to jump through. The programmatic approach calls for a cultural shift in which learners see assessments as learning opportunities rather than punitive measures. Achieving this means fostering learner agency: encouraging students to take ownership of logging their experiences, reading feedback, and seeking help. Programs have reported issues of trainee disengagement, and we know that greater engagement leads to better learning outcomes. A key implementation challenge is therefore to present the assessment system in a way that motivates and empowers students (e.g. through clear visual progress indicators or goal-setting features) instead of alienating them. In short, the technology and process must be learner-friendly and rewarding to use, not just an extra workload.
By surfacing feedback from exams alongside clinical assessments in the same dashboard, students can see how their theoretical understanding aligns (or contrasts) with their observed practice, encouraging deeper self-reflection.
Assessor Variability and Data Quality
In clinical education, multiple supervisors and mentors contribute to assessments. This multi-rater input is a strength of programmatic assessment, but it introduces assessor variability: differences in how individuals observe and rate performance. One supervisor’s 'meets expectations' might be another’s 'outstanding'. Additionally, some assessors are more stringent or more generous, and some provide richer feedback than others. Reliance on a single assessor can introduce bias, whereas collecting data from a variety of assessors and tools makes decisions fairer and more reliable. The challenge is twofold: (1) to capture multi-source feedback from faculty, clinical staff, peers, and even patients in a manageable way; and (2) to aggregate this evidence so that no single outlier assessment unduly influences the outcome. Without a smart system, the variability in assessments could either confuse the picture or require inordinate effort for faculty committees to interpret. Ensuring consistency also ties into training faculty on assessment criteria, but a good platform can help by providing standardised forms and analytics that highlight inconsistencies. In essence, we need tools that harness assessor diversity as a benefit (through triangulation of information and combined judgment) rather than a drawback.
Standardised exam results offer a useful calibration point in the mosaic of assessment data. When juxtaposed with more variable observational data, they help triangulate overall performance, especially when visualised together in a longitudinal view.
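To illustrate the principle that no single outlier should dominate, here is a minimal sketch of one outlier-resistant way to combine ratings: a trimmed mean that drops the single highest and lowest score before averaging. This is a generic statistical technique offered as an example, not a description of how MyProgress aggregates data.

```ts
// Trimmed mean: drop the lowest and highest rating before averaging,
// so no one assessor can dominate the combined score.
function trimmedMean(ratings: number[]): number {
  if (ratings.length === 0) throw new Error("no ratings to aggregate");
  if (ratings.length < 3) {
    // Too few data points to trim; fall back to a plain mean.
    return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  }
  const sorted = [...ratings].sort((a, b) => a - b);
  const trimmed = sorted.slice(1, -1); // drop one low and one high
  return trimmed.reduce((sum, r) => sum + r, 0) / trimmed.length;
}

// Example: a single very generous rating of 5 among [2, 3, 3, 5]
// no longer pulls the aggregate upward.
console.log(trimmedMean([2, 3, 3, 5])); // 3
```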
How MyProgress Facilitates Programmatic Assessment
MyProgress, a web-based ePortfolio with a companion mobile app, is designed specifically to support training and assessment in challenging clinical environments. It addresses the above challenges through technology and thoughtful workflows that align with programmatic assessment principles. Below, we explore how MyProgress’s features help implement effective programmatic assessment in medicine and nursing programs:
Enabling Longitudinal, Real-Time Progress Tracking
Programmatic assessment generates a longitudinal record of each learner’s performance. MyProgress makes this feasible by allowing students and faculty to collect and view evidence of competence development in real time across various settings. Each assessment, whether it’s a Mini-CEX, a direct observation, or a reflective entry, is logged into the system immediately, building a continuous timeline of progress. The platform provides at-a-glance dashboards that give an instant overview of individual or cohort progress, replacing manual spreadsheets or scattered records. Faculty can watch a student’s growth unfold on the dashboard, seeing which competencies have ample evidence and which need more attention. Importantly, analytics update in real time: as soon as an assessment form is submitted via MyProgress, the data feeds into progress charts and reports. This immediacy supports programmatic assessment by enabling timely responses; if a student’s dashboard shows they are falling behind in a certain skill area, for example, faculty can intervene early with remediation or extra practice opportunities. MyProgress thus serves as the backbone for longitudinal assessment, ensuring that the wealth of low-stakes data is continuously organised and readily accessible for high-stakes review or coaching conversations.
Whether it’s a knowledge test result, a station score from an OSCE, or a Mini-CEX from clinical practice, MyProgress can capture each as a data point in a student’s progress timeline.
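A common way to keep such dashboards current without recomputing everything on each view is to update a running summary whenever a form is submitted. The sketch below shows that incremental-update pattern; the names are illustrative assumptions, not MyProgress’s API.

```ts
// Per-competency running summary, updated incrementally on each
// form submission rather than recomputed from all records.
interface CompetencySummary {
  count: number;      // number of assessments logged so far
  meanRating: number; // running average rating
}

function onFormSubmitted(
  summaries: Map<string, CompetencySummary>,
  competency: string,
  rating: number
): void {
  const prev = summaries.get(competency) ?? { count: 0, meanRating: 0 };
  const count = prev.count + 1;
  // Incremental mean: newMean = oldMean + (x - oldMean) / n
  const meanRating = prev.meanRating + (rating - prev.meanRating) / count;
  summaries.set(competency, { count, meanRating });
}
```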
Mobile and Offline Use in Clinical Placements
The MyProgress mobile app’s student dashboard gives an instant overview of a learner’s progress and logged clinical practice hours, providing a current view of required assessments and activities in each placement.

Clinical training often happens in busy hospitals or community settings where computer access is limited and internet connectivity can be unreliable. MyProgress directly tackles this with a mobile app that works fully offline, allowing assessments to be completed anytime, anywhere, even in ward areas with poor WiFi. Both students and their supervisors can use a tablet or smartphone to fill out assessment forms or sign off procedures on the spot. The offline functionality means that if a nursing preceptor observes a student performing a catheterisation in a rural clinic, they can complete the evaluation form on the app immediately; the data will sync automatically once back online, preserving the in-the-moment feedback. This is crucial for capturing those frequent low-stakes assessments without disruption: practice assessments can take place anytime, anywhere, without technical difficulties. The app’s intuitive interface is designed for clinicians who may not be tech-savvy or have much time; assessors can even use speech-to-text to leave feedback in the heat of the moment, speaking into the device to record their comments.

By lowering the barriers to documenting assessments on placement (no need to log into a desktop, no need for constant internet), MyProgress greatly increases the likelihood that all those mini-assessments actually get recorded. This mobile/offline capability directly addresses the volume and logistical challenge of programmatic assessment: it streamlines the evaluation process in any setting, so that neither remote location nor time pressure prevents the flow of assessment data. In short, MyProgress meets learners and faculty where they are: on the wards, in clinics, on the move, ensuring the assessment process integrates seamlessly into clinical routines rather than interrupting them.
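The queue-locally-then-sync behaviour described above is a standard offline-first pattern. Here is a minimal sketch of that pattern in general terms; it illustrates the idea rather than MyProgress’s internals, and a production app would also persist the queue to device storage and trigger sync from a connectivity listener.

```ts
// Generic offline-first queue: items are stored locally first and
// uploaded only when connectivity returns.
class OfflineQueue<T> {
  private pending: T[] = [];

  // `send` uploads one item to the server, e.g. via fetch().
  constructor(private send: (item: T) => Promise<void>) {}

  // Always enqueue locally, so nothing is lost while offline.
  submit(item: T): void {
    this.pending.push(item);
  }

  // Call when the device regains connectivity.
  async sync(): Promise<void> {
    while (this.pending.length > 0) {
      await this.send(this.pending[0]);
      this.pending.shift(); // remove only after a successful upload
    }
  }
}
```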
Capturing Multi-Source Feedback to Mitigate Variability
A cornerstone of programmatic assessment is gathering multiple perspectives on a learner’s competence. MyProgress is built to facilitate multi-source feedback: it enables not only faculty supervisors but also other healthcare professionals, and even the students themselves, to contribute to a learner’s portfolio of assessments. The platform supports a range of workplace-based assessment tools (mini-CEX, DOPS, case-based discussions, etc.) and allows configuration of forms for different assessor groups. For example, a medical student’s MyProgress log might include skill checklists filled in by nurses, teamwork feedback from peers, patient feedback questionnaires, and self-reflections, alongside faculty evaluations. All of these feed into the single ePortfolio, giving a rich 360° view of performance. By aggregating many data points from a variety of assessors and assessment formats, MyProgress helps ensure that competency decisions are based on a broad evidence base, leading to more informed and equitable judgments. The system’s design also reduces barriers for assessors: clinicians who are not regular faculty can self-register for an account when needed, so if a resident or allied health professional observes the student, they can easily log in and complete an assessment form. This is critical in clinical education, where learning is interdisciplinary; MyProgress doesn’t restrict input to a fixed set of instructors. Moreover, by standardising the digital forms and guidance for feedback, MyProgress can improve the consistency of ratings; it essentially provides a common framework within which all assessors operate, helping to temper individual variability. To address the 'failure to fail' phenomenon (where assessors hesitate to give low marks), the ability to see aggregated data may empower faculty committees to spot patterns: if one supervisor consistently rates a struggling student as 'satisfactory' while others flag issues, that outlier can be identified in the reports. In summary, MyProgress not only simplifies capturing multi-source feedback; it also turns the diversity of inputs into a strength by consolidating them for big-picture analysis.
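As a concrete illustration of spotting an outlier assessor, the sketch below compares each assessor’s average rating for a student against the consensus of everyone else’s ratings. The 1.5-point threshold and the data shape are assumptions for the example, not MyProgress logic.

```ts
interface Rating {
  assessorId: string;
  score: number; // e.g. on a 1-5 scale
}

const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;

// Flag assessors whose average score for this student sits far from
// the average of all other assessors' scores.
function outlierAssessors(ratings: Rating[], threshold = 1.5): string[] {
  const assessorIds = [...new Set(ratings.map((r) => r.assessorId))];
  return assessorIds.filter((id) => {
    const own = ratings.filter((r) => r.assessorId === id).map((r) => r.score);
    const rest = ratings.filter((r) => r.assessorId !== id).map((r) => r.score);
    if (rest.length === 0) return false; // only one assessor, nothing to compare
    return Math.abs(mean(own) - mean(rest)) >= threshold;
  });
}
```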
Data Aggregation and Analytics for Actionable Insights
Perhaps one of the most powerful contributions of MyProgress is its robust data aggregation and analytics, which transform raw assessment data into actionable insights. The platform offers customisable analytics dashboards that present trends and flags in the data at both individual and program levels. For faculty and program directors, this means no more sifting through piles of evaluations; the system automatically compiles, for example, how many successful versus marginal performances a student has in each competency domain, or how their performance progresses over time. Educators get an instant cohort overview to identify at-risk students and can drill down into any student’s record to diagnose specific issues. For instance, MyProgress might highlight that a particular nursing student has consistently weaker evaluations in medication administration skills compared to peers, triggering an early advising meeting. The analytics support early, targeted intervention and help institutions reduce attrition by not waiting until a major exam to discover a problem.
On the flip side, high performers can be identified and perhaps given advanced opportunities; the point is that data drives timely decisions. MyProgress also provides longitudinal views of engagement and assessment completion. A recent enhancement, the Year View analytics, gives a consolidated visual of each student’s activity across an entire year. This makes it easy to spot patterns like declining engagement or gaps in assessment collection during certain rotations. Program leaders can see at a glance if a student went relatively 'un-assessed' during, say, a dermatology rotation, prompting a follow-up. Additionally, the system’s reporting features allow exporting of summary reports for faculty committees or external examiners, in which all of a learner’s evidence is organised and summarised. This greatly facilitates high-stakes decisions: when it’s time for a promotion or graduation review, the committee has a coherent report rather than hundreds of separate forms. It’s worth noting that the analytics are configurable to align with different program needs; whether tracking logged clinical hours, procedural skills, or EPA (Entrustable Professional Activity) milestones, MyProgress can aggregate the data points relevant to your outcomes. In essence, the platform turns the daunting 'big data' of programmatic assessment into digestible intelligence, supporting both day-to-day coaching and summative assessment in a defensible way.
The platform doesn’t just aggregate mini-assessments; it also includes structured exam data, enabling comparisons across assessment formats and surfacing discrepancies between theoretical knowledge and observed performance.

MyProgress 'Year View' analytics provides educators with a consolidated, year-long visualisation of student engagement and assessment activity, enabling at-a-glance identification of at-risk learners and patterns in performance across placements.
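The gap-spotting a year-long view enables can be reduced to a very simple check: which rotations have no logged assessments at all? The sketch below shows that check under illustrative assumptions about the data shape; it is not MyProgress code.

```ts
interface LoggedAssessment {
  rotation: string; // e.g. "dermatology"
  observedAt: Date;
}

// Return the rotations in which a student logged no assessments,
// i.e. the gaps a year-long view would make visible.
function unassessedRotations(
  yearRotations: string[],
  logged: LoggedAssessment[]
): string[] {
  const covered = new Set(logged.map((a) => a.rotation));
  return yearRotations.filter((rotation) => !covered.has(rotation));
}

// Example: flags "dermatology" if nothing was logged there.
console.log(
  unassessedRotations(
    ["paediatrics", "dermatology"],
    [{ rotation: "paediatrics", observedAt: new Date() }]
  )
); // ["dermatology"]
```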
Meeting Regulatory and Quality Assurance Expectations
Implementing programmatic assessment is not just an internal improvement; it also aligns with evolving regulatory and accreditation requirements in health professions education. Accrediting bodies and professional regulators (such as the GMC in medicine or the NMC in nursing) increasingly expect schools to demonstrate that graduates have achieved certain competencies and that robust monitoring is in place throughout training. MyProgress is explicitly designed with these expectations in mind. It allows programs to map assessments and outcomes to regulatory frameworks: for example, linking each data point to the GMC’s Outcomes for Graduates or a nursing proficiency standard. This means that at any moment, an educator can pull up evidence showing a student’s progression in each required outcome domain, which is invaluable during accreditation reviews or audits. In fact, MyProgress includes customisable reporting for regulators, and many institutions use it to generate the reports or proof required for program accreditation. Instead of scrambling to collect documentation when an accreditation visit looms, schools can rely on MyProgress to maintain a continuous record of training quality. The platform supports regulatory bodies’ frameworks to ensure students meet required proficiencies by course completion, essentially providing a built-in quality assurance system. For example, if a regulatory standard dictates that nursing students must log a minimum number of hours in certain clinical areas, MyProgress can track and display each student’s hours logged in each category, raising alerts if someone falls short.

Another area of compliance is data security and privacy, itself part of quality assurance: as a modern SaaS solution hosted on a secure cloud, MyProgress adheres to GDPR and other data protection standards, ensuring that student data is handled appropriately, a concern often raised by regulators. Moreover, the audit trail in MyProgress (timestamps of who signed off what and when) provides transparency and accountability, further satisfying quality assurance scrutiny.
Regulatory bodies expect evidence of both knowledge acquisition and clinical competence. MyProgress links exam results directly to mapped outcomes, allowing educators to demonstrate how academic and clinical domains align across the curriculum.
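As a simple illustration of the minimum-hours tracking described above, the sketch below totals a student’s logged hours per clinical area and reports any shortfall against a required minimum. The field names and thresholds are assumptions for the example only.

```ts
interface HoursLog {
  area: string;  // clinical area, e.g. "community"
  hours: number; // hours logged in one entry
}

// Compare total logged hours per area against required minimums and
// return the remaining hours needed in any area that falls short.
function shortfalls(
  required: Record<string, number>, // e.g. { community: 100, acute: 150 }
  logs: HoursLog[]
): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const log of logs) {
    totals[log.area] = (totals[log.area] ?? 0) + log.hours;
  }
  const alerts: Record<string, number> = {};
  for (const [area, minimum] of Object.entries(required)) {
    const logged = totals[area] ?? 0;
    if (logged < minimum) alerts[area] = minimum - logged;
  }
  return alerts; // an empty object means fully compliant
}
```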
In summary, MyProgress not only helps institutions meet competency outcomes, but it also demonstrates that achievement to external observers. It aligns educational practice with the expectations of professional regulators by providing evidence of outcome-based training and continuous monitoring, thus giving decision-makers peace of mind that adopting programmatic assessment via MyProgress will uphold (and even enhance) their accreditation status.
Fostering a Culture of Continuous Improvement
Programmatic assessment represents a paradigm shift in clinical education, one that prioritises continuous learning and informed decision-making over one-shot exams. MyProgress serves as a critical enabler of this shift, addressing practical challenges through technology: it streamlines the capture of frequent assessments, integrates feedback loops, engages learners with real-time progress data, reduces assessor variability via multi-source input, and produces analytics that inform both individual coaching and program-wide quality assurance. By doing so, it helps realise the promise of programmatic assessment: better supported learners and more trustworthy judgments of competence. Importantly, the success of such a system is not just about software features but about embracing a new culture of assessment. As the literature notes, implementing programmatic assessment requires transforming mindsets and institutional culture, not just procedures. Tools like MyProgress are most effective when coupled with faculty development (e.g. training faculty to give quality feedback) and student orientation (so learners understand the value of ongoing assessment). In other words, MyProgress provides the infrastructure, but leadership must cultivate the assessment-for-learning ethos around it.
Looking ahead, the integration of data and education will only deepen. We can anticipate continuous improvement in both the platform and the educational strategies it supports. Future enhancements may include more advanced learning analytics, for example, using predictive modelling or AI to flag subtle patterns (such as shifts in the sentiment of a student’s feedback over time) that might escape human reviewers. Indeed, accrediting bodies are placing greater emphasis on outcomes and continuous program improvement, and they recognise that leveraging data is key. MyProgress is well positioned for this future: its rich dataset can feed into research and insights on training efficacy, helping institutions iteratively refine their curricula and assessment methods. University decision-makers should view MyProgress not just as a static product but as a strategic asset that evolves with their needs, supporting curriculum mapping today, perhaps enabling competency-based digital credentials tomorrow, and informing workforce readiness in the long run.
In conclusion, implementing programmatic assessment in clinical education is challenging but immensely worthwhile for producing competent, reflective health professionals. MyProgress demonstrates how a thoughtfully designed ePortfolio and assessment management system can turn the concept into day-to-day reality. By facilitating longitudinal assessment, real-time feedback, mobile data capture, multi-source inputs, and robust analytics, it addresses the major pain points that have historically held back programmatic approaches. Moreover, it does so in alignment with regulatory expectations, ensuring that innovation goes hand-in-hand with accountability. With MyProgress, medical and nursing schools can confidently move toward a more continuous, learner-centred, and evidence-rich assessment strategy, ultimately improving both educational outcomes and patient care. The journey requires commitment and culture change, but backed by the right technology and a mindset of continuous improvement, the result is a win-win: learners get richer development and institutions uphold the highest standards of competence in the next generation of clinicians.
If you would like to talk to us about how MyProgress could support programmatic assessment at your institution, please contact us.
