Aptem Enhance: AI-powered innovation in apprenticeship delivery

Following the successful launch of our first AI-driven enhancement, Aptem Checkpoint, we invite you to see how we're continuing to develop a series of tools to help apprenticeship providers spend less time on admin and more time with learners. In this webinar, now available to watch on-demand, we share our AI-powered Aptem Enhance strategy. We were joined by Dom Wilkinson, Product Owner at Lifetime Training, who shared how Lifetime has already embraced the first of these features.

This webinar is relevant for current Aptem customers and those interested in embedding innovation in the working practices of our sector.

The webinar covers:

  • The potential of AI within apprenticeships. Richard Alberg, CEO, explores our vision for augmenting human expertise with AI, to create space for more meaningful relationships with learners and employers.
  • Customer-centric development. Paige Exeter, Product Manager, shares insights from extensive customer research about the potential the sector sees for AI-enabled tools, and how this has informed our feature development.
  • Aptem Checkpoint. Dom Wilkinson, Product Owner at Lifetime Training, discusses Lifetime's experience of adopting Aptem Checkpoint and the qualitative and quantitative benefits he expects to see from this recently launched AI-powered learning tool.
  • The future of apprenticeship delivery - Aptem Enhance roadmap. James Love, Chief Product Officer, gives an exclusive overview of the Aptem Enhance roadmap, including solutions for assisted marking, feedback, review summarisation and objective setting. He also discusses how an in-platform approach to adopting AI provides increased security, control and trust in the output.

 

Q and A transcript

The following is a transcript of the questions asked during the webinar, along with the answers provided by product experts. For any further information, do contact your Implementation Consultant, Customer Success Manager or Aptem Support.

 

Can you clarify where the tool draws the KSB related information from? Is it from a provider's learning content or is it from public content?

Checkpoint currently pulls from the IfATE KSBs, that is, the publicly available standards, and that's how we generate the questions behind the scenes.


Are we contemplating the use of Aptem Checkpoint during onboarding to facilitate the initial assessment of recognition of prior learning?

We are considering that as a scenario we want to cover. Currently, you could raise one of the manual checkpoints and select the KSBs to do that initial assessment and get your results from that.
We have had feedback around scenarios such as initial assessments, gateway reviews and return to learning, and how we adapt and evolve Checkpoint to cater for those in a better way. Hopefully in the future we can announce some exciting things around those scenarios too.


Will review summarisation have implications for how we design reviews in Aptem?

Simply yes.

We don't know the answer to that question fully yet, but what we've already identified, as we've gone through some of the research and investigation of the technical approach to review summarisation, is that there are a lot of improvements we could make to reviews, even setting AI aside.
In the initial version of review summarisation, we want to get you value quickly, so we'll avoid making any fundamental changes at that point. We'd then look to iterate and improve over time, getting you that value as quickly as we can to make sure you're not losing out as AI develops.


I understand how the AI can auto-check multiple choice questions well. But what about when learners write completely different but equally valid answers to a question? E.g. if the question was about how you have demonstrated good project management skills, learner A might write about design thinking, Agile, et cetera, and learner B writes about waterfall, Gantt charts, et cetera. Both are good answers but completely different, with different examples.

Our early versions of Checkpoint and of situational judgment will not be using what you'd use in that scenario, which is natural language processing, to interpret the response and then mark it against some kind of rubric to give feedback. We've deliberately done that to ensure that we can get adoption out there with the multiple choice-based questions.

We've got the input for the learner in situational judgment to allow for that tutor and learner response, and we will look to iterate towards that NLP future where you can provide instant feedback based on input. We are conscious that a lot of what we're working towards helps to optimise the time a tutor can spend with a learner and make it more efficient. That means they potentially have more time to do other things, and maybe that quality engagement is the tutor and the learner talking about the inputs they put into the system.
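As an illustration only, and not Aptem's implementation, the rubric-based marking described above might work by asking a language model to score a free-text answer against criteria rather than keywords. The function and rubric below are entirely hypothetical:

```python
# Hypothetical sketch: marking a free-text answer against a rubric with an LLM.
# None of these names come from Aptem; they illustrate the general idea only.

def build_marking_prompt(question, rubric_points, learner_answer):
    """Assemble a prompt asking a model to score an answer against a rubric.

    Equally valid but different answers (e.g. Agile vs. waterfall examples)
    can both score well because the rubric describes criteria, not keywords.
    """
    criteria = "\n".join(f"- {p}" for p in rubric_points)
    return (
        "You are marking an apprenticeship knowledge check.\n"
        f"Question: {question}\n"
        "Rubric (award one point per criterion met):\n"
        f"{criteria}\n"
        f"Learner answer: {learner_answer}\n"
        f"Reply with a score out of {len(rubric_points)} "
        "and one sentence of feedback."
    )

prompt = build_marking_prompt(
    "How have you demonstrated good project management skills?",
    [
        "Names a recognised methodology",
        "Gives a concrete example",
        "Reflects on the outcome",
    ],
    "I ran a waterfall project with a Gantt chart and delivered on time.",
)
# The assembled prompt would then be sent to a chat model for marking.
```

In practice the prompt would be sent to a hosted model (the webinar notes Checkpoint integrates with OpenAI), but criterion-based rubrics are what allow two very different answers to receive the same credit.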

 

If the KSBs are being pulled from IfATE, does it take into consideration the start date in terms of standard versions?

Aptem's generated multiple choice questions will be aligned with updates to the standards to ensure they are kept up to date.

 

Are you hosting your own Machine Learning models or going through third parties such as ChatGPT?

Aptem Checkpoint utilises an integration with OpenAI’s ChatGPT.

 

Can you use Aptem Checkpoint as part of the onboarding element to enhance a skill scan review?

Currently, a manual Checkpoint could be raised with all KSBs selected for this scenario. We are also exploring how we could automatically trigger checkpoints in scenarios such as initial assessment, return to learning and gateway reviews.

 

What does the chatbot draw on for subject-related questions, e.g. 'what is active listening'? Does each provider upload their own curriculum materials?

The implementation of the virtual assistant/chatbot will be specific to the individual learner's context. We already have the data that signifies which programme and standard each learner is working towards, for example, as well as details such as the curriculum for each programme. We're still working through which pieces of context will be included in the final version of the product.
