Learners often disengage when they encounter obstacles arising from limited experience, poor instruction, or mismatched preferences. This is common in video games, where players frequently abandon challenging sections. While expert guidance can mitigate these issues, providing personalised advice at scale remains difficult: automated systems typically offer generic feedback that does not adapt to individual playstyles. In this paper, we present an end-to-end system that generates personalised gameplay advice by learning from both pre-existing datasets and individual player behaviour. We evaluate the system in two domains: a simple GridWorld environment and the more complex MiniDungeons benchmark. Experimental results with simulated agents show that adherence to the generated advice leads to measurable performance improvements. Our findings advance scalable, personalised guidance in games, with broader implications for learning and skill development.