Earlier this year, Google announced Project Euphonia, an initiative that aims to use artificial intelligence (AI) to improve computers' ability to understand diverse speech patterns, such as the speech of people with speech impairments. In a recent blog post, the company provided details about its work on the project, outlining a two-phase approach designed to improve automatic speech recognition (ASR) for people with amyotrophic lateral sclerosis (ALS), a disease that can negatively affect a person's speech. Google's team created a high-quality ASR model that was first trained on thousands of hours of standard speech and then fine-tuned on a personalised dataset of non-standard speech. Google notes that the new approach yielded 'significant improvements' over current models in recognising the speech of people with atypical speech.
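
Google has not published the training code, but the two-phase recipe it describes, pretraining a general ASR model on standard speech and then fine-tuning it on a single speaker's recordings, is a familiar transfer-learning pattern. The sketch below illustrates what such a fine-tuning phase could look like in PyTorch; the model loader, dataset format, and hyperparameters are placeholders for illustration, not details taken from Google's post.

```python
# Illustrative sketch only -- not Google's actual Euphonia code.
# Assumes a hypothetical model pretrained on thousands of hours of
# standard speech, and a small `personalised_dataset` of
# (features, targets, feature_lengths, target_lengths) tuples recorded
# from one speaker with non-standard speech.

import torch
from torch.utils.data import DataLoader


def fine_tune(model, personalised_dataset, epochs=10, lr=1e-5):
    """Fine-tune a pretrained ASR model on one speaker's recordings."""
    loader = DataLoader(personalised_dataset, batch_size=8, shuffle=True)
    # A small learning rate keeps the model close to its general-speech
    # starting point while it adapts to the speaker's atypical patterns.
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for features, targets, feature_lengths, target_lengths in loader:
            # Model outputs per-frame scores; CTC loss expects
            # log-probabilities shaped (time, batch, vocab).
            log_probs = model(features).log_softmax(-1)
            loss = torch.nn.functional.ctc_loss(
                log_probs, targets, feature_lengths, target_lengths
            )
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model
```

In this kind of setup, only a modest amount of personalised audio is needed because the heavy lifting is done by the model's earlier training on standard speech; the fine-tuning pass merely nudges the weights towards one person's speech patterns.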