W3F | Oral presentations: Innovations and transformations in speech and voice
Tracks
Culturally and linguistically diverse populations
Practice education and student supervision
Social justice and advocacy
Speech
Telepractice
Wednesday, May 24, 2023 | 1:30 PM - 3:00 PM | Federation Plenary Hall
Speaker
Professor Kirrie Ballard
Professor
University of Sydney
A novel method supporting automated transcription and analysis of children’s speech production
1:30 PM - 1:45 PM
Presentation summary
Introduction/Rationale
Research into acoustic speech analyses has not translated into practice, likely due to the need for clinician training. Advances in automatic speech recognition (ASR) have brought automation of acoustic analyses within reach, yet ASR for children is lagging.
Aims
Develop and test a new procedure for (a) collecting speech samples, (b) separating child and clinician speech, and (c) annotating child productions. Such a tool could improve efficiency and reliability of clinical speech assessment.
Methods
Developing the new procedure involved building the Australian children’s speech corpus (AusKidTalk), developing the workflow, and evaluating performance. The corpus currently contains speech samples from 475 Australian English-speaking children aged 3–12 years, 40 of whom have a speech sound disorder. Here, we focused on a picture-naming task (130 items), presented via an Android app.
Children’s recordings contain off-target responses (e.g. mis-named pictures, conversation). Our workflow begins with IBM Watson diarisation to separate child from clinician speech. Our custom UNSW ASR then identifies and orthographically transcribes each word produced, and manual corrections can be entered. The output marks the position of each word in the audio file, ready for feeding to our ASR algorithm for phonemic transcription.
Three raters evaluated (a) the number of words detected and (b) the time saved (i.e. time to check raw recordings versus the workflow output).
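The abstract does not include the implementation, so the sketch below is only a schematic Python illustration of the workflow stages described above (diarisation, word-level orthographic transcription with timings, manual correction, then hand-off for phonemic transcription). The function names, data structures, and dummy outputs are hypothetical; they are not the AusKidTalk/UNSW code or the IBM Watson API.

```python
# Hypothetical sketch of the described annotation workflow (not the authors' code).
# Steps: (1) diarise speakers, (2) keep child segments, (3) transcribe words with
# timestamps, (4) apply manual corrections, (5) emit word positions for downstream
# phonemic transcription.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Segment:
    speaker: str   # "child" or "clinician"
    start: float   # seconds
    end: float


@dataclass
class Word:
    text: str
    start: float
    end: float


def diarise(segments: List[Segment]) -> List[Segment]:
    """Placeholder for diarisation (e.g. a cloud STT service): keep child speech only."""
    return [s for s in segments if s.speaker == "child"]


def transcribe(segment: Segment) -> List[Word]:
    """Placeholder for the child-speech ASR: returns orthographic words with timings."""
    # Dummy output; a real system would decode the audio between segment.start and segment.end.
    return [Word("ball", segment.start, segment.start + 0.4)]


def apply_corrections(words: List[Word], corrections: Dict[int, str]) -> List[Word]:
    """Apply rater corrections keyed by word index."""
    return [Word(corrections.get(i, w.text), w.start, w.end) for i, w in enumerate(words)]


def annotate(segments: List[Segment], corrections: Dict[int, str]) -> List[Word]:
    words: List[Word] = []
    for seg in diarise(segments):
        words.extend(transcribe(seg))
    return apply_corrections(words, corrections)


if __name__ == "__main__":
    session = [Segment("clinician", 0.0, 1.2), Segment("child", 1.3, 2.0)]
    for w in annotate(session, corrections={}):
        # Each word's position in the audio file is what feeds phonemic transcription.
        print(f"{w.text}\t{w.start:.2f}-{w.end:.2f}")
```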
Results
The workflow detected 126/130 (~97%) target words and produced an 80% time saving for raters.
Conclusions
The system achieved efficient orthographic annotation of children’s speech samples, ready for automated phonemic transcription. Such tools can provide clinicians with reliable, time-efficient speech analysis.
Keywords: speech disorder, assessment, transcription, ASR
Submission statement:
Reflection: Research into acoustic analyses for assessment of speech disorders has failed to translate into clinical practice. Respect: This is related to constraints on clinician time and specialist training. Respond: We present a new method for rapidly collecting clinical speech samples, separating child and clinician speech, and annotating and then analysing productions.
Ms Felicity Laurence
Speech Pathology Tasmania
Sharpen your tools – hone your transcription skills for working with Australian children with speech sound disorders
1:45 PM - 1:48 PM
Presentation summary
Background and context
Accurate transcription of speech production using the International Phonetic Alphabet (IPA) is essential when working with children with articulation and phonological disorders. In Australia, many speech pathologists are using an outdated system of vowel transcription that does not accurately reflect the vowels used in modern Australian English. Many speech pathologists also report low confidence transcribing phonetic features of speech, including suprasegmental features and atypical vowels and consonants (Nelson, Mok & Eecen, 2020). Inaccurate transcription and analysis of speech patterns can lead to inappropriate diagnoses and therapy planning. This workshop is presented by practising clinicians who have upskilled their phonetic transcription proficiency.
Learning outcomes
Workshop participants will:
• learn the modern HCE system of Australian vowel notation (Harrington, Cox and Evans, 1997);
• gain a deeper understanding of the phonetic features of the Australian accent;
• gain confidence in using diacritic symbols to support narrow phonetic transcription;
• learn how to apply these skills for assessment of speech sound disorders.
Key words: Speech Sound Disorders, phonetics, phonology, transcription, International Phonetic Alphabet, Australian accent
Submission Statement: Language and speech reflect culture. Accurate tools for transcription allow speech pathologists to record speech and language in a way that respects the culture of a person and their broader linguistic community. This enables clinicians to respond appropriately and make reasoned clinical judgments to support our clients.
Mrs Mariam Seeney
Speech Pathologist
Speech Pathology Tasmania
Sharpen your tools – A workshop to hone your transcription skills for working with Australian children with speech sound disorders co
Professor Sharynne McLeod
Professor
Charles Sturt University
Children's speech development, assessment and intervention in 50 languages
1:48 PM - 1:51 PM
Presentation summary
Introduction/rationale: Most people in the world are bilingual or multilingual, and over 7,000 languages are spoken worldwide. Most children can intelligibly speak their home language(s) and dialect(s) by 5 years of age (McLeod & Crowe, 2018). Some children require input from speech pathologists, but language-specific information can be difficult to access.
Aim: To provide speech pathologists with knowledge about speech development, assessment, and intervention across 50 languages.
Methods/process: Data provided during 2022 by speech pathologists and linguists for inclusion in the Oxford Handbook of Speech Development were analysed.
Results: A summary of knowledge about speech development, assessment and intervention will be presented for 50 languages: Afrikaans, Akan, Arabic (Egyptian, Kuwaiti, Lebanese), Azerbaijani, Bulgarian, Cantonese, Croatian, Danish, Dutch, English (African American, Appalachian, General American, Australian, Canadian, Cajun, English, Fiji, Irish, New Zealand, Scottish, South African), Filipino/Tagalog, Finnish, Flemish, French (Canadian, French, Swiss), German, Greek (Cypriot, Standard), Hebrew (Israeli), Hungarian, Icelandic, Irish, Italian, Jamaican Creole, Japanese, Korean, Kurdish, Laki, Malay, Maltese, Māori, Mandarin/Putonghua, Norwegian, Persian/Farsi, Polish, Portuguese (Brazilian, European), Samoan, Sesotho, Setswana, Slovak, Slovenian, Spanish (Andalusian, Castilian, Chilean, Mexican), Swedish, Tamil, Thai, Tok Pisin, Turkish, Urdu, Vietnamese, Welsh, isiXhosa, Zapotec, and isiZulu. There are many (free) resources that can be accessed by Australian speech pathologists to use in their clinical practice.
Discussion and conclusions: There are data from 50 languages about speech features (consonants, vowels, tones, syllables), speech development, assessments and interventions that can be used to support children with speech sound disorders.
Keywords: speech, development, assessment, intervention, bilingual, multilingual
Submission statement: Traditionally, Australian speech pathologists have focused on Standard Australian English during assessment and intervention. We need to respect multilingual Australians, reflect on the language(s) and dialect(s) spoken within our communities, and respond by applying cross-linguistic information to clinical practice.
Ms Kate Margetson
Charles Sturt University
Non-English sounds in English speech assessment: Patterns of cross-linguistic transfer in Vietnamese-Australian children’s speech
1:51 PM - 1:54 PM
Presentation summary
Introduction
When assessing multilingual children’s speech, speech pathologists need to consider how dual phonological systems can interact. Cross-linguistic transfer, in which a speech sound from one language is used when speaking the other, can be part of typical multilingual speech development. Vietnamese is the third most spoken home language in Australia, but there is little evidence about how Vietnamese and English might interact in speech development.
Aim
To investigate the nature and frequency of cross-linguistic transfer in Vietnamese-English speaking children.
Methods
Vietnamese-English-speaking children (n=66) aged 2;0-8;11 were assessed using the Vietnamese Speech Assessment and the Diagnostic Evaluation of Articulation and Phonology. Cross-linguistic transfer of non-shared consonants was identified in the speech samples.
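As a hedged illustration of this identification step, the Python sketch below flags candidate cross-linguistic transfer by checking a transcription in one language for consonants that belong only to the other language. It is not the study’s analysis code, and the consonant sets are small illustrative subsets drawn from the sounds named in the Results below, not complete phoneme inventories.

```python
# Illustrative sketch only: flag candidate cross-linguistic transfer by checking a
# child's transcription in one language for consonants that are non-shared sounds
# of the other language. The sets are illustrative subsets, not full inventories.
ENGLISH_ONLY = {"ɡ", "ɹ", "θ"}      # non-shared English consonants named in this abstract
VIETNAMESE_ONLY = {"c", "ʂ", "ʔ"}   # non-shared Vietnamese consonants named in this abstract


def flag_transfer(transcription, target_language):
    """Return consonants in the sample that belong only to the other language."""
    other = VIETNAMESE_ONLY if target_language == "English" else ENGLISH_ONLY
    return [c for c in transcription if c in other]


# Example: an English word produced with the Vietnamese retroflex /ʂ/.
print(flag_transfer(["ʂ", "u"], target_language="English"))  # ['ʂ'] -> candidate transfer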
Results
Cross-linguistic transfer of non-shared consonants in at least one direction (from one language to the other) was observed in the speech of 57 children (86.36%). Bi-directional cross-linguistic transfer was observed in 17 children (25.76%). Transfer generally occurred when the target was phonetically similar. The most frequently transferred non-shared consonants were /c, ʂ, ʔ/ to English and /ɡ, ɹ, θ/ to Vietnamese. All non-shared English consonants were used in at least one child’s Vietnamese speech sample.
Conclusions
Most children in this study demonstrated cross-linguistic transfer. If speech pathologists cannot identify non-English sounds from children’s home languages, they may mistake cross-linguistic transfer for atypical speech errors. Speech pathologists should be familiar with non-English consonants that occur in children’s home languages to ensure accurate diagnosis of speech sound disorders.
Keywords: speech, children, multilingual, assessment, cross-linguistic, Vietnamese
Submission Statement: This presentation will encourage speech pathologists to: a) reflect on their current speech assessment practices with multilingual children; b) demonstrate respectful, culturally responsive practice by considering cross-linguistic influences on speech; and c) respond by learning about and practising transcription of speech sounds from children’s home languages.
Ms Leah Hanley
University Of Canberra
Does parent-delivered speech therapy, supported by a mobile app, improve speech outcomes for children with cleft palate?
1:54 PM - 2:09 PM
Presentation summary
Introduction: Many children with cleft palate require therapy for speech sound errors. However, timely access and funding for intervention at the frequency and intensity they require is not always possible. There is growing evidence to support the use of principles of motor learning (PML) in the treatment of motor speech disorders in children and, more recently, in children with cleft palate. These principles have been automated within an electronic speech therapy aid: a mobile speech therapy game. The app can be used by parents to support speech therapy practice at home.
Aim: This research seeks to understand whether parents, with the support of a speech therapy app based on PML, can effectively provide speech intervention to children with speech errors associated with cleft palate. It also seeks to understand children’s and parents’ experiences of, and satisfaction with, an app-based speech intervention.
Method: A multiple-baseline across participants and behaviours single-case experimental design was used. Participants were seven children (5-9 years) with cleft-type speech errors. The primary dependent variable was percentage of words correct. Effect sizes were calculated to quantify treatment effects. Participants played the app, which was programmed with their speech targets, four times per week with 100 attempts per session for two 4-week phases. Parents provided correct/incorrect feedback on responses.
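As a hedged illustration of the primary outcome measure, the Python sketch below computes percentage of words correct (PWC) per probe session and a simple baseline-versus-treatment comparison. The abstract does not state which effect-size statistic was used, so the mean difference shown is only an example of quantifying change, not the study’s analysis; all data are invented.

```python
# Illustrative sketch only: percentage of words correct (PWC) per probe session and a
# simple baseline-vs-treatment comparison on made-up data.
from statistics import mean


def percent_words_correct(scores):
    """Each probe word is scored correct/incorrect (the parent feedback described above)."""
    return 100.0 * sum(scores) / len(scores)


baseline_sessions = [[False, False, True, False], [False, True, False, False]]
treatment_sessions = [[True, True, False, True], [True, True, True, True]]

baseline_pwc = [percent_words_correct(s) for s in baseline_sessions]
treatment_pwc = [percent_words_correct(s) for s in treatment_sessions]

print(f"Baseline mean PWC:  {mean(baseline_pwc):.1f}%")
print(f"Treatment mean PWC: {mean(treatment_pwc):.1f}%")
print(f"Mean change:        {mean(treatment_pwc) - mean(baseline_pwc):.1f} percentage points")
```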
Results: All children showed improvements in treated behaviours during treatment, and six children maintained their improvements one month post-therapy.
Conclusions: Results indicate parents can be supported with a mobile app to provide speech intervention for children with speech errors related to cleft palate.
Keywords: cleft palate, speech, PML, parents
Submission Statement: Children with cleft palate often have persistent speech errors and families are seeking therapy that not only improves speech outcomes, but is timely, cost effective, and accessible. This research reimagines service delivery for children with speech errors associated with cleft palate, by using parent-delivered therapy supported by a mobile app.
Mrs Rebecca Shields
Charles Sturt University
Intervention for residual speech errors in adolescents and adults: a systematized review
2:09 PM - 2:12 PM
Presentation summary
Introduction: When speech sound errors persist beyond childhood, they are classified as residual speech errors (RSE) and may have detrimental impacts on an individual’s social, educational and employment participation. Despite this, individuals who present with RSE are usually not prioritised on large caseloads.
Aim: This literature review aimed to examine which intervention approaches are available for remediating RSE and how effective they are for adolescents and adults.
Methods: A systematised review was undertaken. Comprehensive and systematic searching included searches of terms across seven databases, forward and reverse citation searching, and contact with key authors. Thirty articles underwent critical appraisal before data extraction. Inductive thematic analysis was conducted before completion of a narrative review.
Results: Twenty-three (76.6%) of the articles were from the US and most studies involved intervention for ‘r’ (90%). Intervention approaches for RSE involved traditional articulation therapy, auditory perceptual training, instrumental approaches, and approaches based on principles of motor learning. Twenty-one studies (70%) investigated the use of more than one intervention approach. Measures of intervention efficacy varied between studies; however, any intervention approach tended to be more successful if delivered in a more intensive schedule.
Conclusions: A variety of approaches can be used for RSE, but a combination of high-intensity traditional therapy with adjunctive instrumental biofeedback may be most effective, especially with highly motivated individuals. Unfortunately, this requires costly equipment and training to implement. More information is required about the best dosage and intensity of intervention for RSE, evaluated across a larger number of phonemes and across other languages and dialects.
Submission Statement:
Speech pathologists are ethically required to reflect on the effectiveness of interventions we provide; respect the needs, time and goals of our clients; and respond to the relevant, current evidence. This review facilitates the application of evidence-based intervention that will best serve a client population that is frequently underserved.
Keywords:
Residual speech errors; Intervention; Adolescents; Adults
Dr Suzanne Hopf
Charles Sturt University