The rule should become part of the culture in urban driving in order to improve traffic flow.
It would allow us to slow down when we want to give space for other vehicles to turn in front of us or merge into the traffic. Sure, we lose a second, but we regain it when the next person lets us in. The whole traffic network would flow more smoothly and benefit everyone.
[image description: 6 postage stamps with simple diagrams highlighting accessibility for disabled people. They include a wheelchair symbol, Braille for United Nations, two hands joined in love]
Open Letter to NZ Minister for Education, Chris Hipkins
Emailed 26 February 2019
Subject Heading: NZ Breach of UN Convention in relation to Deaf children
I draw your attention to the standard advertisement your Department runs for Adviser on Deaf Children (AODC) jobs. You will note that the very last qualification listed relates to NZSL, and even this is not required. Thus it is hearing people advising parents of Deaf children about Deafness. Deaf people are not deeply involved, and your First Signs program is yet to be properly evaluated as far as I can see.
The science now is very clear that Deaf children need Sign Language first (see research by Tom Humphries, Mairead MacSweeney and others). NZ pays lip service to the notion of bilingualism in Deaf education but this means in practice that NZSL is more often an afterthought.
Thus, an estimated one third of healthy Deaf babies in NZ (by which I mean not Deaf-Plus babies), despite being on a Cochlear Implant Program (CIP), will become Persistent Language Delay (PLD) people. The CIP fails for a variety of reasons. This means they will have no functional language for the rest of their lives. I have PLD friends and I know how hard their lives are even within the Deaf community. I urge you to obtain the statistics that show one third of children will become PLD. The statistics for NZ are held by Neil Heslop, the General Manager of the NZ Southern Cochlear Implant Program (SCIP), who confirmed the estimate to me in a recent conversation.
Neil and the cochlear implant industry may be reluctant to have these figures become public, but the failure by your government to address the PLD problem for Deaf children is a breach of the UN Convention on the Rights of Persons with Disabilities (CRPD/UNCRPD). Article 24 of the CRPD, in relation to the right of a child to education, says: “In order to help ensure the realization of this right, States Parties shall take appropriate measures to employ teachers, including teachers with disabilities, who are qualified in sign language and/or Braille, and to train professionals and staff who work at all levels of education”.
The AODC job ad run by your Department also touts “Informed Choice” for parents in deciding on language options for their Deaf child. Unfortunately, 90 percent of Deaf babies are born to hearing parents who are often in grief at having a Deaf baby. These parents have usually never met a Deaf person in their lives. “Informed Choice” delivered by a hearing AODC to grieving parents results in an almost universal rush for early cochlear implantation. Please be clear, I am totally in support of this amazing technology, but again, the science is clear that it is of fundamental importance for a Deaf child to have Sign Language firmly established BEFORE a cochlear implant. The case of Leah Coleman in the USA proves this.
I appreciate that you may be given varying scientific advice on the points I have raised, but I would regard scientists seeking to refute these points as I would regard scientists who used to deny global warming was human driven. I urge you, if you are in doubt on the science, to liaise with the Prime Minister Jacinda Ardern in order to get a report from Professor Juliet Gerrard, her Chief Science Advisor.
The solution is simple. NZ should adopt a legislative framework similar to the SB-210 framework in California where if Deaf children do not meet educational milestones then appropriate intervention on behalf of the child becomes mandatory, just as we have mandatory education for all children in NZ.
If you feel it would help NZ to honour its CRPD obligations to set up a working group to achieve such legislation, I know plenty of Deaf people who would love to be involved. Unfortunately this mahi (work) is not being pushed by Deaf Aotearoa NZ (DANZ).
[Image description: Resistance by David Call. Linocut of David’s childhood experience. Cartoon character with their head replaced by a hand. The hand has an eye in the middle. This shows the importance of eyes and hands for Sign Language. The character is sitting at a table with one hand chained down but the other hand is smashing wind-up chattering teeth with a hammer. The teeth represent oralism.]
Audism is a belief system that causes discrimination against Deaf people. Tom Humphries defined Audism in 1975. Audism holds that:
• hearing people are superior to Deaf people
• Deaf people need pity
• Deaf people are disabled
• Deaf people can’t drive or get an education or become professionals
• Deaf people should be taught to speak and become like hearing people
• Sign Languages are inferior and not really languages
• Deaf culture is inferior to hearing culture
Audism results in experiences like a Deaf child never meeting another Deaf person until they are almost in their teens or older!!!
[Image description: 21 Feb 2019 tweet by Mark Ramirez
Today I became an 11 year old hard of hearing boy’s first Deaf/hard of hearing person he ever met… in his life.]
[Image description: Marion Towns painting Helping Hands which she describes here.]
New Zealand’s treatment of Deaf babies is OBSCENE and a disgusting abuse of their human rights.
Here are two extracts from a job advertisement posted 31 January 2019 for a Ministry of Education position as an Advisor on Deaf Children (AODC). The low priority for NZSL is an obscene joke, a politically correct afterthought.
Extract 1
• Qualifications required are a Master of Special Education (Sensory Disabilities/Hearing Impairment) or a Diploma of Adviser on Deaf and Hearing-Impaired Children. Applicants who do not have either of the required qualifications are required to complete the Master of Special Education (Sensory Disabilities/Hearing Impairment) within 4 years following appointment to an AoDC position.
• Experience and attributes preferred
– A teacher of the deaf qualification or speech language therapist qualification or equivalent qualification, with at least two years’ experience in that role
– Experience in deaf education.
– An understanding and applied practice of the principles and strands of Te Whāriki (Early Childhood Curriculum) and the key competencies and learning areas of the New Zealand Curriculum.
– Knowledge and competency in overall language and communication development and child development.
– Knowledge and competency in the educational and audiological management for children who are deaf or hard of hearing.
– Experience and knowledge of New Zealand Sign Language and a commitment to ongoing learning of New Zealand Sign Language.
Extract 2 (emphasis on audiology and cochlear programs is way ahead of NZSL)
The AoDC needs to have strong working relationships with other MoE colleagues, as well as the following external relationships to realise the potential of children who are deaf or hard of hearing:
• Develop and maintain effective working partnerships with parents/caregivers, families and whānau.
• Liaise and collaborate with Audiologists, Otolaryngologists and other health professionals, to effectively manage hearing needs through the use of technology and ongoing diagnostic testing.
• Work collaboratively with the Cochlear Implant programmes to effectively manage hearing needs through the use of technology and habilitation.
• Work collaboratively with First Signs Facilitators programme (Deaf Aotearoa) to provide families and whānau access to New Zealand Sign Language and Deaf Culture.
• Provide support to early childhood services and schools, to optimise inclusion.
• Work collaboratively with the Deaf Education Centres to provide effective, responsive and equitable services across the region.
• Maintain effective relationships with the New Zealand Federation for Deaf Children (NZFDC) and local parent groups.
[Image description: Chinese characters for mystify, confuse 迷惑。]
The following attachment (cut and pasted below) is an Excel file of 239 sets of Chinese characters which can be confused because they look alike. There’s a column of random numbers to sort on for testing yourself.
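The “sort on a column of random numbers” trick for self-testing can be reproduced in a few lines. A minimal sketch in Python (the sample character sets and the function name are invented for illustration; the real file has 239 sets):

```python
import random

def shuffled_for_testing(rows, seed=None):
    """Emulate sorting the spreadsheet on its random-number column:
    pair each row with a fresh random key and sort on the keys."""
    rng = random.Random(seed)
    return [row for _, row in sorted((rng.random(), row) for row in rows)]

# Hypothetical confusable sets standing in for the attachment's contents.
pairs = ["未 末", "己 已 巳", "土 士", "日 曰"]
quiz_order = shuffled_for_testing(pairs, seed=42)
```

Re-running with a different seed (or none) gives a fresh quiz order each time, just as re-sorting the spreadsheet column does.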
I downloaded the caption file from YouTube and improved on it where there were gaps or mistakes.
I’m Ann Geers. I used to be at Central Institute for the Deaf; now I live in North Carolina but I work out of the University of Texas at Dallas, and my colleagues on this NIH project are Johanna Nicholas at Washington University and Emily Tobey, who is at UT Dallas. This is not the CDACI cohort that John was talking about earlier; this is a separate sample, as you’ll see pretty quickly. No financial relationships.
OK, we have a longitudinal design that’s a little bit different than the kind John was talking about. OK. [pause] I’m technologically challenged today. OK, here we’ll do here.
These are kids who were implanted between twelve and thirty six months of age, so they’re all early implant kids.
We got a language sample from seventy six of them when they were three and a half years old, brought them back, got a language sample at four and a half years of age, and then we didn’t see them again until an average age of ten and a half, when we got sixty of them to come back for these data research camps that we do. That’s sort of our MO for a lot of the research we’ve done: we run camps, bring kids from all over the country for three or four days, and we test them and we entertain them and take them to Six Flags and stuff like that. It was a lot of fun.
So these kids are not from any one geographical location, twenty seven different states, one Canadian province, not representing any particular program. I’m going to be talking about those sixty kids who completed the entire battery, thirty boys, thirty girls, just turned out that way, all deaf from birth, all in auditory-oral education.
We’ve looked at differences between auditory verbal and auditory oral, and we don’t find any, so we put them all together, and they all come out of early, but good option in auditory verbal settings.
Implanted between one year zero months and three years two months, 1998 to 2003. Half of them received a second implant, somewhere between forty six and a hundred and nineteen months of age. This gives you an idea of how their education changed between four and ten.
At age four, seventy eight percent were in special education, you know, usually at an option center. Twelve percent were fully mainstreamed by that time. Two percent partially mainstreamed, and eight percent is what we’re calling home schooled, which means they were still home with their moms but with an auditory verbal therapist seeing them on a regular basis.
By age ten, this had changed dramatically and it has for a lot of these implant kids. Only two percent were in special education, eighty five percent fully mainstreamed, eight percent partially mainstreaming, and five percent were still being home schooled with their mom.
This is a select sample; it’s not an unselected sample by any means. Most of these moms had a college education. We asked the moms at the follow up camp about their communication mode; this is a mean age of ten. The vast majority of these kids were rated as using speech with ease, a few used speech with difficulty, and one child used occasional signing.
They wore their implant all day every day. Very few, two kids, wore hearing aid in the other ear. Their use of the device either increased or remained the same over this time period between four and ten years. The benefit from the device was described as very useful in most of the cases.
Now, this is a list of speech processors used at some point by kids in this study. In terms of generational processor, this turns out to be important in several of the research studies we’ve done; this processor rating indicates what generation of processor it was. So, at three and a half, the first implant that these kids had, nineteen of the kids had an Advanced Bionics PSP, and then there was the MED-EL Tempo, and you see twenty eight had a Nucleus Sprint. By the time-oh, oh, oh [pause] By the time they were ten, most of them had moved up to Nucleus Freedoms, and for their second device most of them were using the most recent generation; those were generation four processors. I’m going to be referring back to that later, so I wanted to give you an idea that this is not a static thing where somebody receives a processor and stays the whole time with that particular device.
So, the first study, and this is the paper that you all got to read ahead of time; we’re addressing the questions, Do early-implanted children reach normal language levels during the preschool years? Does performance improve, deteriorate or remain constant relative to hearing age mates over time? And, what factors contribute to successful outcomes?
So, I’m going to be showing you results on the Preschool Language Scale at four and a half and comparing them to CELF scores at age ten and a half: a composite score covering syntax and semantics and the various areas of language covered by these tests.
Here is the normal standard score distribution that we would see from the normative sample, with a mean of a hundred and a standard deviation of fifteen. And this is the group that they’re going to be compared with all the way through this presentation.
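The “sixteen percent more than one standard deviation below the mean” expectation for a normative sample on this scale (mean 100, SD 15) follows directly from the normal distribution, and can be checked with Python’s standard library:

```python
from statistics import NormalDist

# Standard-score scale used by these tests: mean 100, SD 15.
norms = NormalDist(mu=100, sigma=15)

# Proportion of a normative sample scoring below 85,
# i.e. more than one standard deviation below the mean.
below_minus_one_sd = norms.cdf(85)
print(round(below_minus_one_sd * 100, 1))  # prints 15.9
```

The same holds symmetrically above 115, which is why a normal sample leaves about sixteen percent in each tail.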
And first we’re going to look at what their score distribution looked like in relation to that sample in preschool. And we see a very skewed distribution with lots of scores here, more than one standard deviation below what’s expected for four and a half year olds; on the PLS, actually sixty eight percent of the cases are more than one standard deviation below this green bar, which represents minus one standard deviation from the mean.
Now let’s look and see what that same distribution for that same sample of children looks like at a mean age of ten. Very different kind of distribution. Here we see many more cases falling within the average range; now only thirty two percent of cases are more than one standard deviation below the mean. That’s still more than we’d expect with a normal sample, where we’d expect sixteen percent down there. But we have a much more normal configuration to this distribution; so kids are changing a lot between preschool and elementary grades after they get out of special education.
Comparison of the two distributions: very negatively skewed, and much more normal. Now, when we do our regression analysis, we just consider results at ten as a continuous variable, predict how well they are doing on the CELF at ten, and throw in lots of variables, you know, the parent demographics and all the information that we collected. We come up with three variables that account for independent variance in performance at age ten: age of implant, of course, is the biggy; non verbal intelligence; and pre-implant aided pure tone average.
Now, there’s kind of a reciprocal relationship between pre-implant pure tone average and age of implant, because kids implanted at twelve months are much deafer; they have to have considerably higher aided thresholds than kids who were implanted closer to thirty six months old. So kids with more hearing tend to put it off later.
So you really see your dramatic age of implant effect when you factor out pre-implant hearing. Now, to look at how these scores change over time, this is a predicted regression coefficient based on age of implant, which is along the abscissa, plotted for kids when they are four and a half and when they are ten. And what I have indicated here at four and a half is the mean implant age at which the average score reaches a hundred, reaches the normative mean.
And you can see that only kids implanted around thirteen months of age could be expected to reach that normative mean by the time they were four and a half years of age. By ten and a half, however, we’ve gone up; we’re closer to eighteen months, somewhere around seventeen months of age. And by ten and a half, kids implanted up to about twenty four months are still continuing to have an expected mean within the average range for their age, again displaying that there’s a lot going on between preschool and elementary grades in terms of language growth.
So, that just sets the stage for what I really want to talk about, which is the language delayed kids. But what we found was that kids implanted at one to two years of age can be expected to complete elementary grades in the mainstream setting, achieving language scores that are within one standard deviation of their hearing age mates by the time they’re in mid elementary years. Not that they all do, but it means there can be an expectation for these kids. And the advantage of early implantation was maintained over time. Children with the earliest ages of implantation, just as Doctor showed you, continue to be the best performers.
Now, this is the study that I’m more interested in sharing with you today, which is based on the same sample of sixty kids where we were first looking at what proportion of these preschool language delays persist and what proportion resolve over time.
So we can think of these in terms of four groups, and those of you who read the SLI literature are pretty used to this sort of thing.
We have kids who are delayed at four and a half split into two groups, those who are within normal limits at ten and those who are still delayed at ten. These are the ones we’re calling persistent language delay and we’re calling these late language emergence.
And then we have those kids who were already within normal limits at four and a half, and we can have two things happen there; they can either be within normal limits at ten or they can have regressed and be delayed at ten.
These are the normal language development group, and the late emerging delay group. Here’s the scores at four and a half sort of re-plotted in terms of these groups. Here’s the average range for normal hearing kids, and we see we’ve got this group here that we’re going to call the normal language emergence group, and they are all scoring at four well within the range that they should be scoring, but the rest of these kids are still below.
So let’s look at each individual subject now plotted by their scores at ten [inaudible] and we see that we only really have three groups here and they’re fairly equal in size.
We have those kids who continue to be delayed, we have these late language emergence kids who were delayed at four and a half but are now within the average range, and then we have our normal language emergence group who have stayed within the average range between four and ten.
So, we’ve got about three equal groups, which is nice for our statistics, and now we’re going to begin to pull apart what factors differentiate groups of children with normal language emergence, late language emergence, and persistent delay.
But we’re most interested in, of course, is differentiating those kids with late language emergence and those kids with persistent delay because we can’t tell the difference here when they’re four and a half from their language scores, and we’d like to know who’s going to stay delayed and who’s going to catch up.
So, I’m going to be showing you some slides; I’ll go through this one so that when you look at them they’ll all be the same. These are looking at various characteristics of these three groups, the normal language emergence, the late language emergence, and the persistently delayed group.
We’re going to have an overall F and p to indicate whether there was a significant difference; and if one group is significantly better, it will be in red, based on the post hoc comparisons. So you can see that when we look at age at first hearing aid, which also corresponds to age at first educational intervention, they’re very, very similar.
The normal language emergence group got a hearing aid, got intervention, and got their first implant at a significantly younger age; but what’s important to us is that this was not significant.
For the kids who were delayed at four, we can’t tell from their age of implant, we can’t tell by their age of intervention, we can’t tell from their age of hearing aid, which ones are going to catch up and which ones are going to continue to be behind.
Now, this is an interesting variable that did differentiate the groups, the percent of children who got their first implant in the left ear. I don’t really understand this, but forty seven percent of these kids with persistent delay-ah, I don’t know why this keeps going back. Uhm, got their first implant in the left ear, compared to thirteen and twenty one percent in the other groups.
Mother’s education level: no difference among the three groups. Aided pure tone average pre-implant: no difference. Grade first mainstreamed: the normal language emergence group were mainstreamed significantly younger, but that’s a result of their having normal language at four, probably not a causal relation. Gender: no difference.
If anything it’s a little odd that sixty three percent of the persistently delayed group were female, but that’s not a significant difference. So, when we just look at these background characteristics, we don’t see much differentiating these two language delay groups, the one that will catch up and the one that won’t, except for this interesting statistic about left ear implants.
That’s just to show you, because age of implant has been such a powerful variable in all of our research, how these three groups compare; I just had someone plot implant age in months for the normal language emergence, late language emergence, and persistent delay groups, and you can see that, yeah, it’s true that the normal language emergence group has a lot of kids who were implanted below eighteen months of age. But for these two groups, age of implant tells us nothing about who’s going to recover in language and who isn’t.
So, I’m going to look back at what we had. Remember we started testing them at three and a half; that was our first language measure. We did a parent-child conversational interaction, a thirty minute session, video recorded, with a standardized transcription of both language and speech sound production, and we had the parents fill out a CDI form.
What we got from the child analysis of the language sample was number of different root words, mean length of utterance in words, number of bound morphemes, and number of different bound morphemes. From the analysis of speech using CASALA, which is another computer generated system for matching each phoneme produced by the child with the target phoneme derived from the language sample for the first hundred words, we got number of different vowel sounds and number of different consonant sounds, and Emily Tobey created something called a weighted developmental score, in which each sound is multiplied by the age at which it occurs in a normal hearing population.
So sounds that in the literature are typically normed as being present by two years old are multiplied by two, etcetera, to give a sort of weighted idea of the maturity of the speech sounds being produced by these kids. And then the CDI, as most of you know, just has a parent rating of what vocabulary words you have heard your child produce more than once; it has a list of words that they check off, a list of irregular words, and ratings of sentence complexity. So it’s a parent judgment.
Here’s the same kind of graph, but now we’re looking at three year old speech and language characteristics. Again, what you notice is there’s lots of red for the NLE group. It’s easy to separate out those kids who have normal language by preschool; they’re better in everything, but there’s not much.
Certainly these early grammar measures we got from a language sample do not significantly differentiate our late language emergence and our persistent language delay groups, and neither do the CDI ratings. They’re in the right direction, but the variability is so huge they don’t reach significance; however, the early speech measures do.
Here we see that the children who are going to recover, who are going to be normal by the time they’re in elementary grades, started out with more different vowels, more different consonants, and a higher weighted developmental speech score.
I’ll tell you later what I think about this, but I’m not sure why this is the case. We measured lots of things at ten and a half, but I’m going to tell you about some of those: non verbal intelligence, as the WISC perceptual reasoning; the duration of implant use by the time we did the follow up testing; the kind of technology they were using (remember I told you that was going to come back to be important); whether they used two implants or not; their cochlear implant aided pure tone average threshold, just sort of an overall measure of audibility, how soft a sound could they hear; and a phoneme perception score on the Lexical Neighborhood Test.
So, here no difference in performance IQ; that’s been a variable that really kind of helped us predict language in the past, but when we’re talking about differentiating among these three groups, no significant difference.
Duration of implant use at second test doesn’t tell us much. Use of most recent technology: it is true that those kids with normal language emergence were upgraded; they had a much bigger tendency to use the most recent technology available at retest, where only forty two percent of those kids in the persistent language delay group had upgraded their processor.
Bilateral device use did not reach a significant chi square, although the percentage using a bilateral device in the normal language emergence group was sixty three percent compared to only thirty seven percent in the persistent delay group; not significant.
Here are the two variables that we measured at age ten that did differentiate between these two groups. The cochlear implant aided pure tone average threshold for the persistent language delay group was significantly higher.
They just didn’t hear softer sounds; they were responding at almost twenty seven dB, whereas those kids in the normal language emergence group were closer to, below, twenty dB.
Audiologists have not considered that to be so critical as I think we’re beginning to think it is to get the perception of soft speech.
And finally, the LNT phoneme score, a speech recognition score; but we’re not looking at word scores here because those are so influenced by vocabulary. We’re looking at phoneme scores: ninety four percent correct in the normal language emergence group compared to seventy eight percent for the persistently delayed group.
So, they’re not hearing as well; it has to be louder in order for them to hear it, and then they’re not perceiving as many phonemes, the persistently delayed kids.
So, if we look at the variables that we’ve examined, there’s a different group of variables differentiating the normal language emergence from the late language emergence groups than differentiating the late language emergence from the persistent language delay groups; these are different group comparisons.
For the normal versus late language emergence groups, the normal language emergence were implanted at a younger age, so they got a better start; they had better early grammar, they had earlier mainstream placement, and they used more recent technology. So the kids who caught up by four had all of those characteristics compared to the kids who took longer to catch up.
But look at the comparison, the factors that differentiated those with persistent delay from those with late emergence. Here the variables are totally different, left ear implantation, less audibility for speech, poor speech perception, and immature early speech production.
Now, because there’s so many auditory type variables here, I’m thinking maybe those early speech production differences had to do with audibility of speech because little deaf kids tend to produce what they hear and these kids weren’t hearing as well.
So we did a multinomial regression to look at the divisions among these three groups, and we did it in a stepwise fashion: first removing age of implant, then bilateral use, then most recent technology, ear first implanted, non verbal intelligence, language at age three and a half, and then speech at age three and a half; these last were principal component scores created by combining those individual variables I told you about.
What we wanted to see was the independent contribution of these variables. The tables I’ve showed you just previously looked at each variable independently; now we’d like to see if you throw them all together, which variables are coming out as being most important. And we’re going to use the late language emergence group as the reference group in all of these cases.
First we’re going to just compare the normal language emergence with the late language emergence group, and this just squeaks in as being significant; age of first implant and the language principal component score at age three are what differentiate those two groups, those who were normal when they’re in preschool and those who catch up later.
But when we look at what differentiates late language emergence from persistent delay, it’s what ear they first got their implant in and what their speech was like at age three.
So, we’ve got different predictors. And just to give you an idea of how different they look: the logistic regression lets us come up with a probability, for each child, that the child is in PLD [Persistent Language Delay], LLE [Late Language Emergence], or NLE [Normal Language Emergence].
So, this is plotting age of implant against the probability of being, green is normal language, red is late language, blue is persistent delay.
And you see that with increasing age of implant, the probability that you will be in the normal language group descends precipitously, from about sixty percent probable if you were implanted at twelve months down to very improbable if you were implanted out here.
But when we compare the same kind of probability plots for the late language group and the persistent delay group, you don’t see age of implant giving us good discriminability between those two groups; it mostly gives us good discrimination of who’s going to have very early catching up of language.
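The per-child probabilities described above come out of a multinomial logistic model, which converts a linear score for each group into probabilities via the softmax function. A minimal sketch, with invented scores standing in for the study's fitted predictors (these are NOT the study's coefficients):

```python
import math

def softmax_probs(scores):
    """Convert per-group linear scores into probabilities summing to 1."""
    peak = max(scores.values())  # subtract the max for numerical stability
    exps = {g: math.exp(s - peak) for g, s in scores.items()}
    total = sum(exps.values())
    return {g: e / total for g, e in exps.items()}

# Hypothetical linear predictors for one child (e.g. an intercept plus
# coefficients times age of implant and the other entered variables).
child = {"NLE": 1.2, "LLE": 0.4, "PLD": -0.8}
probs = softmax_probs(child)
```

A child is then assigned to whichever group carries the highest probability, which is how the predicted-versus-observed comparison that follows is built.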
We can use the same kind of analysis: based on the seven predictive variables that I just listed for you, which were entered into the logistic regression, we can come up with a probability that a child will score in each group, and compare it to what we observed.
So, for the kids who were observed to be in the normal language emergence group, sixteen, based on those seven variables, were predicted to be in that group, but three of them were mistakenly predicted to be in the late language emergence group, for an overall correct prediction of eighty four percent from those variables. Somewhat less when we’re looking at observed versus predicted placement in the late language emergence group: two of them were mistakenly placed in normal language emergence, and four of them in persistent delay, for a seventy two, almost seventy three, percent correct prediction.
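The observed-versus-predicted counts just quoted are per-group accuracies read off a confusion matrix. A sketch of that arithmetic; note the LLE row's correct count of 16 is inferred from the quoted roughly 72.7 percent, since only the error counts are stated in the talk:

```python
def per_group_accuracy(confusion):
    """confusion[observed][predicted] -> count.
    Returns, for each observed group, the fraction predicted correctly."""
    return {g: row.get(g, 0) / sum(row.values()) for g, row in confusion.items()}

# Rows as reported: 16 of 19 observed-NLE children and (inferred) 16 of 22
# observed-LLE children were predicted into their own group.
confusion = {
    "NLE": {"NLE": 16, "LLE": 3},
    "LLE": {"NLE": 2, "LLE": 16, "PLD": 4},
}
accuracy = per_group_accuracy(confusion)
# accuracy["NLE"] is about 0.842; accuracy["LLE"] is about 0.727
```

Per-group accuracy is the natural summary here because, as the speaker notes, the clinically interesting errors are the off-diagonal cells, such as a persistently delayed child being predicted normal.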
Now, this is still considerably improved over what I showed you earlier, when we were just trying to predict the exact score at age ten. Look at those three variables, age of implant, pure tone average, and IQ: we could predict thirty eight percent of the variance when we were trying to predict the exact score.
We can do much better if all we’re trying to do is predict what category you’re going to be in; our prediction rate goes up considerably. Predicting persistent language delay, we were right thirteen percent of the time and wrong five percent of the time, but we never, based on those variables, placed a child who was persistently delayed in the normal group.
So, in a way, as people who are dealing with kids clinically, we don’t care so much whether their score at age ten is going to be a hundred or a hundred and eight. For a lot of the predictions we’re trying to make, we just care whether they’re going to catch up.
And finally I want to talk a little bit about the academic consequences for the kids who remain in this persistent delay group. One of the measures we have used historically in deaf education to look at how well a deaf child is doing as he progresses academically is his verbal-performance IQ gap: the performance IQ representing his potential, the verbal IQ representing the language impairment due to hearing loss, so how close is he to catching up?
So we want to look and see what the size of that gap is; we’d like to look at phonological decoding skills for reading, to see if they’re at age-appropriate levels, and we’d like to look at reading comprehension skills. We use the Wechsler to look at the verbal-performance gap and the Woodcock Reading Mastery Test to look at basic skills through word identification and word attack, which is basically phonics: you’re reading nonsense words and just looking at their phonics skills.
And reading comprehension on the Woodcock looks at both word and passage comprehension; much more involved with syntax, much more involved with the global aspects of language. It’s a similar kind of graph; we’ve already seen the performance IQ doesn’t differ significantly, but look at the size of the verbal-performance gap in that persistently delayed group. Now, these gaps for the normal language emergence and the late language emergence groups are very close to normal, which would be expected to be zero, and it is very rewarding to see that two thirds of these kids, by the elementary grades, have for all practical purposes closed that verbal-performance gap.
For those of you old enough to have been in deaf education a long time, that’s a pretty phenomenal thing, but, you know, this is what we used to see in the old days. A twenty four point verbal-performance gap, that’s huge, and that’s what we’re still seeing in this persistently delayed group. Now, their basic reading skills are not different. I mean, yes, the normal language emergence group is very, very good, but between the late emergence group and the persistently delayed group, that’s not a significant difference, and they’re both within the average range.
It is in comprehension that we’re seeing the biggest consequences, and that’s related to this verbal-performance gap. These kids aren’t living up to their potential, and they’re not reading at their potential. It really is exasperating to see kids who in every other way have the advantages these kids have, and to try to understand why some kids just aren’t getting there; and that leads us to thinking about [pause] language impairment.
And we’re just at the beginning of trying to explain why some of these kids are in the persistently delayed group; we still don’t know, for example, whether some of them would close that gap as they gain experience. You know, we’ve seen them close the gap between four and ten. Are they done? Or if we looked at them again at twelve or fifteen, would they continue to improve? I suspect not, but we need to get those data.
Does specific language impairment underlie PLD in some of these children? And how can we distinguish the kids whose persistent delay is due to some auditory phenomenon, a left ear implant, bad thresholds, from the kids where it’s due to some other mechanism associated with SLI [Specific Language Impairment]? We know that the group is too big; it’s thirty three percent of this population, and we know that’s too big to all be SLI, but some portion of them we should be able to identify as having a specific language impairment.
And we’re interested in following up this idea that early speech production seems to reflect long-term language problems: can we find a way to make that into a more reasonable assessment tool? Because if we know at three and a half who’s going to have problems, we can begin to develop intervention methods to address them. So that’s where we are right now, and those of you in this room who know a lot about specific language impairment should know that we have looked at the usual things like digit span, novel word learning, and nonword repetition; we’ve looked at a lot of those traditional measures.
The problem is that they are very auditory related, and they tend to be deficient in both the late language group and the persistent delay group, so they’re not where we’re going to find our diagnostic information. But thank you for listening to all this data.
Soman: Thank you. I’m Uma Soman; I’m at Vanderbilt University. Thank you so much for this presentation. It’s very nice to finally make sense of what I was seeing in the classroom: who’s doing well, who’s catching up, and who’s going nowhere. So, thank you so much for creating these groups and sharing this with us.
My question is related to the late language emergence children. Clearly, something clicked, something happened, and they caught up. Do you think it is mostly related to those factors? Or were there any specific interventions that you want to investigate further as potential contributors to this catching up?
Geers: Yeah, I wish. We did look; we have a lot of intervention information about these kids. It appears that they tended to go into the mainstream in first grade, the persistently delayed kids in second grade. There was a lot of variability, and you could not differentiate them based on when they entered the mainstream. Now, they all got started not as early as the normal language emergence group, OK?
But when those two groups got started in their oral programs didn’t differ. We also looked at when they entered a preschool class, because Jean Moog and I had seen in another study that kids who were enrolled in a preschool special education oral class at two did better by six than those who weren’t; for these kids, that didn’t differentiate the groups either.
So the answer is, they were all in good early oral intervention with very involved parents. So, whatever it is, it’s more subtle than that.
Soman: Sorry, can I ask one more question?
Soman: You said the students went from seventy eight percent being in special education to two percent being in special education, so when they were in mainstream settings, did they still receive some form of special education services?
Soman: Or were they completely off IEPs [Individual Education Plans]?
Geers: No, most of them did, and we looked at the percentage of grades in which they were pulled out for individual therapy, and that went the other way: the persistent language delay kids had more therapy. Of course they did, because they weren’t doing well. We looked at whether they used FM systems; no difference. We looked at a lot. You know, we’re educators, so we could think long and hard about what we would ask, what we would want to know that we think might make a difference, and we just didn’t find any.
Thank you. [ Background Sounds ]
Please introduce yourself.
Dilley: Hi. Laura Dilley, Michigan State University. Thank you for a fascinating presentation. I wonder to what extent you feel the quantity or quality of speech and language input to the children might account for some of the variability that you’re seeing in outcomes.
Geers: Well, you know, that’s hard to know for sure. We do have videotaped interchanges with the parent and the child. All these parents were in intervention programs and had been since they were very young, coaching them on interaction; and we have to remember that at three and a half there was no difference in their language output. So I guess the answer is no, we didn’t see those differences.
Now, we do have a graduate student who is looking at the mothers’ input, and some preliminary results of that are that kids tend to have better language when the mother’s input matches the child’s grammatical level.
So there’s a point at which you could have too much talk. We’re looking at the progress between three and a half and four and a half, just charting which kids made the most progress. And when the mother’s language grammatically matched the child’s language, we saw significantly faster progress.
Now, I’m not talking about differentiating between the late language emergence and persistent delayed groups, but that is an observation, and we need to look more, I think, at them. [ Background Sounds ]
Haebig: Hi. I’m Eileen Haebig and I’m at UW Madison. My first question is, and I’m sure you’ve looked at this: the language scores that you presented, the standard scores, were based on chronological age, right?
Haebig: So, if you calculated a standard score using their hearing age, did you see anything with that, especially with the late language emergence group?
Geers: That would kind of be reflected in their duration of implant use, and that was very similar across the groups; so yeah, no, that probably wouldn’t have done it.
Haebig: And then just on the last comment you made about maternal input: you talked about the grammatical level of input, but did you also look at frequency, at different types of input?
Geers: Well, we looked at number of words, number of different words, number of bound morphemes, number of different bound morphemes, and MLU [Mean Length of Utterance]. Those were the variables that stood out. We computed them for both the parent output and the child output and tried to look at how closely they were matched; and the closer the match, the better the progress, regardless of whether the children were at a low language level to begin with or a higher one.
Thanks. [ Background Sounds ]
Really nice talk. I’m Hope from Vanderbilt, and I have a question that may not be easily- Why do you think the left ear was important? That just seems so weird.
Geers: Well, there’s literature out there. It seemed pretty weird to me too. There is literature with adult implant patients that shows better speech perception for right ear than left ear implants; there are four studies out there like that, but I have not seen any studies that show the effect on language. We really need to replicate all of these results, and I’m talking to John about maybe replicating this with a different, broader sample, because I just don’t know whether this is a sporadic result.
But, you know, there are people who might believe that there is some brain lateralization that is important for language that may be affected. I don’t know.
Hope: So, I mean, it could go either way. It might go either way, but it might be just a spurious statistical thing.
Geers: It might be just a spurious thing, but I would say that nowadays most surgeons, all else being equal, implant the right ear; so there’s some evidence to indicate a preference for the right ear.
But back in the day when implants first started, I remember when the surgeon would say to the parents, look, both ears are very similar. Which one would you like your child implanted in? I don’t think that happens any more, but that was a long time ago.
Thank you. [ Pause ]
Please introduce yourself. Yeah.
Asad: Hi. Areej Asad from the University of Auckland, New Zealand. I would like to know more about the CASALA results. Because it’s speech, I wanted to know: did you analyse it according to place, manner, and voicing, or just in general? I noticed it’s a connected speech sample, which is great.
Geers: Yes. Emily Tobey is doing those more intricate analyses. So far this is what showed significant differences, OK? It shows overall consonants-correct scores, but we’re talking about three-and-a-half-year-old very deaf kids, so they don’t have a lot of phonemes to begin with. And when she tried to break it down into categories (Peter Blamey’s program does let you break it down into all kinds of categories), there would be so few exemplars that it was very difficult to see significance. So, no, we have not seen that, but she’s still working on it.
Asad: Yeah, because I use the CASALA program in my doctoral work, and I know that for some sounds the results come out in comparison with adult production; it’s not according to the child’s phonetic inventory. So, if you’re looking at a child’s phonetic inventory, you need to count it again yourself, not rely on the program itself.
Geers: Oh, you mean-
Asad: If you’re looking at the phonetic inventory, not the percentage of consonants correct, then you need to count it as the child’s own speech, regardless of whether it’s correct or wrong.
Geers: Oh, OK. Yes, but there are two numbers you can get: one is a correctness score and one is just an occurrence score. We’re using the correctness score because that is what gave us the significant differences.
Asad: Alright. Another question: it’s really interesting that you put forward speech production assessment as a future direction. My question for you is, what about early speech intervention, like early stimulation for the sounds we already know the majority have problems with? As a speech-language therapist, I know we don’t really have a specific evidence-based speech therapy approach that we can use now. What do you think about that?
Geers: Well, speech and language are so intertwined and in oral and in auditory intervention, you know, you’re putting in speech and trying to get out the best approximation that you can, but you’re working on speech and language simultaneously; you’re not working on speech sounds in isolation like you might do for an older artic [articulation] case, for example. And I don’t see pulling those apart for intervention.
Asad: Oh, no, no, I agree with you. We have both of them, but what I mean is, I know about auditory-verbal therapy, but there’s not that much concentration on the late-acquired sounds that we know kids with hearing loss have problems with. My idea is, what about developing a new evidence-based speech therapy approach, based on the ones already in the literature for children with speech disorders, and implementing it alongside the auditory-verbal work? That would help them out.
Geers: Well, it’s an interesting idea. As young as three, you think, these very young kids doing an articulation intervention approach? I don’t know. I’d have to ask some of the teachers what they think about that. It’s an interesting idea. Thanks.
Please introduce yourself.
Imgrund: Hi. Caitlin Imgrund from the University of Kansas. I was very fascinated by your talk, so thank you so much. I’m wondering about the demographics of your participants; in particular, as you mentioned, the SES [Socioeconomic Status] of the sample was quite a bit higher than what we would expect to see in the general population. And although it’s very amazing that so many of these children were able to move to within normal limits, your typical language emergence group, it did seem to me that, given that your SES was so high, if those children had not had hearing loss we might expect their average to be a little bit higher than the normal distribution.
Imgrund: So, is it possible that even with these children that are moving to within normal limits were still not tapping into perhaps their true language potential? And if you could speak to that, that would be wonderful.
Geers: Yeah, you know, you did notice that this normal language emergence group was getting kind of super-normal scores on some things, up at a hundred and twenties. And that’s probably largely based on their environment, so a kid who is doing ninety and functioning in some of the educational environments these kids are functioning in is operating with a handicap. They still belong in the mainstream setting, and I think those kids will continue to improve.
That’s why I’m so interested in this verbal-performance gap, which is so huge for the persistent language delayed kids, because that tells me how close they are to hitting what we could call their ceiling, and a lot of these kids are there. It’s also important to remember, and it was astonishing, that the first time I saw John present the latest data he has, about a third of his kids were what I would call persistently language delayed. So, even though we have very demographically different groups here, about the same proportion are in this persistently delayed group.
So it’s not just that kids in our sample are particularly advantaged; there’s something about a third of these kids that we have to figure out. Maybe it’s something we’re doing that we have to figure out.
Thank you. [ Pause ]
Moncrieff: Hi, Ann, thank you very much. This is Debbie Moncrieff from the University of Pittsburgh. As you may remember, I focus on normal hearing children and auditory processing, and one of the areas that I specialize in is the asymmetry between the two ears; in the majority of the population, the asymmetry shows up as a left ear deficit.
So we now have electrophysiological evidence that in the weaker ear pathway there is an increased gain in the neural signal, and that that increased gain is possibly leading to the loss of synchronization in speech perception and clarity.
Geers: That’s fascinating.
Moncrieff: I know. I wasn’t going to come up, and then you started to go into, is it us? I don’t think it is you. I’ve seen you present before, you came to Pittsburgh, and we’ve actually presented here together. This subgroup of your population has always intrigued me, and so I’m now doing some work at the DePaul School in Pittsburgh to start to look at this auditory processing phenomenon in children with hearing loss as well.
Yeah. So it could actually be in the wiring. I think it’s genetic; I don’t think it’s acquired, although I have colleagues who think it is. But I think it may be in the pathway, and there may be something inherent that is preventing those speech and language processes from being accessed.
Geers: We’ll have to talk about what kind of task we might use with kids.
Moncrieff: I would love to talk to you about that and Emily.
Geers: OK. Alright, thanks.
And the last question.
I’m Susan Steinman from New York Eye and Ear. Thank you very much; I learned a lot today. You mentioned early speech production as a potential predictor of later performance, and I was just curious about what other potential predictors you were interested in investigating. You mentioned verbal working memory not really being a good one, but what about nonverbal working memory? What else was there?
Geers: Well, the other biggie is auditory, maybe auditory processing, but I don’t know where else to go with that. I mean, OK, their phoneme perception isn’t as good, their thresholds are higher. Does that mean we need to really get after the audiologists with little kids and say, get those thresholds under twenty, do whatever you have to do, and that would make a difference? Or is that just a symptom of an auditory processing problem that’s causing them to have higher thresholds and poor phoneme perception? I don’t know whether it’s a cause or a result.
So that’s the other area that is a big predictor, and I think that maybe we’re getting at early speech perception by looking at speech production. So maybe we can get them both if we get a good measure of production early on.
Thank you. Thanks. Wonderful. Well, please, again, let’s thank both Doctor Ann Geers and Doctor John Niparko. Wonderful session. Thank you so much. [ Applause ]