From the Desk of Ann Kummer
If you are a pediatric speech-language pathologist or if you are a parent or a grandparent of an infant or toddler, you will be particularly interested in this 20Q article entitled Language Development and Its Clinical Applications.
As I write this introduction, I am a fascinated observer of the language development of my 10-month-old and 19-month-old granddaughters. I think we will all agree that language development in a typically developing child is both a miracle and a mystery! Fortunately, Dr. Nina Capone Singleton and Dr. Brian Shulman will help to unravel part of this mystery for us. In this article, they give us a better understanding of how children put it all together as they learn to talk. They provide an overview of what influences language development, with a particular emphasis on pragmatics and semantics.
Drs. Capone Singleton and Shulman are experts in language development. They recently published the 3rd edition of their textbook entitled Language Development: Foundations, Processes, and Clinical Applications, published by Jones & Bartlett Learning.
The following is a little more about these remarkable authors:
Nina Capone Singleton, PhD, CCC-SLP is an Associate Professor in the Department of Speech-Language Pathology, an Adjunct Associate Professor in the Hackensack Meridian School of Medicine, and the Director of the Developmental Language and Cognition Lab, all at Seton Hall University. She has held clinical positions at the Children’s Seashore House-The Children’s Hospital of Philadelphia, Children’s Memorial Hospital (Chicago), Bright Futures Early Intervention Clinic (Evanston, Illinois), and the Westchester Institute for Human Development (Valhalla, New York). Dr. Capone Singleton has published many papers and has served as Associate Editor of the Journal of Speech, Language, and Hearing Research (2010-2013). She is currently Associate Editor of Folia Phoniatrica et Logopaedica. Dr. Capone Singleton has presented at both national and international conferences.
Brian B. Shulman, PhD, CCC-SLP is currently Dean of the School of Health and Medical Sciences (SHMS) and Professor in the Department of Speech-Language Pathology at Seton Hall University. He has served in a number of leadership positions within the American Speech-Language-Hearing Association (ASHA), including Chair of the Board of Division Coordinators (BDC) and Vice President for Speech-Language Pathology Practice. He is a Fellow of ASHA, the Association of Schools of Allied Health Professions (ASAHP), and the National Academies of Practice (NAP). He recently served as Editor of Pediatric Speech and Language: Perspectives on Interprofessional Practice, an issue of Pediatric Clinics of North America. Dr. Shulman has given numerous invited presentations to professional groups at state, national, and international conferences.
These authors are so knowledgeable and passionate about this topic. We are so fortunate that they are sharing their passion with us!
Now...read on, learn and enjoy!
Ann W. Kummer, PhD, CCC-SLP, FASHA, 2017 ASHA Honors
20Q: Language Development and Its Clinical Applications
After this course, readers will be able to:
- Describe the purpose of developmental milestones in their clinical practice as speech-language pathologists
- Identify the factors that contribute to richer versus disordered language development
- Identify a range of early pragmatic and semantic behaviors within the domain of language
1. How is the topic of typical language development relevant to working with children who exhibit a communication disorder?
Understanding language development provides speech-language pathologists with a set of expectations to guide their observations. Whether screening or evaluating a child, the speech-language pathologist needs to think about where that child might fall along the developmental continuum. Likewise, throughout intervention, a developmental framework ensures that any session is pitched at an appropriate level. When a child is having difficulty, a task may be at too high or too low a level for that particular child. A developmentally appropriate framework helps us adjust our goals and scaffolds to accommodate the child's learning needs.
The developmental approach is supported by the American Speech-Language-Hearing Association (ASHA). For example, the Core Knowledge and Skills in Early Intervention document states that the speech-language pathologist working with infants and toddlers needs “knowledge of typical development from birth to age 3 years, across domains” and “the course of communication, hearing, speech, language, emergent literacy...development and their relationship to and impact on overall development.” (ASHA, 2008, p. 2). Knowledge of typical development is invaluable during parent education. We can provide initial feedback and realistic expectations that are grounded in the understanding of typical development before scoring formal tests and analyzing language samples.
2. I was surprised to see you talk about the World Health Organization in the new 3rd Edition of Capone Singleton and Shulman text! What prompted the addition of policy to the text?
Brian and I want speech-language pathology students to feel a sense of belonging to our organization early in their careers. We have always included reference to the American Speech-Language-Hearing Association (ASHA). We also wanted to engender the understanding that thoughtful policymakers and research guide our practice patterns. We believe it is important to emphasize, as early as possible, that speech-language pathologists are a community of professionals. We are bound together by the ethics and practice policies of ASHA, and this engenders a sense of responsibility to our patients and their families from the outset. In fact, much of ASHA's practice policy is guided by the World Health Organization (WHO, 2001), and the speech-language pathologist guides his/her professional practice by ASHA's practice guidelines. This is particularly the case when it comes to following the WHO's International Classification of Functioning, Disability and Health (ICF).
3. In brief, what is the International Classification of Functioning, Disability and Health (ICF) set up by the WHO as it relates to us?
The WHO's International Classification of Functioning, Disability and Health (ICF) provides the framework for the ASHA Scope of Practice (2016). Speech-language pathologists, like other health professionals, determine whether impairments in the child's physical structures (e.g., cleft palate) or body functions (e.g., poor working memory, small vocabulary) limit the child from executing activities in natural environments such as the classroom or the home. In early intervention, for example, most states require that we assess and treat the child within the natural environment, most often the home or a daycare setting. We must determine how limitations of body structure or function impact participation in life activities. For example, how does a limitation impact participation in communication at dinner with the family? Equally influential is the environment's impact on the child's ability to achieve and participate. For example, a school-age child with Attention Deficit Disorder (ADD) may experience minimal to no functional impact on academic success in a classroom where flexible seating and self-regulated breaks are naturally part and parcel of the teacher's program for all children, whereas the next academic year, a classroom culture that requires sitting at a specific desk with no breaks may render the same child disabled.
4. What new evidence is there that environment plays a role in language development?
The work of Deborah Hwa-Froelich with internationally adopted children has captured our attention in this regard. Children can acquire their adoptive language within two to three years of adoption. However, outcomes for internationally adopted children vary depending on pre-adoptive care. Pre-adoptive care variables include nurturing environments (e.g., social interactions), nutrition, and medical care. For example, although children who were adopted can exhibit a typical course of development relatively quickly, if a child was living in an orphanage that was not well supported economically, was cared for with a high child-to-caregiver ratio, or received poor nutrition and medical care, then the child's outcome would likely be poorer. Institutional care such as this parallels the development of other children whose environments negatively impact their language development, namely children who are abused and/or neglected (Hwa-Froelich, 2012). One thought is that these children do not receive rich linguistic input, nor do they experience the varied, rich activities that children from better-supported pre-adoptive environments do. The effect is not only on foundational, pre-adoptive language skills, but also on attention and other behaviors that could support continued language development once in post-adoptive homes.
5. On evaluations, clinicians take a detailed case history which includes the question “Is there a family history of language impairment or learning disorders?”. Why do we still ask this question? Is there reason to believe that genetics still plays a role in whether a child will have language impairment?
Yes, language disorder is often heritable. Language Disorder is the newly agreed-upon term according to the DSM-5 (2013). So, this question on our case history is still very relevant. A positive family history, along with parental concern, are two risk factors telling us that the child is at risk. Dorothy Bishop and colleagues (2012) found that a parent's poor performance in repeating nonwords like 'teivack' when their child was 20 months of age was associated with the child having a language disorder. The innate ability to process nonwords for repetition may be heritable. It is also possible that a parent's ability to repeat nonwords may be indicative of the parent's own weak vocabulary, because nonword repetition and vocabulary size are related. So, this could be a case of genetics and an impoverished language environment jointly influencing the child's outcome (Capone Singleton, 2018). However, there is a line of research indicating that genetic endowment plays a significant role in a child's language disorder. Some genes that have been implicated in speech and language disorders are CNTNAP2, FOXP2, KIAA0319, ATP2C2, and CMIP. The FOXP2 gene is probably most familiar as the gene associated with motor speech impairment, or apraxia. The CNTNAP2 gene is associated with the specific language impairment component of disorders such as autism and language disorder. The KIAA0319 gene influences neuronal development, which can involve spoken language areas. The ATP2C2 and CMIP genes are associated with performance on nonword repetition tasks.
6. What is this new area of genetics called epigenetics, and how does it relate to communication disorders?
Epigenetics is the study of how experience influences genetics. That definition is simplistic, however. Epigenetics delves more into the types of experience, the extent to which an experience is repeated, and the outcome on genetic expression. Positive or negative experience can form a chemical trace on a gene and affect its expression. The details of genetic expression are of course beyond our conversation, but knowing that experience can alter genetic expression can drive our field. Think about caregiver coaching and interventions for young children in early intervention! If our model for parent coaching isn't the perfect vehicle for repeated positive exposure for change, I don't know what is! We really want to promote caregiver interventions that engage the infant and toddler in social connectedness and engagement. It is the social engagement between caregiver and child that forms the basis for all learning of language form and content. By this, we mean words and grammar.
7. In many cases, parents can observe when a school-age child is having difficulty because they are failing academic tests or a teacher reports behavioral challenges in the classroom. How is this different for the speech-language pathologist working with infants and young toddlers?
Development changes much more quickly during infancy and toddlerhood than in the school-age years. By that we don't mean to say that school-age children are not developing – far from it. The school-age child is making incremental changes every day that are more likely to be picked up over time. For example, you will see normative data presented by age in years for the school-age child. In contrast, developmental changes are broken down by months and days in infancy. Caregivers can see a child change over the course of a few days. A caregiver might notice the child make a jump from pointing to indicate a desired toy to labeling it for the first time within the same week! Also, there are multiple domains of development that are interrelated. Language and play milestones are tightly linked in early development. We know that exploring objects at the mid-point of the first year is contingent on motor development – is the child able to stabilize his or her body to sit? Sitting allows the child to hold an object and explore it with the hands. Exploring an object provides the caregiver an opportunity to socially interact with the infant about that object. Social interaction provides language input.
As a consequence, when the speech-language pathologist observes a young child, s/he observes not only language but play and motor development, as well as social interactions between the child and caregiver. Knowledge of developmental milestones in the first 5 years of life provides expectations to guide assessment and intervention. While there are formal assessment measures for the birth-to-three population, knowledge of developmental milestones also equips the speech-language pathologist to talk with parents and other professionals about what is appropriate for a child (e.g., play, expectations for behavior and language). The speech-language pathologist may not always be able to use a formal, standardized measure with a child and may need to rely on formal analyses within non-standardized procedures. Knowledge of developmental milestones enables the clinician to be facile with his/her observations in these circumstances (e.g., analyzing behavioral observations).
8. Are there resources available for the speech-language pathologist and caregiver to track children’s developmental milestones?
There has been a push for early identification and early intervention in the last decade. This is another reason we focused on policy in the first chapter of the 3rd edition of the Capone Singleton and Shulman text. With a broad net cast for early identification and early intervention of infants and toddlers, many good resources have emerged. For example, ASHA has led the charge with its Identify the Signs campaign, which describes what to look for when a child shows signs of difficulty. The Centers for Disease Control and Prevention (CDC) takes a proactive approach with its Learn the Signs. Act Early. website, and the American Academy of Pediatrics takes a similar approach with its HealthyChildren.org site (https://www.healthychildren.org/english/ages-stages/pages/default.aspx). Both websites describe typical milestones. The CDC also shows images and short video clips illustrating each milestone from birth to 5 years of age. One of our favorite clips is of a child having a tantrum, illustrating for parents that children use these types of behaviors in typical development when they have not yet mastered the language to regulate their own or others' behavior.
- ASHA (https://identifythesigns.org/communicating-with-baby-toolkit/?utm_source=asha&utm_medium=email&utm_campaign=pr082218)
- Centers for Disease Control and Prevention (CDC: https://www.cdc.gov/ncbddd/actearly/milestones/milestones-in-action.html)
- The CDC milestones in action website includes a booklet that caregivers can download (https://www.cdc.gov/ncbddd/actearly/pdf/parents_pdfs/milestonemomentseng508.pdf)
- Albert Einstein College of Medicine has several development videos on its website covering a variety of domains, including social engagement, communicative intent, speech-language, and motor milestones (http://www.einstein.yu.edu/video/?VID=435&categoryID=985&ts=features&tsp=related#top)
9. Sometimes when looking at a clinical report, I see the phrase “language and pragmatics” as if pragmatics is not part of language. Do you find that some professionals are unsure of where pragmatic skills fall within language development?
A spotlight has, once again, been placed on pragmatics in our field. In the era of Noam Chomsky, the fields of psychology, psycholinguistics, and speech-language pathology took a sharp turn toward morpho-syntax, but in the last decade or more, the importance of how a child uses language has come to the forefront. Autism Spectrum Disorder (ASD) has risen in incidence/prevalence, and categories of pragmatic language impairment have emerged for children who do not fully fit the criteria for autism. In fact, the DSM-5 (2013) now includes the categorical disorder of Social (Pragmatic) Communication Disorder.
It is understandable that professionals find pragmatics less clearly identifiable as "language." Everyone wants to track something concrete that they can write down and tally – like Mean Length of Utterance (MLU). The behaviors that fall within the domain of pragmatics may seem less concrete than those of words and meaning (semantics), or of morphology and syntax, which can readily be written down or pictured. Pragmatics is absolutely one of the domains of language, spanning both receptive (e.g., reading someone's body language, theory of mind) and expressive (e.g., gesturing, proxemics, discourse) modalities. The pragmatic domain of language may sometimes feel less concrete, or may include varying behaviors for similar constructs. For example, intentions, which are pragmatic functions, can be expressed via gestures or words, the latter falling under semantics for meaning.
Pragmatic behaviors range widely, from nonverbal social behaviors such as making eye contact while speaking, to using words to express a variety of intentions, to engaging in a spoken or written genre of exposition to share how an experiment was conducted and then discuss the results. Once a clinician becomes facile with the behaviors, they are trackable. For example, I (i.e., Capone Singleton) teach an undergraduate course in language development. When the unit on pragmatics comes along, I assign a scavenger hunt for pragmatic behaviors (e.g., making eye contact to gain attention, whining to manipulate, shaking the head "no," demonstrating theory of mind, retelling a story).
10. How does Theory of Mind fit within our practice of language? You say it falls under pragmatics.
Theory of mind (ToM) refers to a child's ability to recognize another person's mental state, such as their experience of emotions or their knowledge (e.g., Miller, 2006). For example, once when teaching preschoolers, a young boy came to me and said: "Nina, he won't give me my sword." In this utterance, we have two important examples of ToM – a violation and a well-understood reference. In the first, a violation of ToM, my young charge used the pronoun "he" without having given me a previous reference to who "he" was. Alternatively, you as the reader may have assumed the young boy was pointing to the child he meant, but in either case, "he" must be accompanied by a referent. The second ToM assumption is "sword" – both my young charge and I know the sword is a toy. The outcome would have been very different on my part had I not inferred the knowledge of the sword as a toy.
Having ToM allows us to predict behavior, participate in conversation and other discourse (e.g., text comprehension) and to make inferences. Skills like joint attention and the recognition that others have intention when they use words are precursors to more mature theories of mind (e.g., Miller, 2006). Indeed, ToM falls well within the domain of pragmatic language.
11. Is there a developmental trajectory to Theory of Mind or is it categorical – born with it or not?
ToM is definitely not a categorical construct. For example, two types of ToM are cognitive theory of mind, which involves beliefs, intents, and pretending, and affective theory of mind, which allows children to recognize emotions in themselves and others and respond to others' feelings. Like other aspects of development, ToM develops over time and emerges from the foundation of other skills. For example, Carol Westby writes about the importance of infant engagement and social interaction in the emergence of ToM; later, higher-level social interactions are not effective without ToM. What this means is that infant-caregiver interactions form the basis of ToM. The initial infant-caregiver level of ToM is referred to as affective ToM. In preschool, emotion vocabulary helps children put language to feelings, facial expressions, and body gestures. As children get older, they come to understand more cognitive attributes such as beliefs, intents, and knowledge. They have to understand when these are the same as or different from their own to be effective communicators. Their narratives or conversations can then be accurate in pronoun use and in giving just enough information. This trajectory, however, starts in infancy: engaging in joint attention with caregivers allows language to emerge, and ToM emerges from there. Infant engagement is the key.
12. At what point in development does language comprehension come on line?
We should think about the development of language comprehension like any aspect of development, with some unique aspects. First, language comprehension starts to develop in utero! Speech-language pathologists should remember that hearing is essential for all language development, and hearing develops in utero; awareness of sound and a preference for caregivers' voices are apparent before as well as right after birth. Again, our case history is a key component to understanding the earliest development of a child. Once the infant is born, comprehension continues to develop incrementally over time, and across the 5 domains – phonologically (understanding the difference between /hæt/ and /kæt/), semantically (pointing to a pictured ball when the caregiver says "a ball!" while picture-book sharing), pragmatically (following a caregiver's point with an eye-gaze), morphologically (learning that "gorping," a novel word, refers to the action happening because -ing is a present progressive morpheme that attaches only to verbs), and syntactically ("Before you put your name on the paper, make sure you have 2 erasers and a pencil out").
Early in development, children use many nonlinguistic cues and contextual scaffolds to help them bootstrap into the language they hear. For example, oftentimes babies are already in the process of an action as parents are providing the language of direction. A 10-month-old may be handing over a ball to the caregiver at the same time the caregiver says "let's play ball" or "give me the ball" in a scaffolded attempt to teach the child how to play ball. Over repeated exposures, children learn what the linguistic message specifically means. Another common example, used across development and contexts by the adults in a child's life, including teachers, is gesture. Gesture is one of the most frequent visual cues professionals use to help children understand the language they hear.
13. Will receptive language always exceed expressive language?
For the most part, this seems to be accepted as true for children who follow the typical course of development. So, for example, children understand approximately 50 words at 10 months of age, yet they are only speaking 1 to 5 words between 10 and 12 months of age. The exceptions to receptive language exceeding expressive language tend to be cases where children have learned "chunks" of language as holistic phrases. In this case, the child does not have productive control over the individual components in the phrase, such as the words' morphological or semantic meaning. As another example, children sometimes echo our words in response to a query. Children with ASD and children with language disorder use echo responses. When children echo, they may be attempting to process the meaning of the language they are hearing, or they may be trying to participate in the interaction.
A great deal of conversation, however, has erupted over the past 2 decades regarding the experimental methods that yield the asymmetry between receptive and expressive language. If we ask a child to name all the different vehicles he knows, or to name a picture (e.g., of a car), versus point to the car when presented with four pictures (e.g., car, truck, sled, bus), we are tapping his knowledge in very different ways. The four-picture array is highly scaffolded and helps the child demonstrate his comprehension (i.e., recognition) of car. If the child does not name car via the first task, which requires rapid lexical access, can it really be said that he does not have that item in his expressive language? What if the same child still does not name the picture of the car? What if he names it when I say /k/?
The point here is that instead of comprehension versus production, clinicians might think about language as having representational strength or richness. Language is represented with varying richness depending on the child's experience and the associations made within those experiences. In many ways, comprehension and production dichotomies really emerge from task demands and the amount of scaffolding within the tasks, and of course the strengths and needs of children. For example, a 10-month-old may not have a mature enough motor speech system to articulate a word, but their language system is capable of demonstrating a word representation in recognition format. Therefore, we say comprehension precedes expression. In terms of clinical application, we want to engage in therapeutic activities that enrich knowledge of language. The richer the knowledge, the more likely a child will express that knowledge. At first, while knowledge is weak, the child may need more scaffolding to demonstrate what they know.
14. I sometimes hear speech-language pathologists say that if we allow children to point a lot they won’t learn to use their words. Is there any truth to the thought that gesturing hinders speaking?
There are no data to support the notion that gesturing by the child or by the adult will hinder the child from learning to speak. Quite the opposite, actually! Gesture reflects what children know. First, let's define gesture, which is different from a signed language. Gesture is defined as movements of the hands, body, or face. Some gestures convey information. For example, pointing to something in the immediate environment directs attention to what you are speaking about, and holding your left hand palm up and your right hand palm down, a few inches apart, might indicate the thickness of something not in sight.
There is a vast literature dedicated to gesture development, gesture communication, gesture use in communication disorders, and so on. This literature includes journals dedicated just to gesture, such as the journal Gesture and the Journal of Nonverbal Behavior. Across many studies, researchers in psychology, speech-language pathology, linguistics, and education have found that gesture promotes receptive and expressive language development. Studies by Acredolo and Goodwyn found that when parents use iconic gestures, babies use more iconic gestures, and babies' words emerge sooner. Further, into the toddler years, these children's expressive language was stronger than that of children in control groups. Acredolo and Goodwyn are the developmental psychologists who made Baby Signs popular. Later, I (i.e., Capone Singleton) and others began manipulating iconic gestures and pointing gestures experimentally. We became interested in a variety of word classes and in whether children with and without language impairments could benefit from gesture.
As an example, a fairly recent study by Dimitrova, Ozcaliskan, and Adamson (2016) showed that toddlers with Down syndrome, toddlers with autism, and toddlers without these diagnoses all gestured before using words. Parents of the three toddler groups translated their children's gestures into words. In other words, parents paid attention to their child's communication, interpreted the gesture as a stimulus for word modeling, and modeled the word. In turn, children with Down syndrome, children with autism, and children without those diagnoses all acquired those words before other words that they did not have in their gesture repertoires.
15. So, can assessing a child’s gesture repertoire actually help the clinician better understand where the child is in development?
Yes, being a keen observer of a child's gestural communication helps the clinician discern the child's current stage of communication development and the next step in language to which the child should be ready to progress. Also, reading a child's gestures can help the speech-language pathologist understand what the child may know, because the gesture reflects what knowledge is represented in the child's memory. If a child is gesturing, then the caregiver and speech-language pathologist can model the language for what the child is gesturing – mirror the gesture and add the language.
There are predictable gesture milestones for the speech-language pathologist and caregiver to look out for. First is the prelinguistic gesture sequence – SHOWing objects to the caregiver (e.g., holding up a toy), GIVE-ing an object to the caregiver (e.g., handing over the car to the caregiver and then quickly expecting it back), and POINTing (e.g., to the bottle they want on the shelf). It is important to know that infants begin to SHOW somewhere between 8 and 10 months of age. About 2 or so weeks later, the infant will begin GIVE gestures, and POINT will emerge shortly after that but before first words. In fact, pointing precedes and predicts first words in both spoken language and signed language.
When we observe a child for whom there are language concerns, and that child is older than 12 months, we first observe for gesture communication because it tells the speech-language pathologist whether the child is ready for intentional communication. The intentional communication stage begins at 8 to 10 months with these prelinguistic gestures. Second, we observe specifically for pointing because we know that pointing is the precursor to first words. If a child does not have those gestures, then we will be starting intervention with prelinguistic expectations. Of course, caregivers and speech-language pathologists provide spoken language with gestures, but our expectations are at a prelinguistic, intentional stage. The same is true of the word combinations stage. If the speech-language pathologist is encouraging word combinations, then he or she should be sure to pair pointing and words. Infants predictably point with words before they combine words: a word + POINT combination emerges by 16 months of age, predictably before the word combinations stage expected by 24 months of age.
16. What other gestures might caregivers be looking to see?
Once an infant is at the one-word stage, s/he will also be producing social gestures like waving hi or bye and shaking the head "no." Infants and toddlers can also be using some iconic gestures. Iconic gestures have also been referred to as baby signs or representational gestures. In fact, Acredolo and Goodwyn developed their Baby Signs from iconic gestures that babies actually produced! Iconic gestures mimic the shape or function of objects and actions. They can also follow the path of location words. For example, I (i.e., Capone Singleton) once had a late talker put her index finger to her lips to request bubbles. In this example, the index finger represents the shape of the bubble wand, and the motion of putting the index finger to her lips mimicked the action of blowing bubbles. This information was clinically useful to me as a speech-language pathologist. Her gesture indicated to me that she was using at least single gestures akin to single words to request, making eye contact to request, and had represented the object and its function in memory (the semantic meaning). This was a terrific opportunity to model the word "bubbles" in combination with a mirror of her gesture (i.e., index finger to lips). A study by Karla McGregor and her colleagues trained the location word "under" by comparing an iconic gesture showing a path of movement from over to under with a picture of an object placed under. The gesture was more effective in helping toddlers generalize the location word to other contexts (McGregor, Rohlfing, Bean, & Marschner, 2009).
17. Should parents be concerned if their child is not talking by 12 months?
A question like this has many approaches. Both Brian and I have been asked questions like this by family, friends, students, and acquaintances over the many years we have been in the field. The first answer we give is that any time parents are concerned about their child, that is a good indication that they should seek the advice of, and at least a screening by, a certified and licensed speech-language pathologist. First words are not the only milestone to consider; there are other factors. Whether hearing is intact, as well as previous speech, play, and motor milestones, must also be considered. Only once all else appears within expectations and the infant is hearing are we unconcerned about the first word at such a discrete point in time as exactly 12 months. Developmental milestones are met within a range of time. Generally, most children will utter their first word between 10 and 14 months, with 12 months being an average. If a child is learning more than one language, then that word may be in any of the languages she or he is learning. Also, the word may not sound like an adult form, but it should be recognizable to some extent, such as “duh” for “duck.” As a child marches past their second birthday, we become more concerned that the child may be exhibiting late language emergence, previously referred to as late talking. A child is referred to as a late talker if they are 2 years of age and have fewer than 50 words or no word combinations.
18. Why do very young children (toddlers, really) sometimes forget words or stop using words they have?
It is so interesting that we view word retrieval errors by toddlers as distinctly different from those of older children. That makes some sense: toddlers are still learning to talk for the first time, while older children, say school-age children, seem to have mastered the talking part. However, each new word is still new, whether we are talking about a toddler or a school-age child. Katherine Nelson observed that early on, children have “stops, starts and regressions” with the very early words added to the lexicon; she was talking about the first 10 to 20 words. Lisa Gershkoff-Stowe showed us in her research that there is some influence of being a new speaker and some influence of the word itself being new in the lexicon. Each of these factors, newness at retrieving words and newness of the word itself, explains why toddlers may not use a word they once did (Gershkoff-Stowe, 1997).
Gershkoff-Stowe also showed us that toddlers go through a period of increased word retrieval errors that follows a curvilinear trend. This happens around the word spurt, when the child has between 50 and 150 words. The instability appears to be due in part to weak semantic knowledge: not knowing very much about all the new words toddlers are learning. An example is saying “cat” in the context of a sheep, or, as my son recently did, saying “pliers” when trying to refer to tongs. These errors show the same sources of error as in adults and older children. However, a second reason for the instability is that the word retrieval process itself is immature (Gershkoff-Stowe, 2002). Toddlers may perseverate, retrieving and saying the same word they had just said a few moments ago even when the context does not call for that word. Once toddlers become well practiced and stability in the system is achieved, word retrieval errors decline to a more tolerable level.
19. Your son’s error of saying “pliers” for tongs makes me think of how errors are related to the objects children are trying to name. I remember in graduate school the example was that children call the moon a “ball” because they are both round. Why is this important?
Right, there are systematic ways that naming errors are related to the words children are trying to say. Very often the two are similar in shape: the moon (the target) and a ball (the error the child says) are both round. This is grounded in a computational bias children have called the shape bias (Landau, Smith, & Jones, 1988). The shape bias refers to how children extend words they already know. In the ball-moon example, children know ball, and they have computed from thousands of experiences that things that share similar properties, namely shape, tend to also share the same name. So, if it’s round, it’s probably a ball. There is also a function bias, as when a child calls a rake “broom” because the two objects involve the same functional movements (they also happen to share some shape features). The importance of these relationships is that children are smart! From infancy they are calculating systematic features and associations, in addition to some language associations. Babies and toddlers create assumptions, or biases, that help them navigate new instances of language more easily. This makes word learning much less laborious: we do not need to teach each instance because, beyond the first instance, the other exemplars can be extended.
20. Do we still think of children with a language impairment as having word retrieval problems, or is the new way of thinking that they are not learning the words well to begin with?
The DSM-5 (2013) refers to children who have impairments in language domains only, with no concomitant disorders (e.g., autism, deafness, Down syndrome, intellectual disability), as having a language disorder. Children with language disorder have the same sources of error and the same patterns of word retrieval and naming as their same-age peers without language disorder. For example, children with and without language disorder produce a variety of error types: semantic errors (e.g., towel / blanket), phonological errors (e.g., constructions / instructions), mixed semantic-phonological errors (e.g., escalator / elevator), indeterminate errors (e.g., “I don’t know,” “thingie”), and visual misperception errors (e.g., ball / moon). The main difference is that children with language disorder make many more errors than their unaffected peers. It is important to remember that, of all the error types, both groups of children (with and without language disorder) make semantic and indeterminate errors most often. This indicates to us that our vocabulary sessions should be spent enriching the meanings of words, not just repeating words from a list. All the time we spend pointing out and describing the details of objects and events is time well spent! When we describe objects and events, we enrich the semantic meaning. Children need enriched semantic meanings in memory to retrieve the words they need to talk about those events and objects later.