Thursday, September 8, 2016
Wednesday, June 1, 2016
Heather
I was interested in this article for three reasons. First, after teaching an advanced writing course for many years, I know how difficult it is to teach the passive voice. I find that my students often come in with a smattering of ability to use the passive voice naturally (probably because they've picked it up from natural discourse), but struggle when we focus on it explicitly. My Arabic speakers often produce half-passive, half-active constructions (e.g. "He was read..."), inserting the be verb where it isn't needed. After such explicit focus, I find that they start to insert passive structures into their papers in places that actually call for the active. The second reason this article caught my attention is that I am trying to prepare my students for technical writing, much of which calls for the passive voice, and I was hoping to glean something useful given the article's context of computer science majors. Lastly, the subtitle of the article reads, "The case of the less proficient English learners," which clearly describes my student population.
I found the introduction and literature review fairly interesting and useful. They did a good job of confirming why it is still important to teach and use the passive voice. A colleague who was editing my work once told me that I should never use the passive voice, which I thought was strange since the proper choice of passive vs. active voice simply depends on the context (choice of focus, etc.).
Apparently, in the 1980s there was a move to encourage use of the first person in academic writing in order to "allow for more personal comment, narration and stylistic variation" (p. 2). Fortunately, things have now shifted back.
"Biber and Conrad (2009) found that the use of the passive was particularly evident in research articles, especially so in the methodology sections...They conclude that the advice to avoid the use of the passive, often found in writing guides, is 'misguided' (p. 122). Swales (2006) is able to confirm through a corpus-based analysis of authorial stance across social and material science disciplines that the passive is important in reporting scientific research since it 'enables emphasis to be given to the work rather than the researcher who performs it' (p. 509)" (qtd. in Johnson & Lyddon, 2016, p. 4).
The article also provides a good overview on why exactly the passive voice is so difficult. Some of it stems from the L1. As an example, apparently in Asian languages (about which I know very little), "it is difficult or impossible for an inanimate subject to take on [the] property of agency" (p. 2). The example they gave in the article was: The thermometer measures the temperature, a sentence which could not exist in some languages. They would then want to say, We measure temperature with a thermometer. This latter sentence is one we would want to change to avoid the "we" and thus say Temperature is measured by a thermometer, which is still awkward to them since an inanimate object is still doing something. Another issue is that some verbs that are transitive in English are not in other languages, and vice versa.
Issues with the Article
First, the authors emphasize that currently available grammar textbooks do not do an adequate job of teaching anything beyond the form of the passive construction. I disagree. The textbooks I have used in the past 10 years all cover meaning and use quite extensively: for instance, the rules for using the passive voice when a) the agent is unknown; b) the agent is assumed and need not be mentioned; c) we want to avoid mentioning the agent; or d) we simply want to focus on the object or action rather than the agent. I regularly go through these patterns, along with a lot of practice, and most of this comes right from the books. I am not sure what grammar textbooks are available in Japan, however.
My second and most important concern relates to the study itself--the research question and methodology. Basically, their study aimed to see if a basic three-session module on passives was effective. In describing their "instructional approach," the authors used a lot of flowery academic language to present what is a very basic PPP (present, practice, produce) lesson plan format. Phases 1 and 2 include verbal explanations and presentation of the material using charts and diagrams. Then, Phase 3 is some type of communicative activity, followed by a reflection-type activity/self-assessment. This is how grammar is usually taught. The authors here called it concept-based instruction (CBI), which is something I have never heard of, probably because such a notion is so completely elementary in our field. I am familiar with content-based instruction and task-based instruction. But concept-based instruction in my view seems to mean instruction in general, again harking back to the PPP model. Nothing new here.
So I ask, isn't this simply classroom research, where an instructor wants to see whether students met a certain learning outcome or whether a new teaching method was effective using a pre- and post-test? In this case, it is hardly worth publishing, in my opinion.
Johnson, N. H., & Lyddon, P. A. (2016). Teaching grammatical voice to computer science majors: The case of the less proficient language learners. English for Specific Purposes, 41, 1-11.
Monday, May 16, 2016
One of my reasons for delving into STEM writing in the first place was that an engineering professor at our university expressed frustration that international students were unable to write adequate lab reports. As a starting point, I gave students a very basic format consisting of introduction, experimental procedure, and results. They did two brief activities using this format. The first involved watching a short YouTube video of a man doing an experiment to determine how much sugar is actually in a can of soda pop. The second was a set of lab notes I had written up, based on a fictitious experiment to test the effectiveness of three types of hand sanitizers. The notes included the research question, materials, brief notes on the procedure, and the results, including photos of the bacteria growth in petri dishes. The students were then able to use the information from the notes, along with what they observed in the images, to draw conclusions about the most effective product. Together, these two smaller writing assignments worked well to provide my English language learners a more authentic context for describing a process and discussing the results. They also laid the groundwork for the students to conduct and write about their own hands-on lab.
Planning a lab for English language learners can be challenging due to constraints in facilities, equipment, and technical know-how. Because I needed something for them to do in the classroom, I fell back on an old standby: the absorbency of diapers. From experience, I knew that students always enjoyed the process of extracting the tiny granules from a diaper and watching them expand to absorb a large quantity of dyed water. But this time, I wanted to expand the activity to compare three brands of diapers. In pairs, I had the students take one diaper from each brand—Pampers, Huggies, and Target (generic)—and first record the qualitative data on each, including softness and elasticity, as well as measure the dimensions and cost per diaper. Then, they measured the absorbency by pouring dyed water into each and recording the amount at saturation. This provided an engaging hands-on activity on which to base their paper.
In a multi-draft formal report, students included an introduction on the purpose of this experiment. The second section was a description of each diaper, based on their notes, and the third section described in detail—and using the passive voice, where possible—the procedure used to measure the absorbency. Here is an example from one student:
Once the measurement and observation were taken, each edge of diaper were cut to extract its granules which is found on the padding of the diaper. After extracting the granules, they were poured into a plastic cup. Then, colored water, which is used to give better visibility in absorbency, was added until the material was saturated. Eventually, the capacity of the water that was observed was recorded.
This student was able to effectively utilize the passive voice in describing the process.
Finally, the conclusion of their report gave a recommendation of which brand was the best buy overall.
Benefits and Challenges
Over the course of the semester, my students learned how to effectively describe a process, work that process into a simple lab report, and then move on to a more developed full-length paper. A major benefit is that STEM topics naturally provide good opportunities to use the grammar structures learned in class: sentence combining, transitions, and the passive voice. These topics also seemed to be engaging, since they are academic in nature and students can see the benefit of the writing tasks. The real challenge, on the other hand, was coming up with prompts that are STEM-based but do not require extensive background knowledge or research. Another issue is plagiarism: given the technical nature of these topics, students are much more inclined to take explanations and definitions from the Internet without appropriate citation. Still, using STEM topics and tasks in writing has been worthwhile, even though in some ways it targets lower levels of Bloom’s (1956) taxonomy of critical thinking.
Bloom, B. S. (Ed.), Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives, handbook I: The cognitive domain. New York: David McKay.
Process writing assignments
One of the rhetorical patterns in our textbook, but one which we had not previously used, is the process pattern. We had been focusing mainly on the summary/response and argument essays—again, which we thought would promote more critical thinking. However, I realized that the process essay might be more representative of some of the technical writing our students would be required to do in their STEM classes. For instance, writing lab reports would often require a section or two that describes what happened or how something works. In developing the process writing component, the main challenge was developing STEM-related prompts that did not require too much technical background information. Three of the most successful prompts were to describe a) the water cycle, b) the process of recycling plastics, and c) how a product gets to the consumer.
For the water cycle prompt, students were given a diagram which provided them with the most important vocabulary—precipitation, evaporation, runoff, water table, transpiration, and condensation (see Figure 1). From there, students were able to expand their ideas on each step, giving examples and detail. They also incorporated the transitions and signals that they had learned in earlier courses and reviewed here. Overall, the students did well with it and seemed to enjoy a fresh topic (ecology) to write about.
Figure 1. Sample prompt given.
Next, as an in-class writing assignment, students wrote a multi-paragraph essay describing how plastics are recycled. They based their essays on a three-step flow chart (see Figure 2), providing them with some key points. In this case, the aim was to assess their description skills, not their knowledge of recycling. But the students were able to use the understanding that they did have to build and develop the ideas in the prompt. Although the goal wasn’t to stimulate creative thinking, it was clear from their essays which students applied themselves to really build and develop their ideas and which students simply wrote about the basic points on the flowchart. The manufacturing prompt was of a similar nature and produced similar results in student writing.
Figure 2. Sample prompt given.
Hands-on science activities with short-answer writing
In many cases, I simply wish to give students basic writing practice using STEM-related prompts, rather than craft complete essays. I was drawn to activities from elementary school science class. Initially, I wondered if my students would find this somewhat insulting, as they were projects and topics that I remember from fourth grade. But in reality, they seemed to enjoy this. I wondered if in their country, education was not so hands-on and perhaps they did not engage in such activities.
One activity was to make homemade flashlights. After reading some articles and reviewing diagrams of electric circuits and their components, I distributed to each pair of students an AA battery, a small light bulb, copper wire, paperclips, and some cardboard and tape for mounting. After they all spent some time putting it together, each group had quite a unique-looking flashlight, some of which worked more successfully than others. Then, I had the students write a short paragraph explaining how their flashlight worked. Sample language would include such sentences as:
First, the electricity flows from the battery terminal into the contact on the end of the light bulb. The filament inside the bulb illuminates and emits light. The electricity then flows out again through the copper wire to the negative electrode on the battery. A switch on the wire can either break the circuit or let the electricity continue flowing.
Another activity involved growing bacteria cultures. I had our department order in two sets of petri dishes pre-filled with agar. In groups, the students decided which surfaces to swab for their cultures. Key vocabulary included bacteria, exposure, surface, replicate, incubate, and so on. I took the petri dishes home and left them for a couple of days inside my oven with the light on, which resulted in a nice warm environment for the bacteria to grow. Back in the classroom, they wrote answers to a series of short-answer questions, requiring the use of the target vocabulary.
Tuesday, May 10, 2016
Heather
This is a multi-series post on my ESL writing experiences this past semester.
As educators, we wear many hats. Besides imparting knowledge and skills, it is natural for us to feel the duty to instill values and a sense of responsibility and initiative. In an intensive program, our main focus is to prepare our students for success in their mainstream classes. They need not only academic language skills, but also study skills and intercultural competence. They need to learn how to respect their peers, manage their time, and think critically. And being the noble teachers we are, we try to tackle it all, just as a parent would.
A topic that often resurfaces is critical thinking, and I believe Bloom’s (1956) taxonomy has been the most influential work here. In many a staff meeting, I have heard the argument for more focus on critical thinking. It is true that many of our students come from educational backgrounds where little critical thinking is required of them beyond answering test questions and memorizing facts. Instructors have bemoaned the fact that students struggle to come up with “something new” and “creative” in their essays. We all nod our heads and get out Bloom’s triangle and talk about how to move them up from the basic foundation of recalling knowledge to the tip, which is synthesis and evaluation.
I do agree that helping ESL writers progress through these academic processing stages – knowledge, comprehension, application, analysis, synthesis, evaluation – would be ideal. In order to foster the type of critical thinking involved in some of these higher-level stages, many ESL and English composition instructors adopt standard rhetorical patterns. Some standard models are the argument and summary/response models. For the past five years, I have taught an advanced ESL writing class each semester and have worked to help my students develop their arguments and bring in outside sources as support; however, it is like banging my head against the wall to get them to give any new type of argument I haven’t heard before. When they do come up with something “creative,” it is difficult to comprehend due to the language limitations.
Because of the challenges in expressing new ideas, most teachers and students prefer to stick to the basic ESL topics: education, the environment, and technology, to name a few. We are all familiar with these topics. Our textbooks are built primarily around them. I suppose that many assume liberal arts topics are the right place to plant the seed of critical thinking in our students. But is there really anything new about environment, education, and technology? These are actually safe places for our students to pump out trite sentences like, “Studying abroad is beneficial because it helps students learn a new culture and language. For example, I had a friend who…”
As most of my students plan to matriculate into STEM-related undergraduate programs, I realize that in reality much of what they will be doing, language-wise, relates to the lower levels of Bloom’s taxonomy: knowledge and comprehension. Isn’t that a better place to begin for language learners? They will need to learn critical thinking skills at some point, but perhaps it would be wiser if, as language educators, we focused on the main hat we wear.
Bloom, B. S. (Ed.), Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives, handbook I: The cognitive domain. New York: David McKay.
Monday, May 2, 2016
In their study, Murphy and Kandil (2004) run an analysis on the Academic Word List (Coxhead, 2000) to categorize stress patterns. Their aim, I believe, was to determine which word-stress patterns were the most common, and thus help instructors focus their efforts on teaching those particular patterns. They developed a numeric convention to group words by their common stress patterns: 3-2 means that a word with three syllables has its primary stress on the second syllable (e.g. commitment); 5-3-1 is a word with five syllables, the primary stress on the third syllable, and a secondary stress on the first syllable (e.g. theoretical). The results of their analysis show that the 525 headwords in the AWL fall into 39 distinct patterns, but just 14 of those patterns account for over 90% of the words. Looking at their results table (p. 69), we see that the most common stress pattern is the 3-2 pattern, followed by the 2-2, 4-2, and 2-1 patterns. They conclude that knowing these patterns “should prove useful as a complementary source of information for purposes of curriculum and lesson planning and private study” (p. 73). In addition, they state, “[W]e have found that the numeric conventions for labeling stress patterns illustrated in this report are useful when working with EAP and other ESL learners” (p. 70).
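Murphy and Kandil's numeric convention is mechanical enough to sketch in a few lines of code, which also makes its grouping behavior easy to inspect. The following Python sketch is my own illustration: the syllable and stress counts are hand-coded for a handful of words discussed in this post, not data drawn from their study.

```python
from collections import defaultdict

# Hand-coded (syllable count, primary-stress syllable, secondary-stress syllable)
# for a few words from this post; None means no secondary stress.
STRESS = {
    "commitment":  (3, 2, None),   # com-MIT-ment
    "theoretical": (5, 3, 1),      # the-o-RET-i-cal
    "methodology": (5, 3, 1),
    "electrical":  (4, 2, None),
    "biology":     (4, 2, None),
}

def label(syllables, primary, secondary=None):
    """Build a Murphy-and-Kandil-style label such as '3-2' or '5-3-1'."""
    parts = [syllables, primary] + ([secondary] if secondary else [])
    return "-".join(str(p) for p in parts)

def group_by_pattern(stress_data):
    """Group words under their numeric stress-pattern label."""
    groups = defaultdict(list)
    for word, (count, primary, secondary) in stress_data.items():
        groups[label(count, primary, secondary)].append(word)
    return dict(groups)

groups = group_by_pattern(STRESS)
print(groups)
# {'3-2': ['commitment'], '5-3-1': ['theoretical', 'methodology'],
#  '4-2': ['electrical', 'biology']}
```

Note how the grouping splits electrical from theoretical, even though both stress the syllable just before -ical; that is exactly the shortcoming discussed below.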
As practitioners, rather than researchers, how do we apply findings like this? First of all, let us look more closely at these patterns and at real examples we would use in the classroom. For the 5-3-1 stress pattern, mentioned in the article along with the sample words theoretical and methodology, these two words are grouped together because they each have five syllables and a primary stress on the third syllable. But how about the word electrical? That is a four-syllable word with its primary stress on the second syllable (4-2), and it would thus be placed in a separate category, despite sharing an important attribute with theoretical, a 5-3-1. Likewise, biology, another 4-2 word, lands in a category with electrical rather than with methodology. This wouldn't make a lot of sense to instructors and students alike. And how would teaching a numeric convention along with the word and its meaning be of much help, except to a student who is adept with numbers and wants to memorize a 3-2 along with the word?
Perhaps theory has caught up with practice in the years since this study was published. One of my favorite pronunciation resources is Well Said: Pronunciation for Clear Communication (Grant, 2010). Her chapter on “Using Suffixes to Predict Stress” introduces the idea that word stress is often determined by suffixes; in contrast to Murphy and Kandil’s method of counting syllables from left to right, her rules work from right to left. No matter how many syllables a word has, most common suffixes call for stress on the syllable immediately before the suffix, or occasionally on the second syllable before it (counting back from the suffix). Here are some examples, showing that the numeric convention is less helpful than simply knowing the suffix: -ical, for instance, places the stress on the syllable just before the suffix (eLECtrical, theoRETical, ecoNOMical), regardless of the word’s total length.
Here we can see that helping students notice that, in general, a suffix usually calls for the primary stress just before it may be more helpful than grouping words by total syllable count, the way Murphy and Kandil describe. It should be acknowledged that they did mention in their discussion that suffixes have an impact on stress. Specifically, they write, “[T]eachers may take advantage of the opportunity…to build learner awareness of the impacts of suffixation by also introducing stress patterns of related words within the same lexical family (e.g. eCONomy, ecoNOMical, ecoNOMically, and eCONomist)” (p. 71). But they fail to see that knowing which suffixes call for stress immediately before them and which do not (e.g. -ic/-ical vs. -ist) is probably more important than the fact that words belong to the same lexical family. I certainly agree, however, with the article’s premise that word-stress patterns are a very important aspect of pronunciation, especially in the realms of EAP and academic preparation. Teaching word-stress patterns, especially with suffixes, can be fun and engaging while lending itself well to various academic topics.
Murphy, J., & Kandil, M. (2004). Word-level stress patterns in the Academic Word List. System, 32, 61-74.
Grant, L. (2010). Well Said: Pronunciation for Clear Communication. National Geographic Learning.
In a recent issue of Language Testing, Batty (2015) explores the long-standing question of video-based vs. strictly audio material for testing listening comprehension. Is a test where the speakers can be seen on a screen easier than a test where only the voices are heard? As always, I find the literature review almost more interesting and insightful than the “present study” itself. For one, it saves me the time of researching and reviewing the literature. I also find it interesting to see the author’s take on all of these findings.
In this particular article, after he discourses on the difficulties of defining and measuring the construct of listening, he turns his focus to the role of visual aids and nonverbal communication. He notes that apparently, “70% of the meaning in a social situation is encoded in the visual…channel” (p. 5). Like other researchers I have read, he argues, “Listening rarely occurs ‘in the dark’. For sighted people, in virtually all situations and circumstances, listeners can see those speaking to them” (p. 5). Subsequently, he does admit that telephone conversations and podcasts are exceptions.
This has led me to reflect on the various media that I listen to in my own life, as well as the degree to which I use nonverbal communication to ascertain meaning. I am not sure that I agree with the conclusion that nearly all listening happens face-to-face. In my personal life, I spend a lot of time talking on the phone to family and friends who are far away. Indeed technology has influenced my communication style with the added formats of online chatting, email, and text. But honestly, one of my favorite apps is Voxer, which allows me to send short audio messages (like texts). Thus, a lot of my day-to-day communication with family happens through listening to these short clips of my nieces and nephews telling me what they are playing, etc. (I have found this to be a more effective way to keep in touch than Apple’s Facetime or Skype, where I watch dizzying videos of the room twirling around me as my three-year-old niece dances and temporarily forgets that we are even having a conversation.) In addition to the audio-only phone conversations, I also spend a lot of time listening to news radio, audiobooks, lectures, and podcasts in my car and as I exercise. And, going back to my college days, I remember the experience of countless lectures in huge auditoriums (where I could barely see the professor, let alone watch his body language) where I sat eagerly scribbling notes. As I watch the students in my ESL listening class, they are doing the same thing. Talking head or no, they have their noses in the paper, eagerly trying to answer the questions. Or, even when there are no questions, they are not necessarily focused on the screen.
I suspect that the role of nonverbal communication depends on the type of listening task. I believe that as ESL instructors, we might assume that a video is better than audio only. Academic listening textbooks, such as the Contemporary Topics (Beglar & Murray, 2009) series often contain video recordings of the lectures. However, because the lectures are more based in the liberal arts (e.g. themes of communication, social trends, etc.), there aren’t really any valuable visuals to help listeners understand the material. Really, it is just a talking head. Regardless of whether or not the speaker’s facial expressions and hand gestures truly help comprehension, listeners are not necessarily motivated to pay attention visually. However, for topics that require visual aids (for native and non-native speakers alike), obviously a video would be more effective. Take the lecture videos available on the Khan Academy website. Khan Academy hosts a multitude of resources on academic topics: mathematics, chemistry, accounting, etc. These don’t have a talking head at all, but instead employ video-recorded screen capture techniques. This enables the viewer to see the lecturer writing on a chalkboard, drawing diagrams, and showing pictures. These types of visuals would be more likely to enhance listening comprehension.
Returning to the article at hand, Batty goes on to discuss many studies that have looked specifically at visuals in an assessment context. Naturally, he explains the limitations of all of these studies (e.g. different questions were used, or only classical test theory was applied), but it seems that several of the studies he reports concluded that there was no significant difference in performance between audio-only and video-mediated tests. One that showed the opposite was a study involving a French test with three versions: strictly audio, video, and completely silent (only the video) (Baltova, 1994). Surprisingly, the examinees performed similarly on the video and the silent tests. This leads me to wonder whether having the visual cues and nonverbal communication enhances true listening comprehension or simply helps the listener infer what they didn’t really hear or understand. Perhaps it doesn’t matter in real-world listening. But if the object is to isolate the construct of listening for testing purposes, this might be a consideration. Batty confirms my conclusion by saying the following:
Anyone who has traveled in a foreign country without knowing the local language knows that a great deal of information can be passed with simple hand gestures and a few words, but a test of foreign language listening comprehension is typically concerned with mastery of the language itself, not that of pan-cultural, ad-hoc, gesture-based communication. (p. 17)
I completely agree with Batty here. As a student of Russian as a second language, I remember trying to communicate on the streets of St. Petersburg, and probably 70% of the meaning I was able to take away from what a stranger on the street was saying to me was from his gestures (e.g. in essence telling me, “What are you doing on the streets in -40 degree weather without your face covered with a scarf?!”). However, as one attains more proficiency, we hope that more of the meaning (the literal meaning, at least) comes from the language itself and less from this type of body language.
After Batty reviews this literature, he presents his study. The novel thing about it is the use of many-facet Rasch analysis, which, as we know, is all the rage now in testing research. Still, Batty’s study concludes that there is no real difference in performance between the audio-only and video tests, regardless of proficiency level. He says, “Overall, it seems likely that the divide in the research over whether video has a facilitative or no effect can be more easily explained by differences in test design than anything inherent to the format of delivery” (p. 17).
After digesting this article and others like it, I am left wondering if it is really possible to truly assess listening comprehension. There are so many variables involved—nonverbal gestures or visual aids that may convey the message, reading skills required to differentiate between multiple-choice options, etc.—that it seems nearly impossible to isolate the construct of listening ability. As both an instructor and the Testing Coordinator, I spend a lot of time looking at test scores. What is the difference between a student who got 86% correct and a student who got 80% correct? Why did a particular student get a question wrong? Was it proficiency, or was it that student’s interaction with the test item? Is there really such a thing as a perfect test question? I feel that the tests we give in English programs can give us only a ballpark estimate of student ability, which perhaps is sufficient for identifying low versus high performers. However, because these are real human beings, even sophisticated measures such as many-facet Rasch analysis can only tell us so much.
Batty, A. (2015). A comparison of video- and audio-mediated listening tests with many-facet Rasch modeling and differential distractor functioning. Language Testing, 32(1), 3–20.
Beglar, D., & Murray, N. (2009). Contemporary Topics 3: Academic Listening and Note-Taking Skills (3rd ed.). Pearson Longman.
Note: This was originally published on one of my earlier blogs (March, 2013)
I thoroughly enjoyed studying language testing in graduate school, entering test scores and playing with the spreadsheets. As Testing Coordinator, I have begun in recent semesters to run item analyses on our listening tests in order to identify areas needing improvement.
I have mainly been checking item difficulty, item discrimination, and distractor performance, entering the data and comments onto a draft of the exam. This makes it easier for teachers to interpret the data and know how to revise the problematic items or distractors. See a sample analysis of one of my listening tests, as well as the actual test with comments.
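For readers curious what these statistics actually compute, here is a minimal Python sketch of the two most common ones: item difficulty (the proportion of examinees answering correctly) and item discrimination (here, the point-biserial correlation between an item and the total score). The response matrix is invented for illustration; this is a generic sketch, not my actual spreadsheet workflow.

```python
# Minimal item-analysis sketch: item difficulty (p-value) and
# item discrimination (point-biserial correlation with total score).

def item_difficulty(responses, item):
    """Proportion of examinees answering the item correctly."""
    return sum(r[item] for r in responses) / len(responses)

def point_biserial(responses, item):
    """Correlation between an item score (0/1) and the total test score.

    Note: the item is included in the total here, which slightly inflates
    the correlation; fine for a quick classroom analysis.
    """
    n = len(responses)
    totals = [sum(r) for r in responses]
    items = [r[item] for r in responses]
    mean_t = sum(totals) / n
    mean_i = sum(items) / n
    cov = sum((t - mean_t) * (i - mean_i) for t, i in zip(totals, items)) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    var_i = sum((i - mean_i) ** 2 for i in items) / n
    if var_t == 0 or var_i == 0:
        return 0.0  # a constant column has no meaningful correlation
    return cov / (var_t ** 0.5 * var_i ** 0.5)

# Rows = examinees, columns = items (1 = correct, 0 = incorrect); made-up data.
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 1],
]

for j in range(4):
    print(f"Item {j + 1}: difficulty={item_difficulty(data, j):.2f}, "
          f"discrimination={point_biserial(data, j):.2f}")
```

In this toy data set, item 4 is answered correctly only by the lowest scorer, so its discrimination comes out negative, which is exactly the kind of problematic item such an analysis is meant to flag for revision.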