from 13 Dec 2013

Audience
  1. Audience kept asking "what should WE do?" - emphasis on WE, the people in the room, not just a generic "what should we do."
  2. The interaction between audience and speakers did the latter proud - you could tell folks walked away impressed by what they saw. A good result.
Noteworthy on the Communication Front
  1. Confident speakers - sometimes even above and beyond the material at hand. Excellent ability to make the most of the available material.
  2. Poise in front of the room pretty remarkable.
  3. Nice handling of questions.
  4. As well presented as these talks were, all were more or less ad-libbing (some from notes). It occurred to me that we should ALSO teach the art of writing out one's remarks and delivering from a prepared text (how to do that without reading, with some improv, etc.).
  5. Lots of really excellent visual metaphors.
  6. Good handling of PowerPoint and Prezi for the most part.
Data Collection
  1. Several examples of genuine enterprise in tracking down material and background info.
  2. Folks seemed very much "at home" with online data collection. Perhaps we are now in a position to "up the game" by including some training in the pros and cons of online samples and in techniques for maximizing their utility?
  3. Multi-barrel questions. Since so many of our students end up doing online surveys or face-to-face interviews that are quite survey-like, they may need to be better schooled in the (well-developed) art of effective item construction. The world is getting more and more full of surveys and questionnaires, and most are really bad methodologically. We tend to treat the survey artifact as producing validity and "objectivity" while ignoring sampling issues and instrument confounds. It's kind of ironic that we are so up on certain kinds of researcher bias (e.g., can people from group X interview people from group Y?), but sometimes myopic about well-established things like how question wording can affect responses.
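     A quick illustration of the multi-barrel problem (invented example): "Do you feel your neighborhood is safe and well-served by the city?" A "no" could mean unsafe, under-served, or both - the item bundles two judgments into one response, so answers can't be interpreted. The fix is to split it into two single-barrel items.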
  4. Interviewing and the tiny sample problem. Can we do a little deep dive on why folks have such trouble interviewing more than 5 people? Are there things in how we are teaching them to do it that make the task so onerous that N typically has one digit? Part of the issue is that they are perhaps learning too little about the world because there's just not enough variation. Another is that it can look lame. And another is that we sanction the idea that tiny samples are OK. From my observations over the years, the contributing factors are
    1. delays due to getting HSR approval;
    2. epic fail on the "how to find subjects" front (and, probably, a lack of prior appreciation of how hard this is);
    3. lack of a clear tradition of compensating research subjects;
    4. Relatedly, we probably don't do a good job of teaching students the art of approaching subjects, how to ask for interviews, etc. I suspect we leave them too much on their own at this crucial point;
    5. Lots of confusion about whether it is OK to do pilot and/or informational interviews up front to get a feel for the process and to inform the planning for interviews.
    6. Transcription as bottleneck. We may have folks over-relying on audio recording and transcription while we are not teaching the art of taking good notes, writing up interviews immediately (like field notes), and using recordings strategically (not everyone is doing micro conversation-analytic ethnography, but they seem to feel they need to transcribe as if they were). In any case, if we are going to have folks doing this sort of work, we should do better at teaching them how to do it efficiently.
    7. Overly ambitious plans (see 2) and the challenge of fitting interviews into an already crammed schedule. Maybe do some more hands-on training, even a "phone-banking" style workshop where we practice finding subjects, interviewing over the phone, etc. So much of this stuff is "hours-of-flying-time" only - you just can't get good at it until you have spent a reasonable amount of time being bad at it.
Substantive Knowledge
  1. Interesting how broad the range of topics was. Good discussions before and after on whether there'd be some synergistic advantage to having a theme topic so that background knowledge could be shared and built upon.
  2. Some presentations suggested a foggy connection to actual contemporary events. Connections to zeitgeist phenomena were good - food deserts, online life, health disparities, gender identity, chemicals, fair trade, working conditions - but connections to contemporary events, facts, etc. seemed weaker. A few were out-and-out wrong on a central issue. I suspect this is partly the effect of journalistic sources dominating students' stock of knowledge (a very big dragon for us to slay).
  3. Are folks gliding too fluidly between established facts and conventional wisdom?
  4. A dearth of any realistic sense of policy options and practicalities. I felt the need to connect folks with a sense of what constitutes a set of policy choices - what is possible, what is politically feasible, what would be efficacious. Or are we OK with sociologists and anthropologists being the impractical idealists? (Serious question.)
  5. We should strive to have folks reach senior year with solid SUBSTANTIVE knowledge about something. Every year, even though they've done a thesis on a topic, some don't seem to know the thing itself as well as they should. I suspect this is related to the tendency to do a thesis on a topic that is new to the student rather than building on something already learned. I have of late been counseling students that they can/should only do a thesis on a topic that grows out of a previous course.
Bias, Interpretation, and Related
  1. Personal connections to research particularly well-articulated in some cases.
  2. Some broad-brush painting with respect to race struck me as analytically sloppy - a big oversimplification of race and class.
  3. Imputation of specific economic motives to individual actors. Do we as instructors talk about "the capitalists" and "corporations" and "owners" in such simple terms?
  4. Interpretations dominated by one's own bias. Almost no one owned up to their bias - what they expected/believed/hoped going into the research.
  5. Relatedly: "my opinions" taken as a given - a failure to recognize that one's own positions are mere convictions and beliefs, just like "theirs." A normative stance was common. An undertone of "good vs. evil" or "us vs. them"?
Research, Design, Questions
  1. Challenge: really school folks in recognizing and expressing what is sociologically interesting. Not the same as what we should care about. Contrast with things that are politically interesting or empirically interesting. Strange, important, unusual.
  2. Research question and design. What is being compared to what in order to ascertain what? We don't all need to be doing experiments, but we need to push on the rigor front. At least be able to say what the design is. Recognize/acknowledge where THIS study fits in the overall scheme.
  3. The idea trajectory among our students seems to be cause -> topic -> question. Perhaps that's unavoidable in our context; if so, we should have a more well-articulated protocol for working with it to arrive at more "researchy" questions.
Methods, Analysis
  1. It was great that most speakers did NOT waste valuable air-time doing a lit review, and a few did a great job of naming some key antecedents to their work. Need to build on that as a standard.
  2. Ecological fallacy issues - inferring things about individuals from group-level patterns. Also denominator and distribution issues - raw counts offered where rates or distributions are what's needed.
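     A toy illustration of the denominator problem (invented numbers): 500 burglaries in a city of 1,000,000 versus 50 in a town of 20,000 makes the city look ten times worse, but the rates are 0.5 vs. 2.5 per 1,000 residents - once you divide by the right denominator, the town is five times worse.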
  3. Too little sense of differences that make a difference. We need to do a much better job at communicating when small numerical differences mean nothing/something. Help folks see what information IS in the data and what is not.
  4. Tables constructed backwards. Good to get in the habit of putting the IV on top (in the columns), percentaging down within each column, and comparing across.
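     For instance (invented numbers), education (IV) on top, voting (DV) down the side:

                     HS or less   Some college   BA+
       Voted             55%           68%       80%
       Didn't vote       45%           32%       20%
       Total (n)     100% (200)   100% (150)   100% (150)

     Each column sums to 100%; the comparison of interest then reads across the rows.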
  5. Need some instruction, in methods courses and beyond, on how to consistently label/title tables?
  6. Presenters put quotes on screen, read them aloud, and then glossed them with interpretations. It might be more rigorous to identify the trend first and then illustrate it with quotes.
  7. We need to work on folks' ability to sum up a theory.
  8. Need to teach more about forming typologies. Ideal types as an analytical device. A big, important tool - Weber's argument with the historians and the regression-ers.

Follow-Up Analysis

  1. Strikes me that we should focus on two things. First, the easy one (which I've done above): notice the deficiencies and tag them as things we need to cover more or better or what have you. Second, perhaps harder to do as an observer but easier to actually implement: focus on the things at least one person really stood out on - the things somebody did really, really well. We could then ask ourselves how we might get these to diffuse, how we can reproduce them.
  2. Relatedly, imagine a grading system in which we give some sort of exam or task, evaluate it against a standard, and then focus our teaching on getting everyone able to do the things that were done best on the task. And then, after that, we introduce some newer, harder next steps. Just a weird idea.