6 thoughts on “A random walk down the garden path: A new implementation of self-organized parsing”

  1. What would incorporating verb-specific biases look like in this method? Would it be expressed in terms of the strength of the links between states? (e.g., after processing a verb, the states consistent with an MV (main verb) reading will have stronger links for verbs with a strong MV bias than for verbs with a weaker MV bias?)

    1. Hi Grusha, yes, I think that type of lexical bias is best embedded in the harmony of individual links. This would mean that the subject link of a verb like “arrested” would have a higher harmony when “cop” attaches as the subject than when “robber” does: a state containing nsubj(arrested, cop) will have a higher harmony than one containing nsubj(arrested, robber). I think this could lead to larger garden path effects at “arrested” in “the robber arrested (by the detective)…” than in “the cop arrested (by the detective)…”, although I haven’t tested it yet.
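
      To make that concrete, here is a minimal sketch of how a lexical bias could enter a link’s harmony. This is an illustration, not mparse’s actual code: the bias values, the nsubj notation, and the additive combination are all assumptions for the example.

      ```python
      # Sketch of lexically modulated link harmonies. Illustrative only:
      # the bias values and the additive combination are assumptions,
      # not mparse's actual implementation.

      # Hypothetical lexical bias for nsubj links, keyed by (verb, subject).
      # "cop" is a plausible agent of "arrested", so it gets a larger bias.
      NSUBJ_BIAS = {
          ("arrested", "cop"): 0.9,
          ("arrested", "robber"): 0.4,
      }

      BASE_LINK_HARMONY = 1.0  # grammar-level harmony of a well-formed nsubj link

      def link_harmony(verb, subject):
          """Harmony of nsubj(verb, subject): grammatical fit plus lexical bias."""
          return BASE_LINK_HARMONY + NSUBJ_BIAS.get((verb, subject), 0.0)

      def state_harmony(links):
          """Here a state's harmony is simply the sum of its link harmonies."""
          return sum(link_harmony(verb, subj) for verb, subj in links)

      print(state_harmony([("arrested", "cop")]))     # 1.9
      print(state_harmony([("arrested", "robber")]))  # 1.4
      ```

      On this scheme, the MV-reading state built around “cop” outscores the same state built around “robber”, so the dynamics would be drawn more strongly toward the main-verb parse for strongly agentive subjects.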

  2. I’m wondering how this parsing model would handle the discrepancy in processing difficulty here:
    “The coach smiled at the player tossed the frisbee” vs. “The coach smiled toward the player tossed the frisbee”

    1. Hi Abby. I assume you’re referring to the Levy et al. (2009) findings. Levy et al. (2009) did find similar first-pass reading times for “at” and “toward”. I think mparse would be able to account for that along the lines of what I presented in the talk. As for the differences between those conditions, especially with regard to regressions, I’m not sure how best to account for them. Mparse currently has no interface to eye-movement control, so I can’t speculate much further. I will say that people clearly can handle speech errors and other phenomena that noisy-channel approaches explain well, so a more complete theory of (re-)reading should be able to explain those things too. Perhaps adding new states to mparse that correspond to noisy reconstructions of the input could do it, but I have not tried to implement that so far. Thanks for your question!

  3. Also, you say that mparse at each step “enumerates all possible partial and complete parses.” This seems unrealistic from a human-cognition standpoint (and even computationally, keeping that many different states in memory seems difficult). Do you have any theories about how this might work given memory constraints?

    1. Hi Abby, good question! The number of dependency parses increases exponentially with the number of words in a sentence, so there are indeed many, many parses to explore. I was worried about this too initially, but I think it helps to think of parsing this way: at each word in a sentence, *people* (or people’s minds) don’t have to immediately recognize all possible ways the words can fit together. Instead, we assume a person knows how words can interact with each other and simply tries out different combinations incrementally. The mind can discover some or all of the possible parses on the way to an absorbing state by adding or removing links one at a time (see the sketch below). In a given trial, a person might not visit all possible parses; they might only visit a handful before finding a parse that is as complete as possible. I don’t know whether it makes more sense to assume that people actually remember (in some sense of that word) the states they’ve visited, or whether it would be better to assume a sort of “spotlight” that highlights the mind’s current parse state along with its nearest neighbors. In either case, the number of states a person has to hold in memory is bounded.

      Mparse simplifies this picture by assuming that the states are known immediately after a word is read. Enumerating all parses up front simplifies the math considerably and makes many analytical tools available. Note that, just like people, mparse does not have to visit every state in a given trial.
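
      Here is a minimal sketch of that kind of walk over a tiny, pre-enumerated state space. It illustrates the idea rather than mparse’s actual code: the harmony values, the softmax-style transition rule, and the stopping criterion (no neighbor with higher harmony) are all assumptions for the example.

      ```python
      # Sketch of a harmony-guided random walk over parse states.
      # Illustrative assumptions throughout; not mparse's actual code.
      import math
      import random

      # Toy state space for one input: each state is a frozenset of links,
      # and neighbors differ by adding or removing exactly one link.
      STATES = {
          frozenset(): 0.0,                          # empty parse
          frozenset({"nsubj(arrested, cop)"}): 1.9,  # partial parse, high harmony
          frozenset({"nsubj(arrested, robber)"}): 1.4,
      }

      def neighbors(state):
          """States reachable by adding or removing a single link."""
          return [s for s in STATES if len(state ^ s) == 1]

      def step(state, temperature=0.5):
          """Move to a neighbor with probability proportional to exp(H / T)."""
          options = neighbors(state)
          weights = [math.exp(STATES[s] / temperature) for s in options]
          return random.choices(options, weights=weights)[0]

      def walk(state=frozenset(), max_steps=20):
          """Walk until absorbing: no neighbor has higher harmony."""
          for _ in range(max_steps):
              if all(STATES[n] <= STATES[state] for n in neighbors(state)):
                  return state  # the parse is as complete as it can get here
              state = step(state)
          return state

      print(walk())
      ```

      Because each transition adds or removes exactly one link, the walk only ever consults the current state and its immediate neighbors, which is what keeps the memory demand bounded even when the full state space is large.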
