Course slots:

Week one 9:00 - 10:30

Language and Computation introductory course:
The mental lexicon, blueprint of the dictionaries of tomorrow: linguistic, computational and psychological aspects of a highly valuable resource

Teacher
  • Michael Zock

Keywords: electronic dictionaries, mental lexicon, word access, organisation of the lexicon, index creation for navigational purposes

Abstract:

Whenever we read a book, write a letter or launch a query on Google, we use words, the shorthand labels of concepts. Words are building blocks which, if properly combined, allow us to express complex thoughts. Words are clearly important; the question is how they are learned, stored, represented, organized and accessed. These questions are the topic of this course.

After addressing some fundamental questions (What are words? Are they stored as holistic entities or in modular form?), we will discuss various types of dictionaries (paper, electronic and mental), how they are made (by hand, semi-automatically) and how they are used (online, offline). We will then introduce some of the techniques currently used in computational lexicography. Finally, we will take a look at some of the findings in psychology and neurolinguistics and consider their possible relevance for dictionary builders: how should words be organized and indexed to allow for quick and intuitive access?

With the advent of corpora and computational tools (computers and programs), many things have changed, and so has our knowledge of the mental lexicon. Dictionaries are huge storehouses of words, with various kinds of information associated with each entry. One might think: the larger, the better. Yet dictionaries, however complete, are of limited use if we cannot access the information they contain. Organization and indexing are critical factors here, and lexicographers may find inspiration in the work of psychologists.
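
As a rough, purely illustrative sketch (not from the course materials), association-based indexing of the kind studied in mental-lexicon research can be modelled as navigation in a graph of word associations; all words and links below are invented for the example.

    from collections import defaultdict

    # Toy associative lexicon: each link connects two associated words.
    links = [("bee", "honey"), ("bee", "sting"), ("bee", "insect"),
             ("wasp", "sting"), ("wasp", "insect"),
             ("honey", "sweet"), ("sugar", "sweet")]

    graph = defaultdict(set)
    for a, b in links:          # associations are treated as symmetric here
        graph[a].add(b)
        graph[b].add(a)

    def find_word(*cues):
        """Return words linked to every cue -- a crude model of cued retrieval,
        as when a speaker in a tip-of-the-tongue state works from related ideas."""
        return set.intersection(*(graph[c] for c in cues))

    print(find_word("sting", "honey"))   # {'bee'}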

For this course we will draw heavily on the following sources (see also the reference section): Atkins & Rundell (2008) and Fontenelle (2008) for computational lexicography, and Aitchison (2003) and Bonin (2007) for work on the mental lexicon.


Language and Computation advanced course:
An introduction to minimalist grammars.

Teachers
  • Gregory Kobele
  • Jens Michaelis

Course material: main.pdf gaertner_michaelis-mtsat10-esslli07.pdf Kobele06-2.pdf michaelis-lacl01-esslli-09.pdf minimalism-lacl98-esslli09.pdf


Abstract:

Research in the tradition of Chomsky's minimalist program is often inaccessible to non-minimalists, partly because of the highly intuitive level at which much of the work in this tradition is conducted. This course will show how major components of recent Chomskian syntax can be expressed in formal grammars inspired by Stabler's "minimalist grammar" (MG). Many MG variants have been rigorously related to MC-TAGs and other well-understood formalisms, so that a wide range of Chomskian proposals can now be understood and assessed by formally minded linguists from every linguistic tradition. Considering especially recent (empirically consequential) proposals about locality, copying operations, adjunction, and interfaces (phonetic, morphological, semantic), this formal treatment sometimes reveals surprising aspects of those proposals that have been obscured in the informal literature, in particular when they are set against the "classical" background of parsing complexity and generative capacity.
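
To give a flavour of the formalism, here is a deliberately simplified sketch (our own toy notation, not the course's) of the merge step in a Stabler-style MG: a lexical item pairs a string with a feature list, and '=f' selects a constituent of category 'f'. Movement, licensing features and proper linearisation are omitted.

    def merge(selector, selectee):
        """Combine two expressions if the selector's first feature '=f'
        matches the selectee's category feature 'f' (simplified)."""
        (s1, f1), (s2, f2) = selector, selectee
        if f1 and f2 and f1[0] == "=" + f2[0]:
            # Complement merge: the selector precedes its complement here.
            return (s1 + " " + s2, f1[1:])
        return None

    vp = merge(("devour", ["=d", "v"]), ("pies", ["d"]))
    print(vp)   # ('devour pies', ['v'])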


Week one 11:00 - 12:30

Language and Computation foundational course:
The Foundations of Statistics: A Simulation-Based Approach.

Teacher
  • Shravan Vasishth


Abstract:

I will develop the statistical theory underlying hypothesis testing from first principles, using elementary probability theory and Monte Carlo simulations. I will use the programming language R (http://cran.r-project.org/). In addition, we will discuss issues such as data visualization and data management for realistic datasets.

We will use a newer version of this freely available online textbook: http://www.ling.uni-potsdam.de/~vasishth/SFLS.html
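
The course itself works in R; as a parallel sketch only, the same simulation-based idea can be expressed in a few lines of Python: sample repeatedly under the null hypothesis and check how often a nominal alpha = 0.05 t-test (wrongly) rejects.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_sims, n, alpha = 10_000, 30, 0.05

    rejections = 0
    for _ in range(n_sims):
        sample = rng.normal(loc=0.0, scale=1.0, size=n)   # H0 is true: mu = 0
        _, p = stats.ttest_1samp(sample, popmean=0.0)
        rejections += p < alpha

    # The empirical Type I error rate should be close to the nominal alpha.
    print(f"Empirical Type I error rate: {rejections / n_sims:.3f}")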

Day 1: Basics of R

Day 2: t-tests and confidence intervals

Day 3: statistical power, Type I and Type II errors

Day 4: linear models and their connection with t-tests

Day 5: multiple regression and mixed-effects models

This course will be useful for anyone doing any kind of quantitative research. Students typically learn cookbook statistics, an approach which often leads to significant misunderstandings regarding hypothesis testing; this course has been developed to help correct such problems.

Each lecture will be accompanied by practical exercises to be completed after class; solutions will be provided the following day.


Language and Computation introductory course:
Grammaticality Judgements as Linguistic Evidence.

Teacher
  • Brian Murphy

Abstract:

The current use of grammaticality judgements to evaluate the "goodness" of linguistic utterances is controversial (see e.g. Schütze; Wasow & Arnold), and the more systematic approaches advocated by Bard and others remain a minority practice. A new consensus is now emerging (see e.g. Murphy; Featherston; Weskott & Fanselow) that (i) theories of grammar may be investigated independently of models of acceptability judgements and (ii) various judgement scales access a single cognitive competence. The course will begin with a quick review of theoretical views of grammaticality. The main part of the course will then introduce concrete methodological guidelines for gathering materials, composing instructions, presenting utterances, choosing among different judgement scales, and selecting appropriate statistical tests for analysis. Publicly available software will be introduced, with an emphasis on web-based testing. Acceptability judgements will be situated relative to other methodologies, including ERP analysis, timed reading and corpus analysis. The course will assume foundational knowledge of linguistics.
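
As a hypothetical mini-example of the kind of analysis the course covers, ordinal acceptability ratings (say, 1-7) for two sentence types can be compared with a Mann-Whitney U test, which does not assume interval-scale judgements; the ratings and condition labels below are invented.

    from scipy.stats import mannwhitneyu

    condition_a = [6, 7, 5, 6, 7, 6, 5, 7]   # e.g. one construction type
    condition_b = [3, 4, 2, 5, 3, 4, 3, 2]   # e.g. a contrasting construction

    # Two-sided test of whether the two rating distributions differ.
    stat, p = mannwhitneyu(condition_a, condition_b, alternative="two-sided")
    print(f"U = {stat}, p = {p:.4f}")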


Week one 14:00 - 15:30

Language and Computation workshop:
Parsing with Categorial Grammars

Organizer
  • Gerald Penn


Keywords: categorial grammar, parsing, semantic inference

Abstract:

Among computational linguists, there has recently been an enormous resurgence of interest in parsing with categorial grammars, both because of their extreme lexicalism and because of their well-defined connection to interpretable semantic terms. The recent work of Clark, Curran and others on Combinatory Categorial Grammars, and of Moot, Baldridge and others on multimodal extensions of categorial grammar, in particular, has produced a collection of efficient and expressive parsing tools that have only just begun to make an impact on tasks such as the Pascal RTE challenge. As the CL community attempts to push the state of the art from mere syntactic annotation into parsers that actually allow for semantic inference, CG's position can only improve. At the same time, there is no shortage of variations on "categorial grammar", and there has not to date been a great deal of communication between the adherents of these various strands on their relative linguistic or semantic merits, nor on more technical concerns of algorithm design and numerical parametrization.
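
For readers new to the formalism, here is a toy sketch (classical AB categorial grammar, not any specific system from the workshop) of the two application rules on which all CG variants build; the category encoding is our own.

    # A plain string is an atomic category; ('/', X, Y) seeks a Y to its
    # right to give X, and ('\\', X, Y) seeks a Y to its left to give X.
    NP, S = "NP", "S"
    TV = ("/", ("\\", S, NP), NP)    # transitive verb: (S\NP)/NP

    def apply(left, right):
        """Forward and backward application for AB categorial grammar."""
        if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
            return left[1]           # X/Y  Y   =>  X
        if isinstance(right, tuple) and right[0] == "\\" and right[2] == left:
            return right[1]          # Y  X\Y   =>  X
        return None

    vp = apply(TV, NP)               # (S\NP)/NP + NP  =>  S\NP
    print(apply(NP, vp))             # NP + S\NP       =>  S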

The aim of this workshop is to bring together researchers from these various strands to share and assess their progress, in the spirit of promoting categorial grammar's overall advancement. ESSLLI has a long and distinguished history of offering CG courses and workshops, most recently Glyn Morrill's course on Type-Logical Grammar (2007), but these have typically mixed lectures on connections to linguistic theory with more formal lectures on parsing and tractability within one strand of scholarship. The focus of this workshop, however, will be squarely on the formal/computational side, with the intention of representing work across all variations of categorial grammar.


Week one 17:00 - 18:30

Language and Computation advanced course:
Computational Psycholinguistics.

Teacher
  • Roger Levy

Abstract:

Over the last two decades, computational linguistics has been revolutionized by increases in computing power, large linguistic datasets, and a paradigm shift toward the view that language processing by computers is best approached through the tools of statistical inference. During roughly the same time frame, there have been similar theoretical developments in cognitive psychology towards a view of major aspects of human cognition as instances of rational statistical inference, exemplified by work such as Anderson (1990) and Tenenbaum & Griffiths (2001). Developments in these two fields have set the stage for renewed interest in computational approaches to human language processing. Correspondingly, this course covers some of the most exciting developments in computational psycholinguistics over the past decade. The course focuses on probabilistic knowledge and memory in language processing, covering models, algorithms, and key empirical results in the literature.
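
One influential quantity in this literature is surprisal, which models the processing difficulty of a word as -log2 P(word | context); here is a toy bigram computation (the corpus and counts are invented for illustration).

    import math
    from collections import Counter

    corpus = "the dog barked . the dog slept . the cat slept .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def surprisal(prev, word):
        """-log2 of the MLE bigram probability P(word | prev)."""
        return -math.log2(bigrams[(prev, word)] / unigrams[prev])

    print(surprisal("the", "dog"))   # lower: 'dog' is frequent after 'the'
    print(surprisal("the", "cat"))   # higher: 'cat' is rarer after 'the'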


Week two 9:00 - 10:30

Language and Computation advanced course:
Psycho-computational issues in Morphology Learning and Processing.

Teacher
  • Vito Pirrelli

Abstract:

By providing a comprehensive overview of current machine-learning, psycholinguistic and theoretical linguistic literature on the topic, the course is intended to answer the following questions. How are words singled out from the input stream in which they are embedded? How are they processed and eventually understood in working memory? Are morphologically complex words stored in long-term memory as wholes, or are they rather composed "on-line" in working memory from sub-lexical constituents? Do formal regularity and morpho-semantic transparency play any role in this? Does word-level knowledge require the parallel development of form and meaning representations, or do the latter develop independently at a different pace, interacting only at later stages? To what extent does past knowledge affect on-line word processing? What principles govern this knowledge? Are they morphology-specific, or are they rather based on brain memory structures generically devoted to the ordered activation of items in time? Do they capture local, syntagmatic relations among co-occurring sub-lexical constituents, or do they also enforce more global paradigmatic constraints over classes of such constituents in complementary distribution?


Language and Computation introductory course:
Standard XML query languages for natural language processing.

Teacher
  • Ulrich Schäfer

Course material: u_schaefer_xml_query.pdf

Abstract:

This course will introduce three standard XML query languages that have been designed by the World Wide Web Consortium (W3C): XPath, XSLT and XQuery. Although various query languages have been proposed and developed for accessing annotated corpora, they are often tailored to specific formats and phenomena. This course will focus on the standard query languages, for which multiple and very efficient implementations exist that run on almost any platform. Applications and examples are presented not only for corpus access, but also for other NLP-related tasks such as accessing RDF ontologies and integrating NLP component output. Finally, the course will briefly present the frameworks used to embed these query languages in popular programming languages.
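
As a small illustration of the embedding the course describes, XPath queries can be run from Python via lxml (one such implementation); the annotation format below is invented for the example.

    from lxml import etree

    xml = """
    <corpus>
      <s id="1"><w pos="DET">the</w><w pos="NN">dog</w><w pos="VBD">barked</w></s>
      <s id="2"><w pos="DET">a</w><w pos="NN">cat</w><w pos="VBD">slept</w></s>
    </corpus>
    """

    tree = etree.fromstring(xml)
    # All nouns, and the number of sentences containing a past-tense verb:
    print(tree.xpath("//w[@pos='NN']/text()"))      # ['dog', 'cat']
    print(tree.xpath("count(//s[w/@pos='VBD'])"))   # 2.0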


Week two 11:00 - 12:30

Language and Computation foundational course:
Case, Scrambling and Default Word Order.

Teachers
  • Miriam Butt
  • Heike Zinsmeister

Course material: 01-CaseScramblingWordOrder_reader.pdf 02-Mueller1999.pdf 03-Mueller2002.pdf 04-BresnanEtAL2007.pdf 05-Evert2006.pdf 06-LuedelingEvertBaroni2007.pdf 07-Meurers2005.pdf 08-BaderHaeussler.pdf 09-CahillForstRohrer2007.pdf 10-FilippovaStrube2007.pdf 11-Forst2007.pdf 12-Keller2000.pdf 13-PatilEtAl2008.pdf 14-SchulteImWalde2002.pdf

Abstract:

Many of the world's languages are so-called "free word order" languages, in which the major arguments of a clause can be scrambled quite freely. This scrambling generally goes hand-in-hand with a robust case marking system and some means of verb-argument agreement (usually verb-subject agreement, but not always), which allow the identification of the various arguments of the clause (i.e., which is the agent, the patient, the goal, etc.). Sometimes, however, the correct identification of which syntactic argument encodes which of the semantic participants of a verb/clause can only be achieved through world or contextual knowledge. Additionally, effects of so-called word order freezing can be observed, whereby the word order is suddenly not free but fixed, if one wants a certain mapping of semantic participants to syntactic arguments. Finally, one generally also speaks of a "default word order" exhibited by languages which in principle allow for the (more-or-less) free scrambling of syntactic arguments. The theoretical status of this default word order is not clear, and this course will examine the topic of argument scrambling, word order freezing and default word order from two main perspectives: 1) theoretical linguistics; 2) corpus linguistics.

With respect to the theoretical perspective, students will be introduced to current theories of case and word order, so that we can examine what (if anything) these theories have to say about default word order and word order freezing in particular. With respect to the corpus perspective, we will examine to what degree information from corpora can help guide the analysis and help us understand why things are scrambled when they are scrambled, and what status the "default word order" actually has in terms of frequency and distribution. As part of the course (one day), we will also present psycholinguistic studies that identify (combinations of) features that determine word order preferences.
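
The corpus side of such questions often starts from simple distributional counts; as a purely illustrative sketch, one could estimate the default order by tallying argument orders over clause records (the records below are invented stand-ins for treebank output).

    from collections import Counter

    clauses = [                       # (order of subject, object, verb)
        ("SUBJ", "OBJ", "V"), ("SUBJ", "OBJ", "V"), ("OBJ", "SUBJ", "V"),
        ("SUBJ", "OBJ", "V"), ("SUBJ", "V", "OBJ"), ("OBJ", "SUBJ", "V"),
    ]

    order_counts = Counter("-".join(c) for c in clauses)
    total = sum(order_counts.values())
    for order, n in order_counts.most_common():
        print(f"{order}: {n}/{total} = {n / total:.0%}")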


Language and Computation introductory course:
Computational Lexical Semantics.

Teachers
  • Gemma Boleda
  • Stefan Evert

Abstract:

This course will provide students with an overview of current research in Computational Lexical Semantics, along with the necessary theoretical and methodological background to carry out their own research. Students will have an opportunity to work on practical examples, learning to tackle the difficulties involved. Special emphasis will be put on the feedback between computational approaches and semantic theory.


Week two 14:00 - 15:30

Language and Computation introductory course:
Intelligent Computer-Assisted Language Learning: An introduction to an emerging interdisciplinary field.

Teacher

Abstract:

Intelligent Computer-Assisted Language Learning (ICALL) is a relatively young field of interdisciplinary research exploring the integration of natural language processing in foreign language teaching. The course will introduce both the theoretical issues and the practical system development aspects of ICALL and provide the student with a firm basis for understanding the current research issues. Key questions discussed include the following: Where does ICALL fit into foreign language teaching? Why are notions such as noticing and awareness from cognitive psychology important for second language acquisition and ICALL research? How can natural language processing (NLP) be adapted to process learner language? What are the challenges for NLP in detecting properties of learner language, and what is known about presenting feedback to learners? What are learner models and what roles do they play in ICALL systems? And last but not least, how can shallow semantic NLP analysis be used to provide feedback on meaning in addition to feedback on form, and why is this important?
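
To make the NLP-for-learner-language question concrete, here is a deliberately naive sketch (our invention, not any actual ICALL system) that flags a likely subject-verb agreement error and phrases the feedback so that the learner must notice the form.

    BASE_FORMS = {"go", "eat", "run", "sleep"}   # toy list of bare verb forms

    def check_agreement(sentence):
        """Flag 'he/she/it' + bare verb form (a very rough heuristic)."""
        words = sentence.lower().rstrip(".!?").split()
        for subj, verb in zip(words, words[1:]):
            if subj in {"he", "she", "it"} and verb in BASE_FORMS:
                return f"Look at '{subj} {verb}': does the verb agree with the subject?"
        return None

    print(check_agreement("Yesterday he go to school."))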


Language and Computation advanced course:
Linguistic Information Visualization.

Teachers
  • Gerald Penn
  • Sheelagh Carpendale

Course material: carpendale_penn.pdf

Abstract:

Much of what computational linguists fall back upon to improve natural language processing and model language "understanding" is structure that has, at best, only an indirect attestation in observable data. The sheer complexity of these structures, and the observable patterns on which they are based, however, usually limits their accessibility, often even to the researchers creating or studying them. Traditional statistical graphs and custom-designed data illustrations fill the pages of CL papers, providing insight into linguistic and algorithmic structures, but visual 'externalizations' such as these are almost exclusively used in CL for presentation and explanation.

Visualizations can also be used as an aid in the process of research itself. In fact, there are special statistical methods, falling under the rubric of "exploratory data analysis", and visualization techniques designed for just this purpose, but these are not widely used or even known in CL. These novel data visualization techniques offer the potential for creating new methods that reveal structure and detail in data. Visualization can provide new methods for interacting with large corpora and complex linguistic structures, and can lead to a better understanding of the states of stochastic processes.
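
As a first taste of exploratory visualization for corpora, a rank-frequency (Zipf) plot often reveals structure in word-frequency data at a glance; this sketch uses matplotlib, and any tokenised corpus can stand in for the toy word list.

    from collections import Counter
    import matplotlib.pyplot as plt

    tokens = "the cat sat on the mat and the dog sat on the log".split()
    freqs = sorted(Counter(tokens).values(), reverse=True)

    # Log-log rank-frequency plot: roughly linear for Zipf-like data.
    plt.loglog(range(1, len(freqs) + 1), freqs, marker="o")
    plt.xlabel("rank")
    plt.ylabel("frequency")
    plt.title("Rank-frequency plot (toy corpus)")
    plt.show()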

Instructed by a team of computational linguists and information visualization researchers, this tutorial will bridge computational linguistic and information visualization expertise, providing attendees with a basis from which they can begin to leverage information visualization in their own research. It will equip participants with:
  • an understanding of the importance and applicability of information visualization techniques to computational linguistics research;
  • knowledge of the basic principles of information visualization theory;
  • the ability to identify appropriate visualization software and techniques that are available for immediate use and for prototyping;
  • a working knowledge of research to date in the area of linguistic visualization.

This tutorial will be an extended version of the 3-hour tutorial offered at ACL-2008, which had 25 attendees. The instructors have previously taught portions of the content in advanced undergraduate and graduate courses as well. Students are expected to have a solid background in computational linguistics. No experience with visualization is required.

TUTORIAL OUTLINE

Day 1: Introduction; Information Visualization Theory (representational theory, cognitive psychology, preattentive processing, interaction & animation, assessing and validating visualizations)

Days 2 and 4: Review of Linguistic Visualizations (document content visualizations, text collection analysis, literary analysis, streaming data visualization, convergence of linguistic data and social network analysis, corpora exploration, visualization of uncertainty in statistical NLP output, linguistic analysis, visualization of speech data)

Day 3: Tools for Visualization (software solutions: Excel, Tableau, Spotfire, programming toolkits: prefuse, processing, flare, InfoVis Toolkit, online tools: ManyEyes, Swivel, collaborative visualization tools in development)

Day 5: Case Study: Visualization for Statistical MT; Open Research Problems (CL problems that could benefit from visualization, Visualization of language areas that need CL expertise); Closing


Week two 17:00 - 18:30

Language and Computation introductory course:
Corpus-Based Argument Structure.

Teacher

Abstract:

The aim of the course is twofold. The linguistic goal is to discuss the notion of argument structure (valence) from both the syntactic and the semantic point of view, with some emphasis on the argument/adjunct dichotomy and on diathesis (argument alternations). The computational goal is to present diverse techniques for learning valence information from corpora. This automatic learning task is usually split into two stages: a linguistic stage, at which information is collected about the co-occurrence of argument-taking lexemes with various types of phrases (possible arguments), and a statistical inference stage, at which reliable valence hypotheses are selected. Both stages will be discussed in detail. The course will conclude with a presentation of various evaluation methods and various uses of automatically extracted valence information, and with a discussion of the extent to which automatic valence acquisition can help in distinguishing arguments from adjuncts.
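
The two-stage design can be sketched in a few lines; the observations below are invented stand-ins for pre-parsed corpus data, and the relative-frequency cutoff is a crude placeholder for a proper statistical filter such as a binomial hypothesis test.

    from collections import Counter

    observations = [                  # (verb, observed frame)
        ("give", "NP_NP"), ("give", "NP_NP"), ("give", "NP_PP"),
        ("give", "NP"), ("sleep", "NONE"), ("sleep", "NONE"),
        ("sleep", "NONE"), ("sleep", "NP"),
    ]

    # Stage 1: collect verb-frame co-occurrence counts.
    counts = Counter(observations)
    verb_totals = Counter(v for v, _ in observations)

    # Stage 2: keep only frames whose relative frequency clears a threshold.
    THRESHOLD = 0.25
    valence = {(v, f): n / verb_totals[v]
               for (v, f), n in counts.items()
               if n / verb_totals[v] >= THRESHOLD}
    print(valence)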


Language and Computation advanced course:
Distributional Semantic Models - Theory and Empirical Results.

Teachers
  • Stefan Evert
  • Alessandro Lenci


Abstract:

Distributional semantic models (DSMs) are based on the assumption that the meaning of a word can (at least to a certain extent) be inferred from its usage, i.e. its distribution in text. Therefore, these models dynamically build semantic representations, in the form of multi-dimensional vector spaces, through a statistical analysis of the contexts in which words occur.
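
A minimal sketch of that construction (a toy version, not the course's own code): count word co-occurrences in a small corpus, treat the count rows as vectors, and compare words by cosine similarity; real DSMs add weighting schemes (e.g. PMI) and dimensionality reduction.

    import numpy as np

    sentences = [["dog", "barks"], ["cat", "meows"], ["dog", "bites"],
                 ["cat", "purrs"], ["dog", "sleeps"], ["cat", "sleeps"]]

    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))

    for s in sentences:               # co-occurrence within a sentence
        for w1 in s:
            for w2 in s:
                if w1 != w2:
                    M[index[w1], index[w2]] += 1

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # 'dog' and 'cat' share the context 'sleeps', hence nonzero similarity.
    print(cosine(M[index["dog"]], M[index["cat"]]))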

With their distributed vector-space representations, DSMs challenge traditional symbolic accounts of conceptual and semantic structures. However, their true ability to address key issues of lexical meaning is still poorly understood, and will have to be carefully evaluated in linguistic and cognitive research.

This course aims to equip participants with the necessary background knowledge for carrying out such research. In addition to the mathematical foundations of DSMs and their application to semantic analysis, we will put particular emphasis on relating the computational models to fundamental issues of semantic theory.
