Experimenting with IBM Watson natural language understanding

From Competendo - Digital Toolbox
Revision as of 10:14, 10 May 2022 by Nils.zimmermann (talk | contribs)
Learners use IBM's Watson and its module “Watson Natural Language Understanding”. They explore what AI extracts from (their) human communication and learn about the AI's approach and perception.


  • Get familiar with Big Data and, in particular, machine-based language understanding
  • Understand the principles and modes of operation of real AI systems
  • Better understand how AI perceives and accordingly classifies information


1. Introduction: Watson is a freely accessible AI service for natural language understanding. Applying Watson can offer an entry point for practice and experimentation with AI: Link

2. Material: choose (English) texts or webpages for analysis. These might be extracts from the press, from social media communications, or from articles. Participants may prefer to use texts they have published themselves.

3. Text analysis: let Watson analyse the text. Watson surfaces metadata from the text content, including keywords, concepts, categories, sentiment, and emotion.
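To give a feel for what such metadata extraction involves, here is a toy sketch in Python: a hand-made word list and simple counting stand in for the large, learned models behind a service like Watson Natural Language Understanding. The lexicon words are invented for illustration and are not Watson's actual vocabulary.

```python
from collections import Counter
import re

# Tiny sentiment lexicon -- a stand-in for the large, learned
# models behind a service like Watson NLU. Purely illustrative.
POSITIVE = {"good", "joy", "great", "progress", "hope"}
NEGATIVE = {"bad", "fear", "crisis", "anger", "threat"}
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "this"}

def analyze(text):
    """Return the top keywords and a crude sentiment label for `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    content = [w for w in words if w not in STOPWORDS]
    keywords = Counter(content).most_common(3)
    # Score: +1 per positive word, -1 per negative word.
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in content)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"keywords": keywords, "sentiment": label}

print(analyze("The crisis spreads fear, and fear feeds the crisis."))
```

The real service replaces the hand-made lists with statistical models trained on very large text collections, but the basic move is the same: the text is reduced to countable features, and labels are attached to them.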


Think in your group about what the AI attributes to certain aspects of the text.

  • Emotions: Watson distinguishes sadness, joy, fear, disgust, anger. Why is one passage marked as sadness and another as joy? What might explain the AI's decisions?
  • Play with the wording: do alternative words influence the AI's categorisation?
  • What do you think - how intelligent is Watson? What makes IBM’s machine intelligent? (Diverse, large amounts of data from different human contexts, computing capacity, learning algorithms, etc.)
  • How do you assess the result?
  • Think of (other) scenarios where Watson might be useful. Where might it cause harm? Justify your thoughts.
  • What consequences could an AI assessment of this text have in practice (social scoring, social cooling, data extraction, etc.)?
  • What might be the consequences for democracy if an AI attaches a negative conception to the term and related words?
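The wording experiment above can be simulated with a toy emotion tagger: swapping a single word can flip which of the five emotion categories wins. The lexicon below is invented for this exercise; real systems learn such word–emotion associations from large amounts of data.

```python
# Toy emotion tagger: each emotion has a small invented vocabulary,
# and the category with the most matching words wins.
EMOTION_LEXICON = {
    "sadness": {"loss", "grief", "miss", "alone"},
    "joy":     {"celebrate", "happy", "win", "together"},
    "fear":    {"threat", "danger", "risk", "unknown"},
    "disgust": {"rotten", "corrupt", "filthy"},
    "anger":   {"outrage", "unfair", "attack"},
}

def dominant_emotion(text):
    """Return the emotion whose vocabulary overlaps the text most."""
    words = set(text.lower().split())
    scores = {emo: len(words & vocab) for emo, vocab in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "none"

print(dominant_emotion("we celebrate a win together"))
print(dominant_emotion("we face an unknown threat"))
```

Replacing "celebrate a win" with "face a threat" moves the sentence from one category to another, which is exactly the kind of sensitivity participants can probe when they play with wording.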


Georg Pirker

Person responsible for international relations at the Association of German Educational Organizations (AdB), president of DARE network.

Explore: Language Processing

Learn more about Language Processing: Adam Geitgey: Natural Language Processing is Fun. Medium: https://medium.com/@ageitgey/natural-language-processing-is-fun-9a0bff37854e

Variation: Cambridge Analytica

To make this concept more concrete, let us look at the case of Cambridge Analytica. The company used the psychological OCEAN model (the “Big Five personality traits”) to screen the contributions of social media users. Users were assigned to categories, and third parties could filter them by those categories, which allowed targeted influence on voters during the 2016 US elections. However, the example also shows the limitations of AI as long as it relies on easily computable psychological models, in this case the Big Five. A discussion about the capacity of AI could therefore also include the capacity of its auxiliary models.
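The filtering step described above can be sketched in a few lines: each user carries scores on the five OCEAN dimensions, and a third party selects everyone whose score on one trait exceeds a threshold. All names, scores, and thresholds here are invented for illustration; this is not how any real system was implemented.

```python
# Hypothetical user profiles with Big Five (OCEAN) trait scores in [0, 1].
users = {
    "alice": {"openness": 0.8, "conscientiousness": 0.4, "extraversion": 0.6,
              "agreeableness": 0.7, "neuroticism": 0.3},
    "bob":   {"openness": 0.2, "conscientiousness": 0.9, "extraversion": 0.3,
              "agreeableness": 0.5, "neuroticism": 0.8},
}

def select_audience(profiles, trait, threshold):
    """Return the users whose score on `trait` exceeds `threshold`."""
    return [name for name, traits in profiles.items() if traits[trait] > threshold]

# E.g. targeting users with high neuroticism for fear-based messaging.
print(select_audience(users, "neuroticism", 0.5))
```

The sketch makes the limitation visible: however sophisticated the AI that produces the scores, the targeting itself is only as good (or as crude) as the auxiliary psychological model behind the five numbers.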

Variation: Governance & Control

Moreover, when very sensitive personal data such as emotions and psychological predispositions can be identified and shared on the basis of an algorithm, the discussion can also be led toward the governance and control of AI. Beyond emotions and psychological characteristics, other sensitive data might be generated and shared. Is that acceptable? What if the machine is wrong? What needs to be done to improve, control and monitor AI?

Variation: AI and social discourses

This may also lead to a discussion about social media as raw material for automated language understanding. Since AI learning largely depends on the massive availability of texts and data, social media is a rich and accessible training field for automated language understanding. What impact do AI decisions about topical fields have on real social discourse? What if topics that deal with governance, democracy and justice are perceived as difficult, associated with negative emotions or treated as less important? What kind of content is given high priority through social media algorithms and psychological models? How does this impact users and their communication in democratic and also authoritarian societies?

Time: 1.5 hours

Material: standard, text resources, digital board, projector, IBM Watson web access

Group size: individually or in small teams

Keywords: playful learning, AI, bias, discrimination




Also interesting:

Learning the Digital

A Competendo Handbook
