Apple 2020
Shown at the Apple Worldwide Developers
Conference, May 9, 1990.
SUMMARY: This is a vision video produced by Apple. The
list below identifies many of the novel computer science
innovations required for this vision to become real.
Some are available today; others are many years in the
future.
Virtual laboratory: 
    - Simulations of physical processes (such as chemical
        experiments)
    - Hand gestures tied to moving screen artifacts e.g., test
        tubes
    - Key events in the laboratory trigger outside action e.g.,
        notify the instructor (sketched below)
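
A minimal sketch of the event-trigger idea, written as an
observer pattern in Python; the Simulation class and the
notify_instructor handler are illustrative assumptions, not
anything shown in the video:

    # Observer-pattern sketch: lab events trigger outside action.
    # All names (Simulation, notify_instructor, ...) are illustrative.
    from typing import Callable

    class Simulation:
        """Toy chemistry simulation that publishes named events."""
        def __init__(self):
            self._handlers: dict[str, list[Callable[[str], None]]] = {}

        def on(self, event: str, handler: Callable[[str], None]) -> None:
            self._handlers.setdefault(event, []).append(handler)

        def _emit(self, event: str, detail: str) -> None:
            for handler in self._handlers.get(event, []):
                handler(detail)

        def mix(self, a: str, b: str) -> None:
            if {a, b} == {"acid", "base"}:
                self._emit("reaction", f"{a} + {b}: vigorous reaction")

    def notify_instructor(detail: str) -> None:
        # In the video this would summon the instructor; here we print.
        print("INSTRUCTOR ALERT:", detail)

    lab = Simulation()
    lab.on("reaction", notify_instructor)
    lab.mix("acid", "base")  # -> INSTRUCTOR ALERT: acid + base: ...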
Groupware: 
    - On-line video phone, switching, and access control 
    - Remote sketching e.g., writing "Chapter 1" by
        hand gestures on the other person's screen (sketched
        below)
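
Remote sketching amounts to streaming stroke events to the
other person's screen. A minimal sketch of one possible message
format and replay loop; the JSON shape and the queue standing in
for the network are assumptions for illustration:

    # Remote sketching as a stream of stroke events.
    # Message shape and transport are illustrative assumptions.
    import json
    import queue

    wire = queue.Queue()  # stands in for a network connection

    def send_stroke(points: list[tuple[int, int]]) -> None:
        wire.put(json.dumps({"type": "stroke", "points": points}))

    def receive_and_draw() -> None:
        while not wire.empty():
            msg = json.loads(wire.get())
            if msg["type"] == "stroke":
                print("draw stroke through", msg["points"])

    send_stroke([(10, 10), (12, 14), (15, 20)])  # part of "Chapter 1"
    receive_and_draw()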
Interfaces for the disabled:
    - Glasses for the deaf that display text of the other
        person's speech
    - System can understand crude gestures and can recognize
        difficult speech (for the handicapped boy)
    - Cooking instruction presented by the agent is paced for
        the mentally disabled (for the girl)
Hardware: 
    - No keyboard or mouse! 
    - Screen is light-weight, high-resolution, flat, color,
        very fast
    - Video camera: hidden (in-screen?)
    - Microphone: hidden (in-screen?)
    - Gesture recognizer: hidden (in-screen?)
    - Ordinary glasses as display (deaf person), with display
        on lower half and hidden speech input device
Multi-modal input: 
    - Speech 
    - Gestures for control of the interface
    - Pointing to screen objects
Speech/language recognition: 
    - Continuous speech
    - Disambiguates:
        - Noise (such as the passing train)
        - Outside communication (talking to kids)
        - Asides (talking to self)
    - Language translation (French to English)
    - Untrained speech recognition (restaurant scene: can
        understand the voices of the woman's client and the
        waiter)
    - Speech to text (for dictation, for the deaf woman's
        glasses; sketched below)
    - Context understanding (errors corrected in context)
    - Real English text produced (true natural language
        understanding)
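
Untrained speech-to-text is now a commodity. A minimal
present-day sketch with the third-party SpeechRecognition
package (an assumption, obviously not the technology in the
video):

    # Speech-to-text sketch using the SpeechRecognition package
    # (pip install SpeechRecognition pyaudio). A modern stand-in,
    # not what the video depicts.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        # Crude handling of background noise like the passing train.
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)

    try:
        # Speaker-independent ("untrained") recognition.
        print(recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        print("could not understand audio")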
Voice output: 
    - Completely natural, including inflection 
    - Tied to items on the display (highlighting, motion)
Hand gesture recognition: 
    - Controls screen objects 
    - Deixis: speech understood as references to screen
        elements e.g., "Show me this one" (sketched below)
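
Deixis resolution binds the demonstrative in the utterance to
whatever the pointing gesture selects. A toy sketch; the screen
objects and the hit-testing rule are illustrative assumptions:

    # Toy deixis resolution: "Show me this one" + pointing gesture.
    screen_objects = {"flask": (40, 60), "burner": (120, 60)}

    def hit_test(x: int, y: int):
        """Return the object nearest the pointer, within 30 px."""
        best = min(screen_objects,
                   key=lambda k: abs(screen_objects[k][0] - x)
                               + abs(screen_objects[k][1] - y))
        bx, by = screen_objects[best]
        return best if abs(bx - x) + abs(by - y) <= 30 else None

    def interpret(utterance: str, pointer: tuple[int, int]):
        if "this one" in utterance.lower():
            target = hit_test(*pointer)
            if target:
                return ("show", target)
        return ("unknown", None)

    print(interpret("Show me this one", (115, 58)))  # ('show', 'burner')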
Sound output: 
    - Everyday sounds for feedback (e.g., experiment blowing
        up) 
    - Abstract sounds for feedback (e.g., tones indicate
        events)
Remote control: 
    - Computer senses and controls kitchen appliances e.g.,
        scale with cup, microwave oven
Multimedia: 
    - On-screen stills, video & audio 
    - Novel scrolling (circular Rolodex)
    - Film techniques for fades, panning...
    - Film editing & production
Database queries: 
    - Fuzzy queries e.g., "looking for disabled, in last
        15-20 years..." 
    - Approximate solutions and an estimate of the number of
        results (sketched below)
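
Fuzzy queries can be sketched with soft predicates: each record
gets a score rather than a yes/no, and the count of records
above a threshold serves as the estimate of the number of
results. The data and scoring below are invented for
illustration:

    # Fuzzy date-range query with an approximate result count.
    articles = [
        {"title": "A", "year": 1972, "topic": "disability"},
        {"title": "B", "year": 1978, "topic": "education"},
        {"title": "C", "year": 1983, "topic": "disability"},
    ]

    def recency_score(year: int, lo: int = 1970, hi: int = 1975) -> float:
        """1.0 inside the requested window, fading 0.2 per year outside."""
        if lo <= year <= hi:
            return 1.0
        gap = min(abs(year - lo), abs(year - hi))
        return max(0.0, 1.0 - 0.2 * gap)

    def fuzzy_query(topic: str, threshold: float = 0.5):
        hits = [(a, recency_score(a["year"]))
                for a in articles if a["topic"] == topic]
        hits = [(a, s) for a, s in hits if s >= threshold]
        print(f"approximately {len(hits)} result(s)")
        return sorted(hits, key=lambda t: -t[1])

    fuzzy_query("disability")  # "in the last 15-20 years", loosely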
Intelligent agents: 
    - Within an application: e.g., cook 
    - External to applications: e.g., the main persona of the
        system
    - Controls display, contents, highlighting
    - Context-sensitive help/coaching e.g., "Cindy, why
        did you stop?" (sketched below)
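
The "Cindy, why did you stop?" coaching behaves like an idle
watchdog tied to the current task step. A minimal sketch; the
names and timing threshold are assumptions:

    # Context-sensitive coaching: prompt the user after idleness.
    import time

    class Coach:
        def __init__(self, user: str, idle_limit: float = 30.0):
            self.user = user
            self.idle_limit = idle_limit
            self.last_action = time.monotonic()
            self.step = "measure one cup of flour"

        def on_user_action(self) -> None:
            self.last_action = time.monotonic()

        def check(self) -> None:
            if time.monotonic() - self.last_action > self.idle_limit:
                print(f"{self.user}, why did you stop? "
                      f"Next: {self.step}.")

    coach = Coach("Cindy", idle_limit=0.1)
    time.sleep(0.2)  # simulate the user pausing
    coach.check()    # -> Cindy, why did you stop? Next: ...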