ET Elijah Tabachnik

Multimodal AI Project

Integrating AI in Recreating Iconic Personalities

A proof-of-concept multimodal AI experience that explored how a historical figure could be represented through LLM behavior, voice synthesis, and animated visual presentation.

By combining an LLM-driven persona, a synthesized voice, and an animated avatar, the project aimed to make the interaction feel closer to a conversation with the figure than to a text-only chat.

Demo Preview

Open the interactive demo.

The preview links to footage showing the voice, avatar, and language-model pieces working together.

Reviving History project still

System Stack

Personality emulation across text, voice, and avatar layers.

The system combines OpenAI-driven personality emulation, ElevenLabs voice synthesis, and a digital portrait to create a more immersive historical interaction than a text-only chatbot.

OpenAI integration

The assistant's persona was grounded in JFK speeches, interviews, and secondary sources so that its responses stayed historically plausible.
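As a rough sketch of how such grounding can be expressed, a system prompt might fold documented facts and behavior rules into the first message of an OpenAI chat request. The names below (`PERSONA_FACTS`, `STYLE_RULES`, `build_system_prompt`) and the example content are hypothetical; the project's actual prompt is not published.

```python
# Illustrative persona prompt assembly; all facts and rules here are
# example placeholders, not the project's real prompt.
PERSONA_FACTS = [
    "35th President of the United States (1961-1963).",
    "Known for the 1961 'moon speech' and a distinctive Boston delivery.",
]

STYLE_RULES = [
    "Answer in first person, in JFK's rhetorical style.",
    "If asked about events after November 1963, say you cannot know them.",
    "Prefer phrasing drawn from documented speeches and interviews.",
]

def build_system_prompt(facts, rules):
    """Combine grounding facts and style rules into one system message."""
    lines = ["You are a respectful emulation of John F. Kennedy."]
    lines.append("Grounding facts:")
    lines += [f"- {fact}" for fact in facts]
    lines.append("Behavior rules:")
    lines += [f"- {rule}" for rule in rules]
    return "\n".join(lines)

# Messages in the Chat Completions format, ready to pass as the
# `messages` argument of client.chat.completions.create(...).
messages = [
    {"role": "system", "content": build_system_prompt(PERSONA_FACTS, STYLE_RULES)},
    {"role": "user", "content": "What did the moon program mean to you?"},
]
```

Keeping the facts and rules as separate lists makes it easy to tighten the hallucination guardrails independently of the voice-and-style instructions.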

Voice synthesis

ElevenLabs was used to generate speech output that matched the chosen historical figure's recognizable voice.
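A call like this can be sketched against the ElevenLabs REST text-to-speech endpoint. The voice ID, API-key placeholder, model choice, and `voice_settings` values below are illustrative assumptions, not the project's actual configuration; the request is built but not sent.

```python
# Sketch of an ElevenLabs text-to-speech request (built, not sent).
VOICE_ID = "your-cloned-voice-id"  # placeholder voice ID
ENDPOINT = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

def build_tts_request(text):
    """Return (url, headers, payload) for a speech-synthesis call."""
    headers = {
        "xi-api-key": "ELEVENLABS_API_KEY",  # placeholder; read from env in practice
        "Content-Type": "application/json",
    }
    payload = {
        "text": text,
        "model_id": "eleven_multilingual_v2",       # assumed model choice
        "voice_settings": {                          # illustrative tuning
            "stability": 0.5,
            "similarity_boost": 0.8,
        },
    }
    return ENDPOINT, headers, payload

url, headers, payload = build_tts_request(
    "Ask not what your country can do for you."
)
# Sending it, e.g. requests.post(url, headers=headers, json=payload),
# would return audio bytes to play alongside the avatar.
```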

Avatar layer

A digital portrait was used to add visual presence, expression, and stronger multimodal immersion.
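One simple way to drive such a portrait is amplitude-based lip sync: map the loudness of each audio frame to how open the mouth is drawn. The toy function below is an illustrative sketch, not the project's animation pipeline; a real implementation would compute per-frame RMS from the synthesized audio rather than use the synthetic envelope shown here.

```python
# Toy lip-sync sketch: map an audio amplitude envelope to per-frame
# mouth openness for a 2D portrait. Envelope values are synthetic.

def mouth_openness(envelope, floor=0.05):
    """Normalize amplitudes to [0, 1]; values under `floor` close the mouth."""
    peak = max(envelope) or 1.0  # avoid dividing by zero on silence
    frames = []
    for amplitude in envelope:
        value = amplitude / peak
        frames.append(0.0 if value < floor else round(value, 2))
    return frames

# Six audio frames of varying loudness -> six mouth positions.
frames = mouth_openness([0.0, 0.2, 0.8, 0.4, 0.02, 1.0])
# frames == [0.0, 0.2, 0.8, 0.4, 0.0, 1.0]  (0.02 falls below the floor)
```

The floor threshold keeps the mouth fully closed during near-silence, which reads more naturally than a mouth that flutters on background noise.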

Challenges and impact

Believability mattered, but so did restraint.

  • Historical accuracy had to be balanced against the tendency of generative models to hallucinate.
  • Multimodal integration across LLMs, voice, and animation introduced meaningful engineering complexity.
  • The educational use case was a major motivation: making history feel conversational and immediate.

Project Links

Demo materials

Demo footage and the original concept pitch for the project.