Allele: Multi-platform Video AI
Artificial Intelligence, Metadata
Team Members: Chanel Luu Hai, Nathan Koch, Leah Willemin, Marcus Banks. This is a work in progress, first created for the Parsons & Verizon AI Hackathon.
When was the last time you looked for a tutorial video? Have you ever struggled to find a video that matches your skill level? Have you ever had trouble following a tutorial because the video was too fast or too slow?
Video is linear content in a non-linear world. Allele is an AI that treats video as a material rather than a unit. Allele mines deep metadata that includes, but is not limited to: the time frames and pixels viewers engage with most (via eye tracking), emotional responses, geo-location, and structure (horizontal vs. vertical video).
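As a sketch of what such a deep-metadata record could look like, here is one possible per-segment schema. All field names and values are illustrative assumptions, not Allele's actual data model.

```python
from dataclasses import dataclass

# Hypothetical metadata mined for one segment of one source video.
# Field names are illustrative, not Allele's real schema.
@dataclass
class SegmentMetadata:
    start_s: float        # segment start time, in seconds
    end_s: float          # segment end time, in seconds
    gaze_region: tuple    # (x, y, w, h) pixel region viewers fixated on
    emotion: str          # dominant viewer emotional response
    geo: tuple            # (latitude, longitude) where the clip was filmed
    orientation: str      # "horizontal" or "vertical"

clip = SegmentMetadata(
    start_s=12.0, end_s=18.5,
    gaze_region=(320, 180, 640, 360),
    emotion="curiosity",
    geo=(40.7353, -73.9939),
    orientation="horizontal",
)
```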
With this metadata, Allele can deconstruct and reconstruct multiple videos into a multi-platform experience, with content appropriate to each device. Watch the demo video below.
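One way to picture the reconstruction step: filter the mined segments by device-specific rules, then reassemble them in timeline order. The rules below (e.g. vertical clips for phones) are assumptions for illustration, not Allele's published logic.

```python
# Illustrative sketch: given segment metadata mined from several source
# videos, select the segments suited to a target device and splice them
# back together in narrative order.
def reconstruct(segments, device):
    rules = {
        "phone":   lambda s: s["orientation"] == "vertical",
        "monitor": lambda s: s["orientation"] == "horizontal",
    }
    keep = rules.get(device, lambda s: True)  # unknown device: keep all
    # Preserve narrative order across source videos by timestamp.
    return sorted((s for s in segments if keep(s)), key=lambda s: s["start_s"])

segments = [
    {"video": "a", "start_s": 9.0, "orientation": "vertical"},
    {"video": "b", "start_s": 4.0, "orientation": "horizontal"},
    {"video": "a", "start_s": 0.0, "orientation": "vertical"},
]
phone_cut = reconstruct(segments, "phone")  # vertical clips, time-ordered
```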
Marcus: Learning how to make an old-fashioned cocktail
- iPad: AR lets Marcus scan the ingredients he already has
- Monitor: Plays videos generated based on the scanned ingredients
* Both Leah and Marcus can share a video of their learning experience with friends. This programmatically generated video combines their emotional responses, feedback, and final results.
Leah: Learning how to carve a pumpkin
- Cellphone: List of items needed to carve a pumpkin
- iPad: An AR overlay shows Leah where on the pumpkin to carve
- Laptop: Video plays at the same speed as she works
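The laptop behavior above (video pacing itself to the learner) could be sketched as a playback-rate controller. Here some vision model is assumed to report which tutorial step the viewer has completed; the function names and rates are hypothetical stand-ins, not Allele's implementation.

```python
# Sketch of learner-paced playback: compare the step shown in the video
# with the step the viewer has completed (as reported by some assumed
# vision model) and adjust the playback-speed multiplier accordingly.
def playback_rate(video_step: int, viewer_step: int) -> float:
    """Return a playback-speed multiplier (0.0 pauses the video)."""
    if viewer_step < video_step:
        return 0.0   # viewer is behind: pause until they catch up
    if viewer_step > video_step:
        return 1.5   # viewer is ahead: speed up to close the gap
    return 1.0       # in sync: play at normal speed
```

A real player would feed this multiplier into its playback engine each time the viewer's progress estimate updates.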