VR Storyteller is a project created during the 2016 MIT Media Lab VR Hackathon. My team and I developed an algorithm that reads a story the user types in or speaks to the device, extracts key story elements using semantic analysis, and predicts the mood of the story using pre-trained machine learning classifiers. The VR generative algorithm then matches the extracted elements to 3D objects in a library, places them in the scene based on context, and stylizes the scene with ambient light and sound that reflect the story's mood.
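A minimal sketch of that pipeline is below. Everything here is illustrative, not the project's actual code: the object library, the mood lexicon, and the function names are hypothetical stand-ins for the real semantic analysis and the pre-trained mood classifier.

```python
# Hypothetical sketch of the VR Storyteller pipeline.
# OBJECT_LIBRARY and MOOD_LEXICON are toy stand-ins for the real
# 3D asset library and pre-trained mood classifier.

OBJECT_LIBRARY = {
    "castle": "castle.obj",
    "forest": "tree_cluster.obj",
    "river": "river_plane.obj",
}

MOOD_LEXICON = {
    "dark": "gloomy", "stormy": "gloomy",
    "bright": "cheerful", "sunny": "cheerful",
}

def extract_key_elements(story: str) -> list[str]:
    """Map story words to 3D assets (stand-in for semantic analysis)."""
    return [OBJECT_LIBRARY[w] for w in story.lower().split() if w in OBJECT_LIBRARY]

def predict_mood(story: str) -> str:
    """Majority vote over a tiny lexicon (stand-in for the ML classifier)."""
    votes = [MOOD_LEXICON[w] for w in story.lower().split() if w in MOOD_LEXICON]
    return max(set(votes), key=votes.count) if votes else "neutral"

def build_scene(story: str) -> dict:
    """Combine extracted objects and predicted mood into a scene description."""
    return {"objects": extract_key_elements(story), "mood": predict_mood(story)}

scene = build_scene("A dark stormy night near the castle by the river")
print(scene)
```

A real implementation would replace the lexicon lookups with NLP models and feed the resulting scene description to the VR renderer, which handles object placement and ambient styling.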
This project was designed for a variety of applications, for example: 1) education, encouraging children to write and visualize stories; 2) screenwriting, letting writers rapidly render a scene from a script; 3) film production, helping producers visualize the cost of producing a film. The project received two awards from the hackathon, Most Refined VR Experience and Best Up-and-Coming Hackers, and drew interest from investors and Lucasfilm.