
Oslo - trip reflection | Nils

From November 22nd to the 25th (including travel days on the 22nd and 25th), we had the opportunity to visit the theatre that provided us with our assignment. On the 23rd and 24th, we spent the day at the theatre, in one of their performance halls. Both of our contact persons, with whom we had been in regular contact regarding our progress and the planning of the trip, were present as well, and we spent the majority of both days iterating on our concept together. I think we can say that the product wouldn't have turned out the same if we hadn't had the chance to visit Oslo. It felt much better to work together physically, in the very theatre that would show the play.


On the second day, they invited a professional from the machine learning/AI development industry, to whom we showcased our current prototype.


The specific tasks I personally worked on have already been covered in the blog post about week 11. To reflect on what went well and what could have gone better during this trip specifically, we decided to write individual blog posts about it.


Since we only spent two working days there instead of the five working days initially planned (when we were still applying for the Erasmus grant), it was very important to make the most of the hours we had.


Something that could have gone better is that we didn't really have a testing plan in mind. Then again, our concept wasn't set in stone yet, so it was challenging to pinpoint exactly what we wanted to test.


We also didn't have as much of a plan of approach as we would have liked (or should have had), since in the days and weeks before travelling to Oslo we were busy planning the trip and, in my case, fixing bugs that had to be resolved before being on-site in Oslo.


For my tasks, we were able to properly test the custom instructions, ChatGPT itself, DALL-E, text-to-speech, and the Whisper (speech-to-text) API. So in my eyes, the trip was successful and very useful for our end result. Being able to work together with the theatre employees really boosted our creative capacity and allowed us to show what the technology we were working with was capable of.
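For context, the rough shape of what we were testing looks something like the sketch below: speech in, a ChatGPT reply steered by custom instructions, and an image out. This is only a minimal sketch using the OpenAI Python SDK; the model names, prompts, and file names are placeholders, not our actual configuration.

```python
# Minimal sketch of the kind of pipeline we were testing with the OpenAI
# Python SDK. Model names, prompts and file paths are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech-to-text: transcribe an audio clip with Whisper.
with open("actor_line.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. ChatGPT, with "custom instructions" supplied as a system message.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a character in a stage play. Answer in one short sentence."},
        {"role": "user", "content": transcript.text},
    ],
)
reply = chat.choices[0].message.content

# 3. DALL-E: generate an image based on the reply.
image = client.images.generate(
    model="dall-e-3",
    prompt=reply,
    n=1,
    size="1024x1024",
)

print(reply)
print(image.data[0].url)
```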


After listening to client feedback about the way the ChatGPT output appeared on screen, I worked on 3D text instead of a simple 2D text object. This made sentences and words that were no longer relevant slowly fade off the screen. Relevant blog post here.
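The idea behind the fade-out is simple: every displayed line gets a timestamp, and its opacity drops to zero once it has been on screen long enough. Below is an engine-agnostic Python sketch of that logic with made-up timings; the real scene uses 3D text inside the engine, so this is only an illustration, not our implementation.

```python
# Engine-agnostic sketch: lines stay fully visible for a while, then fade
# out linearly. FADE_DELAY and FADE_TIME are illustrative values.
import time
from dataclasses import dataclass, field

FADE_DELAY = 5.0  # seconds a line stays fully visible
FADE_TIME = 3.0   # seconds it takes to fade out afterwards

@dataclass
class SubtitleLine:
    text: str
    shown_at: float = field(default_factory=time.monotonic)

    def opacity(self, now: float | None = None) -> float:
        """Return 1.0 while fresh, then fade linearly down to 0.0."""
        now = time.monotonic() if now is None else now
        age = now - self.shown_at
        if age <= FADE_DELAY:
            return 1.0
        return max(0.0, 1.0 - (age - FADE_DELAY) / FADE_TIME)

# Lines whose opacity reaches 0 can be removed from the scene entirely.
lines = [SubtitleLine("Hello, who are you?")]
lines = [line for line in lines if line.opacity() > 0.0]
```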


As you can see in the video below, it was very interesting to see how much of a difference it makes to have a professional actor or actress as the facial motion capture target versus someone with no acting experience. The range of facial expressions of the MetaHuman was much larger while the actress was performing than when our contact person (the man in the purple sweater) was acting. This shows the importance of exaggerated articulation so that the facial motions transfer properly to the MetaHuman.




