
Refining the ChatGPT prototype & Motion Capture - weeks 13/14/15



ChatGPT

"Star Wars" type ChatGPT output

Empathize, Prototype

When we had a client meeting in week 13, the client asked us whether it would be possible to display the ChatGPT output like the iconic Star Wars opening crawl. What they meant by this was that the words/sentences should slowly disappear into the void while new sentences are added.


The reason for this request was that during our testing phase in Oslo, we noticed that the eyes of our MetaHuman would move from left to right and back while the actor driving the facial animation capture was reading the output. This felt like it would break the immersion, so we needed a solution to minimize or even prevent this eye movement. There are still some refinements to do.
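To make the idea concrete, here is a minimal Python sketch of how such a crawl could work, independent of the prototype's actual UI framework: each sentence gets a spawn time, and its opacity and upward offset are derived from its age. The fade duration, scroll speed and class names are assumptions for illustration, not the prototype's actual values.

```python
import time
from dataclasses import dataclass, field

FADE_DURATION = 8.0   # seconds until a line is fully transparent (assumed value)
SCROLL_SPEED = 20.0   # pixels per second that a line drifts upward (assumed value)

@dataclass
class OutputLine:
    text: str
    spawn_time: float = field(default_factory=time.monotonic)

    def opacity(self, now: float) -> float:
        # Fade linearly from 1.0 to 0.0 over FADE_DURATION.
        return max(0.0, 1.0 - (now - self.spawn_time) / FADE_DURATION)

    def y_offset(self, now: float) -> float:
        # Drift further upward the longer the line has been on screen.
        return -SCROLL_SPEED * (now - self.spawn_time)

class StarWarsCrawl:
    """Keeps only the ChatGPT output lines that are still (partially) visible."""

    def __init__(self) -> None:
        self.lines: list[OutputLine] = []

    def add_sentence(self, text: str) -> None:
        self.lines.append(OutputLine(text))

    def visible_lines(self) -> list[tuple[str, float, float]]:
        now = time.monotonic()
        # Discard lines that have fully faded into the void.
        self.lines = [line for line in self.lines if line.opacity(now) > 0.0]
        return [(line.text, line.opacity(now), line.y_offset(now)) for line in self.lines]
```

A renderer would draw each returned line at its offset with its opacity, so older lines drift away on their own instead of piling up into a block of text the actor has to re-read.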


Adding a menu for custom instruction settings

Empathize, Prototype

To allow more user-friendly customization of the ChatGPT personality and other settings, we wanted a menu in which the custom instructions that are fed to ChatGPT can be changed.

I divided the custom instructions into 3 categories:

  • Personality

  • Technical Details

  • DALL-E


On application startup, the application checks whether a save file with previous custom instructions exists. If so, it loads the file, reads its contents and assigns the instructions found there to the matching categories. If no file exists, a grey "hint text" is shown instead, acting as a placeholder that tells the user what to enter in each textbox.
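As a minimal, language-agnostic sketch of that startup logic in Python (the JSON save format, file name, hint texts and function names below are assumptions for illustration, not the prototype's actual implementation):

```python
import json
from pathlib import Path

# Hypothetical file name and hint texts; the prototype's real ones may differ.
SAVE_FILE = Path("custom_instructions.json")

CATEGORIES = ("Personality", "Technical Details", "DALL-E")
HINT_TEXT = {
    "Personality": "Describe how the AI should behave, e.g. 'a witty tour guide'.",
    "Technical Details": "Constraints on the output, e.g. 'keep answers under 50 words'.",
    "DALL-E": "Style hints for generated images, e.g. 'dark sci-fi concept art'.",
}

def load_custom_instructions() -> dict:
    """On startup: load the saved instructions per category, or fall back to hint text."""
    saved = {}
    if SAVE_FILE.exists():
        saved = json.loads(SAVE_FILE.read_text(encoding="utf-8"))

    instructions = {}
    for category in CATEGORIES:
        text = saved.get(category, "")
        if text:
            # Instructions found in the save file: show them as editable text.
            instructions[category] = {"text": text, "is_hint": False}
        else:
            # No save file (or an empty entry): show the grey placeholder hint instead.
            instructions[category] = {"text": HINT_TEXT[category], "is_hint": True}
    return instructions

def save_custom_instructions(instructions: dict) -> None:
    """Persist what the user typed per category so it can be restored on the next startup."""
    SAVE_FILE.write_text(json.dumps(instructions, indent=2), encoding="utf-8")
```

Keeping a per-category `is_hint` flag makes it easy for the UI to render the placeholder in grey and to avoid sending hint text to ChatGPT as if it were a real instruction.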



Changing UI into tabs for ChatGPT, DALL-E & Custom Instructions

Prototype

While the ChatGPT prototype was still in the early to mid stages of development, I put everything (at that moment ChatGPT and DALL-E) on one screen, as you can see in the image below.

The left half served as the UI for ChatGPT, and the right half showed the output of DALL-E. Eventually, however, this started to feel too cluttered, especially with the custom instructions menu still to be added.


Now, each component has its own window. The custom instructions window is already shown in the previous section, so I won't show it again here.

Acting as a guinea pig for Motion Capture

Prototype, Test

While at the theatre during our time in Oslo, we played with the idea of using motion capture to give the AI even more life and personality by copying the movements of the actor.


Back at Saxion, we were eager to test this with the motion capture suits Saxion had available. I served as the main guinea pig and also combined the body capture with facial motion capture using the iPhone. It was a good learning experience for my "hardware" learning goal.


Eventually, we tested 2 different motion capture suits. The first was the more accurate of the two: a suit whose sensors are all wired to a wearable mainframe, which sends the motion data to the computer hosting the motion capture software.


The second was a less advanced but still good enough motion capture suit, consisting of wireless sensors for each body part. A full suit was not needed for this one, only a vest and a lot of velcro bands to which the sensors were attached.


I also helped others put on the motion capture suits, not just wearing them myself. For the wired suit, this meant correctly placing all the sensors on each body part, wiring them through the suit to the mainframe and making sure everything connected properly to the computer. The other suit was less work: it only required attaching the velcro straps to the person, placing the sensors and turning them on.


Both suits were rather finicky at times. The wired suit's mainframe would sometimes not connect to the host computer, which in the end turned out to be because the person who used the suit before us had somehow broken the wearable mainframe. That is when we switched to the wireless suit. On that suit, some sensors would also fail to connect to the computer, but that was just a matter of turning them off and on again.



