In this research, I explore how Augmented Reality (AR) can be integrated into the exhibit to enhance the experience, and I identify and analyze the possible benefits and drawbacks of using AR as a visual language in various domains. AR has the potential to enhance learning, communication, and creativity, but it also poses challenges such as technical limitations, ethical issues, and user acceptance.
I used the following software to bring my vision to life:
Blender (3D model and space)
After Effects and Premiere pro (Video editing and UI)
Spark AR (Prototyping AR elements)
Museums play a crucial role in preserving and curating the cultural heritage and historical narratives of different societies, but they face a constant and dynamic challenge, especially in a time of rapid technological changes.
Blender
While creating the model in Blender, I also explored what the internal circuitry might look like. With the surface given the properties of translucent plastic, the insides of the trinket can be seen, giving it a retro Game Boy vibe.
Render
So, to sell my idea of the trinket, I animated the lighting to move slowly and reveal it. I also considered how big the trinket would be: it is smaller than the average smartphone.
Comments
Some of the feedback from the lecturers:
Is it possible to mass-produce this device?
How can the screen on the device be utilized further?
Is this trinket expensive to create? If so, there needs to be a system for people to return it to the museum.
Research
I did some research on the different types of projectors out there, each with its own unique features. When designing this projector, I wanted it to look retro while still having the features needed to make the project viable. The projector therefore has a lens capable of high-powered projection.
Extra stuff
The additional feature needed is a scanner that detects the trinket. The device then decides how the visuals will interact with the motion captured by the trinket. It seems complicated right now, but I believe this can be refined and expanded upon in the future.
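The scanner-to-projector logic above can be sketched in rough pseudocode form. This is purely an illustrative sketch, not a real implementation: the reading structure, gesture names, and visual states are all my own assumptions about how the pairing might decide what to project.

```python
from dataclasses import dataclass

# Hypothetical sketch of the scanner/projector pairing described above:
# the scanner reports the trinket's position and a detected gesture,
# and the projector picks a visual state to render. All names here
# (TrinketReading, choose_visual, the gesture strings) are illustrative.

@dataclass
class TrinketReading:
    x: float     # horizontal position within the projection area (0..1)
    y: float     # vertical position within the projection area (0..1)
    gesture: str # e.g. "idle", "shake", "tilt" (assumed gesture names)

def choose_visual(reading: TrinketReading) -> str:
    """Map a scanner reading to the visual the projector should show."""
    if reading.gesture == "shake":
        return "burst_animation"
    if reading.gesture == "tilt":
        return "rotate_exhibit_view"
    # Idle trinket: highlight whichever exhibit zone it is closest to.
    return "highlight_left" if reading.x < 0.5 else "highlight_right"

print(choose_visual(TrinketReading(0.2, 0.5, "idle")))   # highlight_left
print(choose_visual(TrinketReading(0.8, 0.5, "shake")))  # burst_animation
```

In a real build, the open question from the feedback (multiple sensors firing at once) would mean this function also needs to arbitrate between several simultaneous readings.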
Feedback
So now the questions remain: will this device only be mounted on the ceiling, or can it be mounted elsewhere? Is this the most cost-effective approach? And with two devices connecting to one another, how will this work when multiple sensors go off at once at scale?
Human
This human model was rigged with an average person's look and height. However, the hand motion is limited by my level of competence, so I had to make sure the audience knows the person is actually moving his hand when the trinket detects hand gestures to play with the visuals.
The exhibit
As for the exhibit, I am following the theme of Napoleon from last semester and exploring how the interaction between the guest and the device will work. I do not have working proof of this concept, so I needed to create a visual representation of it.
The hard part now is making the UI and bringing it over to the 3D world, as the UI animation causes a lot of lag and the projection feels off compared to a real-life projection.
Feedback?
The expected outcome of this user test was to obtain insights and feedback that would inform the design of coherent AR content. The procedure for this experiment was similar to the previous one, except that the visualizer was adjusted to be larger and more visible by altering its position and height. The participants were instructed to use the trinket to scan three images, each of which would display an AR image with text options. They were then asked to answer the question and provide feedback.
Aim
The aim of this user test was to examine the impact of font type and size on the readability of AR content. This was motivated by the previous user test results, which indicated that text visibility was challenging across different background contexts. Therefore, I selected a variety of font types from Google Fonts, including both serif and sans-serif fonts, to compare their performance in AR scenarios.
Result
Using the existing technology, I developed a prototype that approximates the optimal design of the exhibit, simulating an immersive AR experience and identifying possible challenges and solutions. I will have to reduce the amount of visual content and enhance the user experience and accessibility of the devices. However, these tests also indicate some positive outcomes, such as the testers' engagement and the constructive feedback session.
Within the context of engaging visitors with AR in museums, the results consistently show signs of potential. These data will inform my next steps as I advance towards the final prototype, taking note of the flaws from this experiment so they are not repeated.
Recording
The results of the user test indicate that Playfair Display is the most preferred font overall, but a font with thicker strokes, such as Roboto Mid, is more suitable for a dark background. The feedback also suggests that fonts smaller than 36 pt are difficult to read, which can be attributed to various factors, such as lighting and wall textures, that affect the readability of the AR content.
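The findings above can be condensed into a simple selection rule, sketched below. The font names and the 36 pt minimum come straight from the test results; the luminance cutoff for "dark background" is my own assumed value, not something the test measured.

```python
# A small sketch encoding the user-test findings: prefer Playfair
# Display in general, switch to a thicker-stroked font on dark
# backgrounds, and never render text below 36 pt. The 0.4 luminance
# threshold for "dark" is an assumption for illustration only.

MIN_READABLE_PT = 36

def pick_font(background_luminance: float, requested_pt: int):
    """Return (font_name, size_pt) for AR text over a given background."""
    is_dark = background_luminance < 0.4  # assumed cutoff, 0 = black, 1 = white
    font = "Roboto Mid" if is_dark else "Playfair Display"
    size = max(requested_pt, MIN_READABLE_PT)  # testers struggled below 36 pt
    return font, size

print(pick_font(0.2, 30))  # ('Roboto Mid', 36)
print(pick_font(0.8, 48))  # ('Playfair Display', 48)
```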
Blender and AE
To get this to work, the environment from Blender is transferred over to AE so I can imitate a real-life situation of how a visitor would see and interact with the AR UI elements. Again, there are no hand gestures, as I couldn't figure out how to animate them, BUT I am able to get the idea across. The UI is very similar to that of the Apple Vision Pro, which is a great reference for me with its clean design.
From Spark to AE
After testing with Spark AR, I decided to move over to After Effects to create the visual animations and the visitors' POV perspective of what the visuals will look like in the virtual exhibit I've created. This is needed because Meta Spark is still an evolving piece of software that is unstable at the moment, so AE is my best bet for showing the animations needed.