Proposal for multimodal interaction with MotoStudentVG

Héctor Olmedoᵃ, Karle Olaldeᵇ

ᵃIndependent researcher, Bilbao, Bizkaia, SPAIN

ᵇUniversity of the Basque Country, Vitoria-Gasteiz, Araba, SPAIN

Abstract

In university engineering education, students often struggle to develop spatial ability, especially during the first years. We have therefore proposed other resources that foster spatial ability through three-dimensional (3D) models that are easy for students to access and manipulate, giving them a better view of the models' geometry and functionality. Through augmented reality (AR) and virtual reality (VR), and using open software, we can obtain different projections and visualizations of the bodies generated in an assembly, as in Final Degree Projects or in projects that students undertake voluntarily, such as MotoStudent. This is done by publishing web pages and viewers that let users see the objects, in this case the parts, in 3D and manipulate them as if they were holding them. But we want to improve the way our students interact with their prototypes: they should be able to interact not only through the usual graphical interaction but also through spoken interaction. This is multimodal interaction.

Keywords: Augmented Reality; Virtual Reality; Engineering; CAD; Spoken interaction; Graphical interaction; Multimodality; Human-computer interaction; Dialogue systems

1.     TEACHING OF ENGINEERING & AR/VR

New technologies must help our students to take an active part in our classes and become more involved in their learning, without having to sit through endless lectures or passive PowerPoint presentations. Inductive learning must be continuous. AR/VR can be used in education to show students models that cannot be seen in the real world. In engineering drawing, it can be very effective for students who wish to improve their spatial ability: objects are viewed on screen as 3D images that can be rotated, scaled or sectioned in real time. AR systems are an extension of the concept of the Virtual Environment (VE). These systems present the user with an enhanced view of the real world that contains virtual elements; the visual augmentation may be accompanied by sound, tactile (haptic) and other types of augmentation. Rendered elements are displayed in the user's view of the real world. Mixing this view with the virtual graphics and text can be done in either of two ways. See-through devices allow the user to see the real world directly, displaying the graphics on a transparent screen located between the real world and the user's eyes. Video-based devices instead capture the real world with a camera and composite the virtual graphics onto the video stream. The main advantage of AR systems over regular VE systems is that they combine virtual and real worlds, providing a much richer experience. AR systems can be single-user or multi-user collaborative. Single-user systems have been applied to science, engineering, training and entertainment, among others. Collaborative systems have been applied to the same areas with much more valuable results: for example, we can collaborate with colleagues such as computer technicians, mechanics or mathematicians, drawing on automation and other disciplines around us, to produce multidisciplinary work. Many processes, ideas and concepts can be better illustrated by using both images of the real world and graphics.

University teaching methodologies have not evolved much over the centuries. The method of attending lectures, taking notes, and sitting a final exam dates back to the 15th and 16th centuries. Recently, new technologies have appeared in the classroom: it is now common to see PowerPoint presentations and networked platforms like Moodle. Using these technologies does not by itself imply increased interaction between students and teacher; in fact, information often keeps flowing in only one direction, from the teacher to the students. For students to learn more and better, education must be both experimental and interactive. We learn more from hands-on experiences than from traditional lectures. Collaboration and discussion among students also help their education by exposing them to the opinions and methods proposed by their peers. This is especially relevant for engineering students, but other disciplines such as law may also benefit from new technologies, like teleconferencing to attend or participate in remote trials.

AR is mature enough [19, 20] to be applied to many everyday activities. Education is one of them [18], especially for the following reasons [5]: (i) AR supports seamless interaction between real and virtual environments; (ii) AR makes it possible to use a tangible interface metaphor for object manipulation; (iii) finally, AR makes for a smooth transition from reality to virtuality. AR can also be used for online education. Project MARIE (Multimedia Augmented Reality Interface for E-Learning) uses AR to present 3D information to the students [23]. The authors argue that AR is more effective than VEs in terms of price, realism, and interactivity. They also predict that in ten years AR will be used in many everyday applications. On the other hand, VR is interesting because it isolates students from the real world and lets them concentrate on 3D models, allowing manipulations that are more difficult to perform in the real world; this is the case for engineering students. We intend to use AR/VR to help teach several different classes: in the first year of engineering, in Graphic Expression [11, 12] and Advanced Graphics classes, and finally in project work. These classes are well suited for this purpose because:

  • AR/VR can be used across different subjects and departments.
  • All of them are based on the knowledge of computer-aided graphics.
  • Models and practices are much better understood using 3D models with rendering and tangible interfaces.

We will explain the expected benefits of using AR/VR in the classroom, describe our objectives and classroom setup, and report the results obtained from our experience with AR/VR and Engineering Graphics. More specifically, we will present the outcome of a satisfaction questionnaire filled out by the students who took part in the experience. Finally, as an example of sharing experiences, the MotoStudent project will be introduced, and we will explain what we have done and what our next challenge will be.

1.1.  Justification, expected benefits and objectives

Not every student has the same 3D spatial perception abilities. Some students have difficulties envisioning 3D objects drawn or displayed in 2D. This is relevant in Engineering, where students must analyse 3D models to find the correct answers to class problems. 2D renderings produce optical illusions that usually stem from their inherent ambiguity. The ambiguity of 2D models, together with the difficulties of 3D analysis and perception, means that important concepts are often not assimilated. Also, problems that can easily be solved by rotating a model and analysing its faces and geometry become almost unsolvable, even for competent students. Therefore, many students end up memorizing models and problem solutions before the exam, and a few days afterwards they have forgotten most of them. Instead, models should be derived from much simpler concepts; this would help the knowledge to settle in their minds. To improve our students' comprehension of spatial capacity we introduce an AR/VR system that allows tangible interaction with the virtual models, thus simplifying their 3D analysis. The two main benefits of applying AR/VR techniques to our classes are: (1) students gain a much better understanding of the fundamental concepts and models presented in the classes; (2) a powerful and flexible AR/VR tool simplifies the teacher's task of explaining the basic concepts related to models and spatial capacity. Overall, we have two objectives: (I) to improve the students' understanding of models and assemblies using AR/VR, and (II) to provide the teacher with a tool to better explain those models that require good 3D spatial intuition. Additionally, we have the following specific objectives, including computer-related ones.
With respect to the students' spatial ability, our intention is to achieve the following objectives: (1) To get them actively involved by introducing a novel technology like AR/VR; (2) To provide them with a tool to view different 3D models in an intuitive way; (3) To increase their 3D analysis and perception skills; (4) To have them manipulate the models during class, independently or in groups, not only with the teacher but also on their own while working on problem solving; (5) To develop aptitudes such as initiative and class participation by manipulating the structures, and to have them collaborate in groups; (6) To bring new computer technologies to the students, increasing their knowledge, abilities and communication skills; these skills are critical for the students to later be able to successfully join multi-disciplinary teams with experts from other areas.

With respect to teachers of Graphic Expression, we aim to achieve the following objectives: (a) To provide them with a tool that catches students' attention by attracting and surprising them, so that students' attention and participation are maximized; (b) To increase the teachers' options to effectively teach concepts where good spatial intuition is critical for the students' understanding.

Finally, our computer-related objectives are: (i) To compile a database of 3D models for mechanical engineering; (ii) To use it in a collaborative AR/VR system with markers and cameras; (iii) To implement our software system with open-source libraries; (iv) To promote free software usage.

1.2.  Classroom setup, first impressions, results and our next challenge

To apply AR/VR technology to our classes, we do not want to introduce substantial changes. Instead, our objective is to naturally improve our current methodology with the AR/VR system. We alternate between the blackboard, PowerPoint presentations and other teaching resources, and structure analysis with the AR/VR system. Some models have been built using the VRML modelling language and X3DOM [27] from the original 3D models developed with CATIA. The system allows students to inspect a set of models by moving a marker, or by showing the model on the web and moving the mouse. We also have model libraries in the browser, built with AR.js, allowing easy access to and manipulation of the models using their associated markers. The marker is recognized by AR.js [8], an open-source library for AR application development. Note that the markers easily identify the structures: the 3D models of the material structures are superimposed on the markers when these are recognized. Finally, we allow the students to print their 3D models so they can fully complete their projects. To learn the students' opinion of this project, a survey was carried out in the Graphic Expression classes. Its main objective was to collect their opinions about the advantages and disadvantages of using AR/VR techniques; we also wanted to know whether it was useful from the point of view of the students as users of this methodology. The survey group was made up of forty-five students from the fourth year of the engineering degree. They were randomly chosen from the different classes where our system was used and grouped in pairs; note that some of them were in more than one of these classes. We wanted to know if they could understand the project that each student was developing using AR/VR to visualize it. The general opinion among them was that using AR/VR to understand 3D models was extremely useful.
All the students surveyed considered AR/VR a powerful tool that helped them understand the 3D arrangement of these models. Besides, the possibility of printing their models in 3D was like a prize for the effort of developing their projects.
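A marker-based viewer of the kind used in our classes can be sketched in a few lines of HTML with AR.js and its A-Frame wrapper. This is only a sketch: the model path and the sample "hiro" marker preset are illustrative placeholders, not our actual assets.

```html
<!-- Minimal AR.js marker viewer (sketch; model path is a placeholder) -->
<script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
<script src="https://raw.githack.com/jeromeetienne/AR.js/master/aframe/build/aframe-ar.js"></script>
<a-scene embedded arjs>
  <!-- When the "hiro" sample marker is recognized by the camera,
       the 3D model is superimposed on it -->
  <a-marker preset="hiro">
    <a-entity gltf-model="url(models/piece.gltf)" scale="0.5 0.5 0.5"></a-entity>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>
```

Opening such a page on any device with a camera is enough for the marker-recognition behaviour described above; no plugin installation is required.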

1.3.  The MotoStudent Project

MotoStudent Competition [15] is a challenge between university student teams from all over the world. The objective is to design, manufacture and evaluate a racing motorbike prototype, which is then put to the test and final evaluation at the MotorLand Aragón Circuit. The competition itself represents a challenge to the students: over a period of three semesters/terms they must prove their creativity and innovation skills, directly applying their engineering abilities against teams from universities all over the world. MotoStudent brings benefits to the students, the universities, the industry and, with our proposal, also to society. The challenge for the teams is to develop a motorbike that can successfully pass all the tests and events of the MotoStudent Competition. MotoStudent gives the teams the chance to prove and demonstrate their engineering skills, creativity and business abilities in competition with teams from other universities around the world, so there is a need to make each project universally reachable. A group of mechanical engineering students from the University of the Basque Country wanted to participate in this contest. Their idea was to use the CAD software at the University to design their prototypes, but they needed to share these prototypes in order to promote their work to the audience of the contest and perhaps to possible investors. The lack of affordable software for 3D on the web was a handicap; besides, the economic situation gave the students few chances to develop an initiative outside the official budgets. Thus, software like X3DOM [27] and AR.js [8] allowed them to share their 3D designs over the Internet at low cost, and they could enrich their experience by collaborating with computer science engineers and web designers.
Converting example parts developed with CATIA into web formats let the students share their developments with other students and the general public at no cost to them or to the public. This became our MotoStudentVG repository [22].

2.     FROM CAD SOFTWARE TO THE REALITY-VIRTUALITY CONTINUUM

By CAD software we refer to the packages most widely used in mechanical fields such as aerospace and automotive engineering, and in many other fields of engineering, mainly in manufacturing. This type of software is expensive, and there are students, customers and partners that cannot afford the licenses. Sharing 3D content through websites and AR/VR apps based on open standards offers an excellent opportunity to let the public become acquainted with our products with no specific investment. There are open technologies for diffusing 3D content, but they are not widely used nowadays because producers of plugins for visualizing 3D content on the web are leading this technology. However, the most used web browsers include native capabilities for visualizing 3D content; it is only a question of developing special websites or adding the necessary modifications to existing ones. This is the aim of our project. Basically, we focus on the CAD programs we have at our disposal, which have allowed us to see all the possibilities of the AR/VR environment. Starting from 3D models stored in files with the different extensions provided by CAD programs, we transfer them to AR/VR software, making the appropriate changes to rendered application layers, lighting and even movement. Thus, we make the visualization as realistic as possible, and users can manipulate the models as if they had them in their hands. Such supplements are obtained from other specific programs [9] and tools for rendering, animation or illumination of scenes, such as Autodesk 3ds Max [1], Maya [13] or Blender [6], the latter open source.

As mentioned above, the transfer of information from CAD models to AR/VR applications is sometimes carried out directly, through specific AR/VR software, or through intermediaries such as Sketchup [26], 3ds Max [1] or Maya [13], which allow models to be interpreted by the AR/VR software. Our proposal [17] allows 3D designers to export content developed with the usual authoring tools, such as CATIA [7], AutoCAD [3] or NX11 [24], to be shown on the Internet inside websites with no need for downloading plugins or any special configuration by the users. This process is shown in Figure 1 (VIRTUAL REALITY). The 3D model developed with the authoring tool (CATIA) must be converted to X3D format using the aopt program [2]. The code of the X3D file obtained from the .wrl file exported from CATIA must be inserted into the HTML code of our webpage, inside the X3DOM tag. The stylesheets x3dom.css and blog-web.css must be associated to this webpage, together with the latest version of X3DOM's JavaScript libraries. After this process is done, we have everything needed to display the 3D content in the usual web browsers for PCs, laptops, tablets or mobile phones; users can then interact with the 3D content, resizing it, changing perspectives, etc. 3D content can also be shown as AR, as seen in Figure 1 (AUGMENTED REALITY). Visualizing AR requires more development, depending on whether it is location-based, marker-based or even Oculus Rift [16] based, but we always use JavaScript and HTML with no commercial plugins. To do this, we convert the .wrl file exported from CATIA to .ply format using the Meshconv program [14]. With stylesheets and the AR.js library we develop a website capable of showing the 3D model when it detects a marker. Once we could show our 3D models through Web3D, 3D printing was the next step; this was done by means of a similar process in which, instead of producing web pages, files formatted for 3D printing (STL, stereolithography, etc.) were provided for download. See Figure 1 (MIXED REALITY).
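The VIRTUAL REALITY path described above reduces to a very small page once the X3D file is available. The following is a minimal sketch; the model path is an illustrative placeholder, and the converted X3D code could equally be pasted inline instead of loaded with `<inline>`:

```html
<!-- Minimal X3DOM page embedding a converted model (sketch; file name is a placeholder) -->
<!DOCTYPE html>
<html>
<head>
  <script src="https://www.x3dom.org/download/x3dom.js"></script>
  <link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css">
</head>
<body>
  <x3d width="600px" height="400px">
    <scene>
      <!-- The X3D obtained with aopt goes here, or is referenced externally -->
      <inline url="models/piece.x3d"></inline>
    </scene>
  </x3d>
</body>
</html>
```

Any WebGL-capable browser renders this natively, which is what makes the approach usable on PCs, tablets and phones without plugins.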

Figure 1. From CATIA to VR, AR and MR

3.     SPOKEN INTERACTION, XMMVR & DIGITAL ASSISTANTS

The popularity of digital assistants (Figure 2) such as Google Home, Amazon Echo and Apple HomePod, with their voice assistants Google Assistant, Alexa and Siri, offers us the possibility of allowing our students to interact with their 3D models in a different way: not only with graphical interaction but also with spoken interaction. This is multimodal interaction.

Figure 2. Digital assistants (Google Home, Amazon Echo and Apple HomePod)

For example, it could be helpful for them to ask the voice assistant to show the front view, the right-side view or the top view of a 3D model (see Figure 3).

Figure 3. Front view, right-side view, and top view

We think adding this new mode of interaction could enrich their spatial capabilities, so this example could be our first application. To build it we will need an architecture that integrates spoken and graphical interaction while allowing manipulation of 3D models; the XMMVR architecture [21] will be our first candidate.
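On the graphical side, mapping a recognised view command to an X3DOM viewpoint is straightforward. The sketch below is an assumption about how our future graphical API could work: the command phrases and the `resolveView` helper are hypothetical, while the `position`/`orientation` axis-angle values are the standard ones for an X3DOM `<viewpoint>` node.

```javascript
// Hypothetical lookup table: spoken view command -> X3DOM <viewpoint> attributes.
// Orientations are axis-angle: front is the default view (looking down -Z),
// right-side rotates 90 degrees about Y, top rotates -90 degrees about X.
const VIEWS = {
  "front view": { position: "0 0 10", orientation: "0 1 0 0" },
  "right-side view": { position: "10 0 0", orientation: "0 1 0 1.5708" },
  "top view": { position: "0 10 0", orientation: "1 0 0 -1.5708" }
};

// Normalise a recognised utterance ("Show me the front view") and
// return the matching viewpoint parameters, or null if unknown.
function resolveView(utterance) {
  const key = utterance.trim().toLowerCase().replace(/^show (me )?(the )?/, "");
  return VIEWS[key] || null;
}

// In the browser, the result would then be written to the scene, e.g.:
// document.querySelector("viewpoint").setAttribute("orientation", v.orientation);
```

Keeping the command-to-view mapping as plain data makes it easy to add more phrasings or localized commands later.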

Figure 4. XMMVR architecture with Amazon Echo and MotoStudentVG

In Figure 4, the possible integration of the XMMVR platform, the Amazon Echo digital assistant and the MotoStudentVG repository is proposed. To make it work we will need to develop a plugin with the AVS Device SDK [4] to integrate the assistant with the XMMVR architecture's dialog manager, and a graphical API to integrate our AR/VR 3D models based on X3DOM and AR.js, respectively. Of course, we could also use Google Home or Apple HomePod instead of the Amazon Echo digital assistant; in that case, we should develop the plugin for the XMMVR architecture's dialog manager with the Google Assistant SDK [10] or with SiriKit [25], respectively. These devices could even be substituted very easily by the corresponding mobile apps, but we think our system should be independent of the mobile apps ecosystem.
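The glue between the dialog manager and the graphical API could take the shape sketched below. Every name here (`MultimodalBridge`, `onIntent`, the intent and slot names) is a hypothetical interface of our own design, not part of XMMVR, the AVS Device SDK or any assistant SDK; it only illustrates how resolved spoken intents would be routed to graphical operations.

```javascript
// Hypothetical bridge between a dialog manager and the graphical API.
// The dialog manager calls onIntent() whenever the assistant resolves
// a spoken command; the bridge forwards it to the AR/VR layer.
class MultimodalBridge {
  constructor(graphicalApi) {
    this.graphicalApi = graphicalApi; // wraps X3DOM/AR.js operations
  }

  onIntent(intent) {
    switch (intent.name) {
      case "ShowView": // e.g. "show the front view"
        return this.graphicalApi.setView(intent.slots.view);
      case "RotateModel": // e.g. "rotate the model ninety degrees"
        return this.graphicalApi.rotate(Number(intent.slots.degrees));
      default: // unknown intents fall back to the dialog manager
        return { handled: false };
    }
  }
}
```

Because the bridge only depends on the small `graphicalApi` surface, swapping Amazon Echo for Google Home or Apple HomePod would mean replacing the assistant-side plugin while leaving this routing logic untouched.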

4.     CONCLUSIONS AND FUTURE WORK

We have developed a process that allows engineering students to share their 3D models and helps them develop their spatial capabilities, but we want to improve it with multimodal interaction. To achieve this we will have to find or develop a graphical API to manipulate AR/VR models made with X3DOM and AR.js, and we will need to develop a plugin joining the XMMVR platform with the chosen digital assistant to enable the spoken modality. Once this platform is developed, the first application to be run will be the multimodal application "Front view, right-side view & top view". This app will show different views of a 3D part in answer to spoken orders given by the user, while still allowing graphical interaction with the part. It will be a test of our proposal. We expect our proposal to help teachers explain the theory of drawing, changing the way of teaching. It will support students, even more and in a more natural way, in succeeding in their work and study, because the increased interest and productivity will impact the quality of the projects they develop and even their marks.

REFERENCES

[1] 3DSMax. Accessed May 10, 2020. https://www.autodesk.es/products/3ds-max/overview.

[2] AOPT. «Using AOPT to Create Optimized X3DOM Content». Accessed May 10, 2020. https://doc.x3dom.org/tutorials/models/aopt/index.html.

[3] AutoCAD. Accessed May 10, 2020. https://www.autodesk.com/products/autocad/overview.

[4] AVS Device SDK. Accessed May 10, 2020. https://developer.amazon.com/es-ES/alexa/alexa-voice-service/sdk

[5] Billinghurst, Mark. «Augmented Reality in Education.» 2012.

[6] Blender. Accessed May 10, 2020. http://www.blender.org/

[7] CATIA. Accessed May 10, 2020. https://www.3ds.com/products-services/catia/.

[8] Etienne, Jerome. «AR.Js.» Accessed May 10, 2020. https://github.com/jeromeetienne/AR.js/.

[9] Kosmadoudi, Zoe, Theodore Lim, James Ritchie, Sandy Louchart, Ying Liu, and Raymond Sung. «Engineering Design using Game-Enhanced CAD: The Potential to Augment the User Experience with Game Elements.» Computer-Aided Design 45, no. 3 (March 1, 2013): 777-795. doi:10.1016/j.cad.2012.08.001. http://www.sciencedirect.com/science/article/pii/S0010448512001698

[10]      Google Assistant SDK. Accessed May 10, 2020. https://developers.google.com/assistant/sdk

[11]      Liarokapis, F., N. Mourkoussis, M. White, J. Darcy, M. Sifniotis, P. Petridis, A.Basu, and P. F. Lister. «Web3D and Augmented Reality to Support Engineering Education.» World Transactions on Engineering and Technology Education 3, no. 1 (2004): 11-14. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.7785&rep=rep1&type=pdf

[12]      Martín-Gutiérrez, J., «Proposal of methodology for learning of standard mechanical elements using augmented reality,» 2011 Frontiers in Education Conference (FIE), Rapid City, SD, 2011, pp. T1J-1-T1J-6, doi: 10.1109/FIE.2011.6142708. https://ieeexplore.ieee.org/document/6142708

[13]      Maya. Accessed May 10, 2020. https://www.autodesk.es/products/maya/overview.

[14]      Meshconv. Accessed May 10, 2020. http://www.patrickmin.com/meshconv/

[15]      MotoStudent. Accessed May 10, 2020. http://www.motostudent.com

[16]      Oculus Rift. Accessed May 10, 2020. https://www.oculus.com

[17]      Olalde Azkorreta, Karle and Héctor Olmedo Rodríguez. «Augmented Reality Applications in the Engineering Environment.» Springer, Cham, 2014. https://link.springer.com/chapter/10.1007/978-3-319-07485-6_9

[18]      Olalde, Karle, Beñat García, and Andres Seco. «The Importance of Geometry Combined with New Techniques for Augmented Reality.» Procedia Computer Science 25, (January 1, 2013): 136-143. doi:10.1016/j.procs.2013.11.017. http://www.sciencedirect.com/science/article/pii/S1877050913012222

[19]      Olalde, Karle and Imanol Guesalaga. «The New Dimension in a Calendar: The use of Different Senses and Augmented Reality Apps.» Procedia Computer Science 25, (January 1, 2013): 322-329. doi:10.1016/j.procs.2013.11.038. http://www.sciencedirect.com/science/article/pii/S187705091301243X

[20]      Olmedo, Héctor. «Virtuality Continuum’s State of the Art.» Procedia Computer Science 25, (January 1, 2013): 261-270. doi:10.1016/j.procs.2013.11.032. http://www.sciencedirect.com/science/article/pii/S1877050913012374

[21]      Olmedo, Héctor, David Escudero, and Valentín Cardeñoso. «Multimodal Interaction with Virtual Worlds XMMVR: eXtensible Language for MultiModal Interaction with Virtual Reality Worlds.» Journal on Multimodal User Interfaces 9, no. 3 (2015): 153-172. doi:10.1007/s12193-015-0176-5. https://link.springer.com/article/10.1007/s12193-015-0176-5

[22]      Olmedo, Héctor, Karle Olalde, and Beñat García. «MotoStudent and the Web3D.» Procedia Computer Science 75 (2015). doi:10.1016/j.procs.2015.12.220. http://www.sciencedirect.com/science/article/pii/S1877050915036819

[23]      Petridis, Panos and Fotis Liarokapis. «Multimedia Augmented Reality Interface for E-Learning (MARIE).» World Transactions on … (2002). http://www.academia.edu/318042/Multimedia_Augmented_Reality_Interface_for_E-Learning_MARIE_.

[24]      SIEMENS NX. Accessed May 10, 2020. https://www.plm.automation.siemens.com/global/en/products/nx/

[25]      SiriKit. Accessed May 10, 2020. https://developer.apple.com/documentation/sirikit

[26]      Sketchup. Accessed May 10, 2020. https://www.sketchup.com/

[27]      X3DOM. Accessed May 10, 2020. http://www.x3dom.org/