
Building an AR Magnetic Field Visualizer—The Vision and Challenges

In our previous post, we explained why we believe using personal mobile devices (PMDs) to visualize real magnetic fields with sensor data is important. In this post, we look specifically at what we would like to accomplish and at the challenges we are encountering as we work toward our goal.

We propose the development and testing of new software utilizing AR frameworks for PMDs that will allow learners to visualize a 3-D magnetic field from multiple perspectives as their PMD moves through it. This software will utilize the new sensitivities of the recently introduced AR frameworks to determine the physical location of the PMD based upon both the inertial sensor unit (accelerometer, gyroscope) and the camera, and overlay the screen with field information based upon the sensor data being collected as the user “sweeps” through the space surrounding a field source. As an example, with these capabilities, learners of science, technology, engineering, and mathematics (STEM) could, for the first time, have the experience of moving around and through the magnetic field produced by a bar magnet, a current-carrying wire, or more complex magnetic field sources in the lab or in the natural environment. We hypothesize that these new visualizations will help learners improve visuospatial skills and develop a deeper understanding of fields and their interactions.

AR technology would allow the user to see this 3-D visualization of a real field, not just the idealized presentation available in textbooks or in computer simulations, and would let students physically move through the space surrounding the magnet, viewing the field and its changing intensity from various angles. Google ARCore has the potential to make these visualizations possible, but the software frameworks in their current state need significant development and testing.

We have seen some interesting approaches to this problem of field visualization (including through the use of AR), but few of them use real data or can model fields analytically. For example, we have identified only one pre-existing AR resource that displays field data: Field Visualizer (i-Realitysoft, 2013), which samples the strength of a magnetic or WiFi field along a 2-D coordinate plane and displays it as a 3-D plot. However, Field Visualizer is incapable of visualizing 3-D magnetic fields. Our work is also differentiated from the 3-D modeling of magnetic fields by Buchau et al. (2009), Billinghurst & Duenser (2012), and Ibanez et al. (2014). Buchau et al. identified specific scenarios for lecture demonstrations that included computer-projected imagery of real lab equipment with simulated fields superimposed, but no sensor data were collected. Billinghurst & Duenser (2012) relied not only upon marked objects specially made for prepared magnetic field visuals but also required a specialized viewing device. Ibanez et al. (2014) also used marked objects and prepared visuals, but students could view the visuals through an iPad. Although the work by Scheucher (2009) used actual data about electricity flowing through a real coil in the students’ laboratory, it used those data to computationally generate a resulting magnetic field in the virtual computer environment; no actual measurements of the magnetic field were made.

Since we authored our proposal, we have learned of three additional efforts more closely related to our work, but still significantly different. Matsutomo et al. (2017) have created a head-mounted device using a smartphone and an external camera that allows stereoscopic visualization of the magnetic fields of marked items that can be picked up and manipulated in front of the user. However, again, no real magnetic data is collected, and the simulation does not allow for irregular field sources. We learned about two additional collaborative groups at the 2018 Winter Meeting of the American Association of Physics Teachers. C. Porter and C. Orban (A controlled study of stereoscopic virtual reality in freshman electrostatics, presented at AAPTWM18) are performing educational research comparing students who view simulated 3-D electric fields (analogous to magnetic fields) in headsets with those who view only 2-D paper drawings; while they are not collecting data or superimposing it onto real-world images, what they learn about the educational impacts of visualization will influence our visualization design. Yoo, J.; Park, J.; Lee, D.; Jin, S.; and Hwang, J. (Visualization of real magnetic field using sensor and AR, presented at AAPTWM18) are using an Arduino-based magnetometer that is visually tagged. The data is sent by Bluetooth to a Project Tango smartphone (which is equipped with a depth-sensing camera), connecting the magnetic field data to three-dimensional coordinates determined by the camera. Our project aims to remove the obstacle of needing both the external magnetometer and the depth-sensing camera.

In our idealized scenario, the user will sweep the PMD through the magnetic field near the source. The PMD will collect 3-D magnetic field sensor data corresponding to its estimated position based on the inertial sensors. After a given number of sweeps and sufficient data collection, the PMD will display the magnetic vector field, showing its strength and direction. The diagram below presents a highly simplified mock-up of magnetic field contour lines without quantitative strength or vectors. While visualizing the magnetic field, users can move through the 3-D field to view it from various perspectives, to continue collecting data in areas that are incomplete, or to increase the number of gradations to model the field more precisely. At this early stage, up to two “anchor points” can also be selected by the user on the touch-screen of the PMD; these can currently be used to make measurements and, in the future, to calculate specific field strength values and directions at those points.

There are a significant number of obstacles to overcome in meeting the data acquisition, linkage, and visualization challenges.

How can we filter magnetometer data to get meaningful information about the field around a magnet?

  • We need to filter out the background magnetic field of the Earth. This can be accomplished by creating an interface that directs the user to calibrate by measuring the magnetic field while the PMD is at rest and flat with respect to the Earth. This baseline environmental field, collected far away from any magnetic object and associated with a plane that always remains parallel to the Earth's surface, can then be "subtracted" from any readings collected.

  • As users move the phone around, we must use other sensors to make sense of how the PMD is oriented as it collects data. A filtering algorithm will "normalize" the collected magnetic field data along the x, y, and z axes through axis rotation calculations (using the accelerometer to identify the vertical axis with respect to the ground) and then compute the magnetic field components in that particular orientation.

Solving the second of these two data collection challenges can be thought of like the "Gyrobowl" that has gained popularity in recent years: imagine collecting data in the plane of the central bowl, which always maintains the same orientation despite the rotations of the outer bowls! Gravity alone doesn't solve our problem, but we can use its downward pull to correct our data with a rotation matrix (which is more or less equivalent to doing some simple component calculations with trigonometry). A minimal sketch of this correction appears below.
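To make that correction concrete, here is a minimal sketch in Kotlin of the tilt correction and baseline subtraction described above. It assumes the accelerometer alone fixes pitch and roll (heading would come from the AR framework's visual tracking), and the names here (`Vec3`, `tiltCorrectedField`) are our own illustration, not any platform API.

```kotlin
import kotlin.math.sqrt

// A small 3-D vector type with just the operations this sketch needs.
data class Vec3(val x: Double, val y: Double, val z: Double) {
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    fun cross(o: Vec3) = Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x)
    fun dot(o: Vec3) = x * o.x + y * o.y + z * o.z
    fun unit(): Vec3 { val n = sqrt(dot(this)); return Vec3(x / n, y / n, z / n) }
}

// Rotate a device-frame magnetometer reading into a gravity-aligned frame
// using only the accelerometer, then subtract the calibrated Earth baseline.
//   accel    - accelerometer reading; at rest it points "up" in device coordinates
//   mag      - raw magnetometer reading in device coordinates (microtesla)
//   baseline - Earth's field recorded during the flat-at-rest calibration,
//              already expressed in the gravity-aligned frame
// Note: the accelerometer fixes pitch and roll only; heading (yaw) is left
// to the AR framework's visual tracking.
fun tiltCorrectedField(accel: Vec3, mag: Vec3, baseline: Vec3): Vec3 {
    val up = accel.unit()               // measured "up" in the device frame
    val zWorld = Vec3(0.0, 0.0, 1.0)    // "up" in the gravity-aligned frame
    val v = up.cross(zWorld)            // rotation axis (unnormalized)
    val s2 = v.dot(v)                   // sin^2 of the tilt angle
    val c = up.dot(zWorld)              // cos of the tilt angle

    // Rodrigues' formula: R*m = m + v x m + (v x (v x m)) * (1 - c) / sin^2
    val rotated = if (s2 < 1e-12) mag   // device already level (we ignore the
    else {                              // exactly-upside-down case here)
        val k = (1.0 - c) / s2
        val vxm = v.cross(mag)
        val vxvxm = v.cross(vxm)
        Vec3(mag.x + vxm.x + k * vxvxm.x,
             mag.y + vxm.y + k * vxvxm.y,
             mag.z + vxm.z + k * vxvxm.z)
    }
    return rotated - baseline           // what remains is the local source's field
}
```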

How can we associate the filtered magnetometer data with the 3-D location information of AR?

The above two challenges primarily have to do with rotation, which is fairly easy for a smartphone to measure because of its internal gyroscope. Translation, or movement through space, is a much bigger challenge, and one we must address because magnetic fields have to be measured across space.

Most smartphones sense translational movement with their internal accelerometer, but this approach alone, akin to dead reckoning, is very poor: limited sensor precision produces errors that accumulate with each integration step (the short simulation below illustrates how quickly this drift grows). The AR software accounts for these problems by additionally using visual cues from the camera.
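As a rough illustration of why accelerometer-only dead reckoning fails, this Kotlin snippet double-integrates a small constant bias; the bias value is an illustrative assumption, not a measured spec for any particular phone.

```kotlin
// Double-integrating even a tiny constant accelerometer bias makes the
// position error grow quadratically with time.
fun main() {
    val bias = 0.05                    // m/s^2 of uncorrected accelerometer bias (assumed)
    val dt = 0.01                      // 100 Hz sampling
    var velocity = 0.0
    var position = 0.0
    for (step in 1..3000) {            // 30 seconds of "standing still"
        velocity += bias * dt          // first integration: velocity drifts linearly
        position += velocity * dt      // second integration: position drifts quadratically
        if (step % 1000 == 0)
            println("t = %4.1f s   drift = %5.1f m".format(step * dt, position))
    }
    // Prints roughly 2.5 m at 10 s, 10 m at 20 s, and 22.5 m at 30 s;
    // hopeless for mapping a field only centimeters across.
}
```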

The biggest challenges, however, are the following:

  • Even when we have plane detection from AR, we need to "link" the magnetometer data to points in the grid. This might sound like a simple task, but the ARCore and ARKit source code is not open to developers, so it is not possible to directly apply magnetic field data to the field grid through which the device has passed. (We've asked Google about this, and they said it was proprietary and that there are no plans to release it.) Overcoming this challenge will require "manually" associating sensor data acquired by our own software with field markers from ARCore or ARKit. At this point, we've decided that our first task is to plot data on a 2-D grid, perhaps only allowing "sweeps" in 2-D until we can build or "stack" our data (a sketch of this 2-D binning appears after this list).

  • The magnetometer is not located at the same point as the camera, meaning that any magnetic field data collected when the camera is centered at a visual point will need to be offset. The magnetometer's position can be determined manually by sweeping a small magnet across the phone and finding the point of highest reading. We can account for the difference in the reading (which would be lower if the magnetometer is not located at the camera) with the inverse cube law for a dipole, though this estimation might become significantly more complicated with a different magnetic field setup. Another option might be to aim an off-center point of the screen at the point of interest so that, at a given perpendicular distance from the source, the magnetometer sits directly over that point (a sketch of this offset correction also follows this list).
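Here is one possible sketch, in Kotlin, of the 2-D binning mentioned above: samples are dropped into world-space grid cells and averaged per cell. It reuses the `Vec3` type from the earlier sketch; `MagneticGrid` and its methods are hypothetical names of our own, not ARCore or ARKit APIs.

```kotlin
import kotlin.math.floor

// Accumulate magnetometer samples into a coarse 2-D world-space grid,
// averaging every sample that lands in the same cell. Positions come from
// the AR pose; field vectors are the tilt-corrected readings.
class MagneticGrid(private val cellSize: Double) {
    private class Cell { var sum = Vec3(0.0, 0.0, 0.0); var count = 0 }
    private val cells = HashMap<Pair<Int, Int>, Cell>()

    fun addSample(worldX: Double, worldY: Double, field: Vec3) {
        val key = Pair(floor(worldX / cellSize).toInt(), floor(worldY / cellSize).toInt())
        val cell = cells.getOrPut(key) { Cell() }
        cell.sum = Vec3(cell.sum.x + field.x, cell.sum.y + field.y, cell.sum.z + field.z)
        cell.count++
    }

    // The average field vector per visited cell, ready to hand to the renderer.
    fun averagedField(): Map<Pair<Int, Int>, Vec3> =
        cells.mapValues { (_, c) -> Vec3(c.sum.x / c.count, c.sum.y / c.count, c.sum.z / c.count) }
}
```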
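And a sketch of the offset correction: assuming the magnetometer's device-frame offset has been measured once per phone model (for example, with the small-magnet sweep described above), each sample's true world position is the camera position plus the rotated offset. The pose inputs here are stand-ins for whatever the AR framework reports for the current frame.

```kotlin
// Shift a sample from the camera's reported position to the magnetometer's
// true position in world coordinates.
fun magnetometerWorldPosition(
    cameraWorldPos: Vec3,               // camera position from the AR pose
    deviceToWorld: Array<DoubleArray>,  // 3x3 device-to-world rotation, row-major
    sensorOffset: Vec3                  // magnetometer offset in the device frame (m)
): Vec3 {
    val r = deviceToWorld
    val o = sensorOffset
    // Rotate the fixed device-frame offset into world coordinates...
    val w = Vec3(
        r[0][0] * o.x + r[0][1] * o.y + r[0][2] * o.z,
        r[1][0] * o.x + r[1][1] * o.y + r[1][2] * o.z,
        r[2][0] * o.x + r[2][1] * o.y + r[2][2] * o.z)
    // ...and add it to the camera position to get the sample's true location.
    return Vec3(cameraWorldPos.x + w.x, cameraWorldPos.y + w.y, cameraWorldPos.z + w.z)
}
```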

How can we appropriately visualize the field?

These remaining challenges are partly technical, partly cognitive.

  • We will determine how many vectors per unit time must be created to visualize the field without overloading the user (perhaps allowing a variable sensor rate and variable visualization display types within the app so users can determine this on their own; a sketch of one such density control follows this list). In addition to considering data density within each “sweep,” we will determine how many “sweeps” are necessary to appropriately visualize the field.

  • We must determine the best way to represent fields to help the user understand what they are seeing: field vectors, field lines, or even contour lines. To accomplish this, we will work with educational researchers and consultants to determine the priorities in visualization types (i.e., magnetic force vectors, field lines, and/or field density color plots), which will likely be influenced by the precision and accuracy of the field model that can be derived from the magnetic field and ARCore data.
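As one example of the kind of density control we have in mind, the sketch below thins an averaged field map down to a user-chosen arrow budget by keeping roughly every k-th cell. The function name and strategy are our own illustration; the real app might instead vary the sensor rate or the grid size.

```kotlin
// Thin an averaged field map to at most maxArrows vectors so the overlay
// stays readable, keeping cells in a stable order so the display doesn't
// flicker as new samples arrive.
fun thinForDisplay(
    field: Map<Pair<Int, Int>, Vec3>,
    maxArrows: Int                      // exposed as an in-app density setting
): Map<Pair<Int, Int>, Vec3> {
    if (field.size <= maxArrows) return field
    val stride = (field.size + maxArrows - 1) / maxArrows   // ceiling division
    return field.entries
        .sortedWith(compareBy({ it.key.first }, { it.key.second }))
        .filterIndexed { i, _ -> i % stride == 0 }
        .associate { it.key to it.value }
}
```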

Ultimately, the visualization will be overlaid on the camera view using Sceneform and SceneKit, the standard graphics frameworks for rendering 2-D and 3-D content on Android and iOS PMDs, respectively. We will return to the education team to ensure that we are using appropriate visualizations and that students can grasp the field holistically from the simple vectors.

The educational research that will result from this proposal has the potential to revolutionize our understanding of how students learn about 3-D magnetic fields using devices that are personal and mobile like no other general-purpose scientific tool. It may also provide significant insights into reforming the way the complex topic of fields is taught at the introductory physics level. The cognitive elements of this proposal will build on a rich history of NSF-funded discipline-based education research.

Most importantly, these technological and educational findings have the opportunity to impact the current and future workforce: as technologically literate students graduate and move into the workforce, the 92% of young adults who own a smartphone are likely to keep their PMDs and their sensor capabilities close at hand. Never before has there been such rapid acceptance of technology that is mobile, personal, and so highly capable that it can transform education and continue to support users even into their careers.

This work is funded by NSF Grant #1822728. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
