Just 10 meters removed from the hustle of 9th Avenue in New York City, a security guard sat adjacent to a single elevator bank, hidden in plain sight. As if I were claiming the winning lottery ticket, I furtively asked him if this was the correct place to pick up Google Glass. A nod confirmed that it was, and 10 floors up, I stepped into a sparsely (but tastefully) decorated waiting room. One simple LED sign hung on the wall above the only couch: GLASS.
This story actually begins several months earlier, when I was invited to join Google’s Healthcare Advisory Board. This was a gratifying accomplishment for someone who had always held true to the notion that eventually mobile devices and the Internet would change medicine forever. The attraction of a wearable device that could stream live video, connect to the Internet and facilitate live peer-to-peer interaction – without the use of your hands – seemed like a fantastic combination for teaching fellows and consulting colleagues. Thus, I lobbied the forward-thinking leader of the Advisory Board, Ryan Olohan, to allow me to trial Glass in a medical setting. Dozens of emails later, I was sitting with my “Guide,” about to join the very few lucky enough to become a Google Glass Explorer.
Imagine a scenario in which an EMT is evaluating a patient with chest pain, nausea and diaphoresis in the field. Vitals are obtained, an electrocardiogram is performed, and all data, including the 12-lead ECG, are transmitted to the STEMI team without the EMT ever using his hands.
Consider consulting a cardiothoracic surgeon on a patient while you are actually performing the angiogram, allowing the surgeon, or perhaps even the referring physician(s) to see the images as you do.
The social bane of the tablet-based EMR systems – the most glaring evidence that we are not connecting with our patients – is that the doctor often spends more time with their eyes on the screen than on the patient. What if we could see all we really needed by glancing upward? What if we could record our exam using voice recognition and image capture rather than a stylus or an iPad?
All of these eventualities will soon become a reality as wearable computers like Google’s Glass begin to make their way into mainstream medicine. The ability to connect directly with selected colleagues in real-time and allow them to see what you are doing and seeing may be the feature of these devices with the greatest impact. However, the utility of technology like Glass and similar devices has only just begun to be discovered.
Glass is worn like any other set of reading glasses, but has a prism perched above the user’s right eye that acts as the computer screen (Figures 1-2). The “frames” are actually one contiguous rim of titanium that is lightweight, flexible, and strong. Although it comes in its box with both clear and tinted non-prescription lenses, I was loath to use lenses at all, as I do not normally wear glasses, and the dark (sunglass) tint somewhat diminishes the ability to see the screen. The right arm of Glass houses the business end of the device. The battery rests behind your ear, where there is also a tiny bone conduction speaker (changed to a standard earpiece in the 2nd generation). Along your temple is a touch-sensitive trackpad used to scroll backward, forward, up, and down in order to perform different actions. This area also contains the processor and camera.
In order to use features other than the camera – e.g., performing Google searches, conferring with colleagues over Google Hangouts, and receiving and sending texts/emails – you must be connected to a WiFi or Bluetooth data connection. I easily hooked into my own hospital’s free WiFi network, and when outside the range of WiFi, I was able to use my iPhone as a personal hotspot. There is an Android app for Glass, but I am an iPhone loyalist and cannot speak to its utility. My Glass was managed via my Apple MacBook Air and my iPad Mini.
The “screen” or display portion of Glass sits just above your right eye so that it is out of your field of vision. You power on Glass by tilting your head back 30 degrees (this can be adjusted) or tapping the trackpad. The slightly awkward gaze upward to see the screen gets you some interesting (read: concerned) stares – particularly in the hospital setting where it appears you are having a partial seizure. According to Google, the display is equal to a 25-inch high-definition screen from eight feet away.1 It is not quite like looking at my laptop, but it certainly is clear enough to enjoy and bodes very well for future iterations, if this is their first stab at it.
Glass works using swiping motions and taps on the trackpad or with a fixed set of voice commands, all prompted from the home screen by saying “Okay, Glass…” Basic requests like “take a picture,” “record a video,” and “send a message” are easily understood and effective. Hangouts, Google’s answer to video conferencing, works with anyone in your Google Circles – but first you must create those Circles via a Google+ account and hope (or in my case, encourage) that all of your friends have a Google+ account and actually check it. Google Hangouts should be one of the most important features of Glass, as it gives the user the potential to conference in (via sound and/or video) colleagues, mentors, or students. Other useful features include the ability to receive and send emails using voice commands, navigation by simply speaking an address, and Google Now, which makes suggestions based on where you are and what you do.
In the cath lab at Morristown Medical Center in Morristown, New Jersey, we have already begun to stream live cases to colleagues and fellows for educational purposes – both ours and theirs. The ability to teach from a first-person perspective – they see what you are seeing – provides wonderful opportunities for remote learning without great expense. Essentially, Glass allows us to give a point-of-view account of the small details of a procedure, with the ability to field questions and instruct over Google Hangouts in real time. This facilitates discussion of, and familiarity with, relatively novel techniques like transradial intervention, percutaneous valves, and other structural work that some may not have exposure to at their own institution – and at very minimal cost, to boot. At the 2013 Mid-Atlantic Radial Symposium (MARS), a large part of the financial burden was attributable to the audio-visual production crew hired to transmit live cases one floor above. We could now transmit those cases across the world, to anyone in my Google Circles, at almost no cost.
One can also envision a scenario in which the roles are reversed and the Glass-wearer becomes the receiver of instruction: a surgeon in a remote location where money and mentors are scarce could call in a more experienced consultant, transmit video directly, and confer in real time, all while keeping both hands free to perform the task. The ability to video-conference with experts anywhere in the world, allowing them to visualize the case/patient/lesion/angiogram directly through Glass – without ever needing your hands – is, as they say in the digital realm, disruptive. It is something that can change the normal patterns of how we practice medicine.
The benefits are not limited to the doctor using wearable devices like Glass. As alluded to above, waiting for the transmission of ECGs from the field, first to the emergency room physician and then to the interventional cardiologist, can potentially waste critical time in STEMI activations. If first responders were able to utilize this technology, a HIPAA-compliant network consisting of the interventionalist, emergency department attending and cath lab nurse could all visualize the ECG and vitals simultaneously and in real-time, without the paramedic skipping a beat (pun intended). Allowing those responders to transmit critical data without using their hands or stopping/altering their actions may certainly save time, and potentially lives.
Several physicians and digital health developers have already begun to utilize Glass in the outpatient setting as well. Google Glass works through a series of “cards” that you can swipe through using a finger stroke on the side of the device. Each card might contain a brief summary of the demographics, clinical complaint, and history of present illness (HPI). Recent ECGs and imaging reports could eventually be added as well, and the doctor could read the essential elements of the patient’s visit by simply glancing upward. Documentation could be completed entirely via voice command, although we can see that presenting some awkward doctor-patient interactions.
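For the technically curious, those “cards” are how Glass software actually works under the hood: developers push timeline cards to the device through Google’s Mirror API, a simple REST service. The sketch below is purely illustrative – the patient data, function name, and card layout are invented for this example – but the payload fields follow the Mirror API’s timeline item format.

```python
import json

# Hypothetical sketch: build a patient-summary "card" as a Mirror API
# timeline item. The field names ("text", "menuItems", "notification")
# are from the Mirror API; the clinical content is invented.
def build_patient_card(name, age, complaint, hpi):
    return {
        "text": f"{name}, {age}\nCC: {complaint}\nHPI: {hpi}",
        "menuItems": [{"action": "READ_ALOUD"}],   # wearer can hear the card
        "notification": {"level": "DEFAULT"},      # chime when the card arrives
    }

card = build_patient_card("J. Doe", 58, "Chest pain",
                          "2 hrs substernal pressure, diaphoresis")

# In a real application this payload would be POSTed, with OAuth
# credentials, to https://www.googleapis.com/mirror/v1/timeline
print(json.dumps(card, indent=2))
```

The appeal of this model for medicine is its simplicity: a server assembles the card from the EMR, and Glass simply displays whatever arrives in the timeline.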
From office-based practice improvement to the operating room, the race to develop healthcare-specific software for Google Glass has commenced. Recently, Pristine reported success with EyeSight, a HIPAA-compliant solution that gives surgeons first-person video streaming and two-way audio, currently being piloted at the University of California, Irvine Medical Center. Other companies have developed software that integrates office-based practices with EMRs, capturing patient info on ‘cards’ that the physician can swipe through on Glass prior to entering the room, and then permitting the physician to document the visit with voice commands and pictures.
Doctors have traditionally been rapid adopters of technology, and mobile devices (tablets and smartphones) are no exception. A recent survey of 3,015 practicing physicians found that 62% had used a tablet for professional purposes and that more than half of tablet-owning doctors used the device at the point of care.2 The same survey found that more than two-thirds of physicians use video to self-educate and stay up to date with clinical information.2 The younger generation of physicians will likely be even more prolific users of – and more dependent on – their chosen devices. In the Postgraduate Medical Journal, a survey of 108 interns found that 94% owned a smartphone and 87% used it for work – most commonly for communication, and less frequently for medical and drug references.3 The ability to accomplish similar tasks without staring down at a tablet or smartphone seems inevitable.
I love Google Glass, but I think I love its concept more at this point. The potential of wearable, hands-free, mobile devices that transmit information quickly, privately and to selected recipients is a game-changer for so many healthcare settings. The largest limitation at this point seems to be the lack of dedicated software for the device to allow it to be HIPAA-compliant and efficient. There is little doubt that wearable technology will be integrated into medicine. The biggest question is, how soon?
Dr. Jordan Safirstein can be contacted at email@example.com.
1. Google Glass Frequently Asked Questions: Tech Specs. Google. Available online at https://support.google.com/glass/answer/3064128?hl=en. Accessed January 16, 2014.
2. Vecchione A. Doctors’ tablet use almost doubles in 2012. InformationWeek. May 16, 2012. Available online at http://www.informationweek.com/mobile/doctors-tablet-use-almost-doubles-in-2012/d/d-id/1104392?. Accessed January 16, 2014.
3. O’Connor P, Byrne D, Butt M, Offiah G, Lydon S, Mc Inerney K, Stewart B, Kerin MJ. Interns and their smartphones: use for clinical practice. Postgrad Med J. 2013 Nov 15. doi: 10.1136/postgradmedj-2013-131930.
4. Fox S. Pew internet: health. PewInternet. December 16, 2013. Available online at http://www.pewinternet.org/Commentary/2011/November/Pew-Internet-Health.aspx. Accessed January 16, 2014.