
How HIPAA-Compliant Are Digital Health Apps?

By Andie Burjek

Apr. 16, 2019

I recently received a pitch about how Amazon’s Alexa now has a “HIPAA-compliant upgrade” through which people can book appointments, ask health care questions and check the status of prescription deliveries. My editor and I had the same immediate reaction: “How can this possibly be HIPAA-compliant?”

I bring this up because I also recently read a Washington Post article about a pregnancy-tracking app that claims to be HIPAA-compliant. There was a lot to unpack here, and from a patient advocacy perspective, a lot of it was scary.

Before I get into that, a quick anecdote from my high school years. My dad gave me some job advice I’ve never forgotten: Watch out for yourself, and if you want to quit, don’t feel guilty about leaving a company you no longer want to work at. If the tables were turned and the company had to lay off a bunch of people, it would feel no guilt about letting you go; it would make an unemotional business decision. Employers mostly watch out for themselves, and employees should too. Loyalty can only go so far.

I know that many employers tout a “culture of health” nowadays and make broad claims about how much they care about the health of their employees. As a benefits and health writer, I don’t buy that. Not because I think employers are nefarious, but because I know that at the end of the day, it’s all about the business. That’s their No. 1 priority, just like my career should be my No. 1 priority. To believe otherwise is naïve.

Also read: Trendy Digital Health Firms Seek Solutions to Questions It Never Thought to Ask

I’d argue that this self-interest extends to health plans. As this Washington Post article stated, “The real benefit of self-tracking is always the company. People are being asked to do this at a time when they’re incredibly vulnerable and may not have any sense where that data is being passed.”

How can employers benefit from self-tracking? Through digital health apps that employees sign up for, employers can access aggregate data on employee health; the data is “de-identified,” which means it’s stripped of information like names, Social Security numbers and email addresses that could be used to identify patients. Employers who don’t pry into these anonymous identities can still use this data to understand the overall health of their organization and identify issues that afflict many employees, which could help inform and shape their health strategy.
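To make “de-identified” concrete, here is a minimal sketch of what stripping direct identifiers from a record can look like. It’s written in Python with invented field names and records; nothing here comes from the app or the article.

```python
# Hypothetical illustration of record-level de-identification:
# direct identifiers (name, Social Security number, email) are
# removed before the data is aggregated. All fields are invented.

DIRECT_IDENTIFIERS = {"name", "ssn", "email"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {key: value for key, value in record.items()
            if key not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "email": "jane.doe@example.com",
    "department": "Marketing",
    "due_date": "2019-11-02",
}

print(de_identify(record))
# {'department': 'Marketing', 'due_date': '2019-11-02'}
```

Notice that fields like department and due date survive the scrubbing. Those leftovers are exactly what makes the next point possible.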

As for sneakier employers, the article notes that it’s “relatively easy” for companies to identify patients (in this case, women using the pregnancy app) “based on information relayed in confidence, particularly in workplaces where few women are pregnant at a time.” Someone could, for example, cross-reference the app’s data with other data. That could ultimately affect people’s health care costs or coverage.
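Here is an equally hypothetical sketch of why that cross-referencing works: when few people in a group share an attribute, the attributes that survive de-identification can point to exactly one person. All names and records below are invented.

```python
# Hypothetical example of re-identification by cross-referencing a
# "de-identified" app export against another data source, such as
# an HR roster. Every record here is made up for illustration.

app_records = [
    # No direct identifiers, but attributes remain.
    {"department": "Marketing", "due_date": "2019-11-02"},
]

hr_roster = [
    {"name": "Jane Doe", "department": "Marketing"},
    {"name": "Bob Roe", "department": "Engineering"},
    {"name": "Ann Poe", "department": "Engineering"},
]

for app_row in app_records:
    # Match the app record against the roster on a shared attribute.
    candidates = [person for person in hr_roster
                  if person["department"] == app_row["department"]]
    # If only one employee shares that attribute, the "anonymous"
    # record now has a name attached to it.
    if len(candidates) == 1:
        print(f"Re-identified: {candidates[0]['name']} -> {app_row}")
```

In privacy research, attributes like these are called quasi-identifiers, and they are why de-identification alone is not a guarantee of anonymity.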

An excerpt:

The apps, [health and privacy experts] say, are designed largely not to benefit the women but their employers and insurers, who gain a sweeping new benchmark on which to assess their workers as they consider the next steps for their family and careers. … Experts worry that companies could use the data to bump up the cost or scale back the coverage of health care benefits, or that women’s intimate information could be exposed in data breaches or security risks.

This is why I’m skeptical about digital health apps. I’ve heard arguments on both sides, but if it’s possible for someone’s private medical information to be used against them, how is that OK? Why aren’t there more protections for patients? And how can current patient protection rules possibly keep up with the digital age?

To quote an informative article from The Verge: “In 1996, the year Congress passed its landmark health privacy law [HIPAA], there was no Apple Watch, no Fitbit, no Facebook support groups or patients tweeting about their medical care. … [It] is still a key piece of legislation protecting our medical privacy, despite being woefully inadequate for dealing with the health-related data we constantly generate outside the health care system.”

The Post article brought up something else noteworthy: the app’s 6,000-word “terms of use” agreement that women must consent to. Many of us in the health space have probably heard the statistic about how few people can define basic health care terms like “deductible” and “premium,” which suggests low health literacy rates. So how is a person supposed to understand the legal and health care jargon in a 6,000-word “terms of use” agreement? Is that realistic? Do people really know what could happen with their data?

Further, according to the article, while a spokeswoman said the company doesn’t sell aggregate data for advertising purposes, the “terms of agreement” tell a different story. The company has a “royalty-free, perpetual, and irrevocable license, throughout the universe” to “utilize and exploit” de-identified personal information for scientific research and “external and internal marketing purposes.”

Digital health companies are a relatively new thing. And in any communications they make — whether it’s a press release, an executive’s quote in the media or the employee they pick to make a statement to the press — they’re marketing themselves. Of course the focus will be on the positive.

That’s why it’s healthy to be critical of these new institutions that have the potential to greatly impact people’s lives, health and security. If nobody pushes forward to seek change that could protect people’s health privacy, then the future health care environment is not going to be a safe place for patients. Patient advocacy groups should have a greater say in how these digital health companies operate. Insurance companies and employers can easily benefit from the wide array of data in these apps, but what about patients?

One final thought comes from an interview I had about six months ago with Candice Sherman, the CEO of the Northeast Business Group on Health. The NEBGH released a fascinating guide about genomic medicine and employers that came out of a roundtable of key stakeholders: employers, clinical experts, benefits consultants and genomic vendors. The missing stakeholder was the patient.

Also read: New Wellness Bill HR 1313 Gets Flak for Genetic Privacy Concerns

I asked Sherman about that, and she explained that health privacy concerns would keep patients from participating in a discussion like this. I understand that, logically, and I am by no means trying to criticize Sherman or the NEBGH roundtable; I love that they met to discuss a health-related topic that’s only going to become more prominent.

That said, I think it would be valuable for businesses or business groups to find a way to include the patient stakeholder in conversations like this, perhaps through an advocacy group or an expert who can represent patients’ interests without facing the same privacy concerns. There are options.

This is a lot of information, but this topic is important now, and it’s not going anywhere anytime soon. In short: de-identified, aggregate data doesn’t always stay anonymous; just because a digital solution is HIPAA-compliant doesn’t mean it’s harmless to patients; and patients deserve to have their voices represented in health care conversations.

I understand the power of data to help organizations see big-picture trends, but if that data can easily be used against an employee, it’s not worth it.

Health data privacy is important. I’m curious about the discussions we all need to have and how laws should be rethought to represent patients: your employees.

Andie Burjek is an associate editor at Workforce.com.
