As AI advances, Oregon lawmakers seek to specify only humans can be nurses

A narrow bill would be one of the first attempts in Oregon to regulate the rapidly changing technology that some worry could supplant medical professionals
Marketing material for an AI program that performs some functions of nurses. | SCREENSHOT
January 30, 2025

At a time when artificial intelligence is shaking up health care while sparking concerns and lawsuits, state Rep. Travis Nelson wants to make sure AI can’t get away with pretending to be a nurse.

Nurses in Oregon and elsewhere have become increasingly anxious since a tech company announced it had developed a $9-an-hour AI program that could take over work normally done by nurses paid ten times that rate.

Nelson, a nurse and a Portland Democrat, is responding with House Bill 2748, a narrowly focused bill that would ban any “nonhuman” entity, including artificial intelligence, from using the title of “nurse.” 

The bill comes as AI is already remaking health care. High-tech algorithms are being used to increase revenues and lower costs across the industry, such as by triaging patients and assisting with diagnosis, imaging and other care decisions. It’s making coverage decisions and accelerating hospital “throughput” by directing providers in a manner intended to discharge patients as quickly as possible.

But while supporters cite increased efficiency, critics say AI is being used by companies like UnitedHealth to deny patients access to needed care. AI is less accountable and can hurt care quality by replacing provider judgment and decision-making authority — or replacing them outright, some fear.

Just one page long, the bill is one of the more modest state-led attempts to put safeguards around AI in health care and would be among the first efforts to regulate the technology in Oregon. 

Nelson said he’d like a broader bill to let patients opt out of health care providers using AI to treat them. But he said that probably won’t happen before the session ends in June, and it’s important to act fast given how quickly AI is evolving and spreading.

State Rep. Travis Nelson, D-Portland, speaks at the CCO Oregon conference in September. | JAKE THOMAS/THE LUND REPORT

“By the time the session ends, we’ll have taken another big leap on the AI front,” he said. “I think it’s keeping pace with where we are.”

Jennifer Mensik Kennedy, the Oregon-based president of the American Nurses Association, told The Lund Report the bill is needed to protect patients.

“A lot of this is public safety, right?” Kennedy said. “The public needs to know if I call myself a ‘nurse’ what that entails. Someone off the street can’t call themselves a nurse because there is an assumption of education and licensure.” 

Lawmakers in other states have taken up a number of bills regulating AI, with mixed results. In California, Gov. Gavin Newsom last year signed the “Physicians Make Decisions Act,” which requires physicians to review decisions made by AI technology that deny coverage.

President Donald Trump has moved to deregulate the development of AI, overruling Biden administration safeguards.

$9-an-hour nurse sparks worries over AI

In 2021, Thailand-based Botnoi Group began marketing an “AI nurse” to “pre-assess” patients for diseases. Last year, tech company Hippocratic AI announced it was developing “empathetic AI healthcare agents” that could relieve staffing shortages by completing “low risk, non-diagnostic, patient facing tasks over the phone.”

The company compared a nurse’s $90-an-hour wage to the $9-an-hour cost of using the software. It did not respond to a request for comment from The Lund Report.

The announcement sparked an outcry from nurses, including in Oregon, as well as continuing concerns that computers can’t be trusted to do nurses’ jobs. The idea of replacing nurses with AI “threatens patient safety, undermines trust in healthcare, and diminishes the human aspects of nursing such as empathy, critical thinking, and decision-making,” said an Oregon Nurses Association statement.

Kennedy said that many AI companies don’t understand the work of nurses. She recalled watching a company demonstrate an AI product at a conference by presenting a transcript between a nurse and a patient. The patient, she recalled, was a low-income, diabetic, Hispanic single mother who was caring for her mother, who was going in for surgery. 

The AI program produced recommendations for the nurse to give to the patient, she said. Those included encouraging the patient to eat healthier by going to a Chipotle franchise and referring her to expensive home care agencies that do not take insurance. Kennedy called the recommendations racist and unhelpful.

“You can augment the care the nurse gives with AI, but you cannot replace it,” she said.  

Keep humans involved, expert says

Dr. William Hersh, a professor of medical informatics and clinical epidemiology at Oregon Health and Science University, told The Lund Report that most widely used AI health care technologies keep “a human in the loop” who can correct errors.

“The goal is to streamline,” he said. “Computers slow doctors down.”

ChatGPT, a widely used AI chatbot, has shown potential to diagnose patients. Hersh said that could be useful in lower-resource settings, but a clinician would still need to be involved in diagnosing patients.

Hersh said that the legal questions around AI in health care remain significant.

“What if an AI system gets it wrong?” he said. “Who gets sued?”

Bill gets first hearing

During a hearing on the bill Tuesday in the House Behavioral Health and Health Care Committee, state Rep. Cyrus Javadi, R-Tillamook, said that AI products are getting “really, really close” to a “software nurse.” He asked if these products should carry a label to alert patients, be required to meet standards or if we should “just wait and see.” 

Kennedy responded that she partially wanted to “wait and see what happens in the marketplace.” But she added that there is the issue of who is accountable for advice given to patients by an AI. 

“I can give you some really good examples of bad AI advice in the last six months from technology companies,” she said. “It’s just not there yet.”

Next steps for the bill haven’t been scheduled.


You can reach Jake Thomas at [email protected] or at @jthomasreports on X.