By John Ullyot
In a Nov. 1 story on VA's hospital rating system, New York Times reporter Dave Philipps ignored the department's multiple, thorough, on-the-record responses to his questions and failed to reflect VA's position in almost any way.
Worse, Philipps falsely implied that VA offered almost no comment and engaged little with him for his article, when the opposite is the case.
Ordinarily, we would expect more journalistic integrity from the New York Times, but given Philipps’ history of false and biased reporting on VA, this is unfortunately par for the course.
Consider the facts:
Philipps’ story says: “When the Department of Veterans Affairs released the annual ratings of its hospitals this fall, the facility in Atlanta dropped to the bottom, while the one in West Haven, Conn., shot to the top. It was something of a mystery as to why.”
In reality: It wasn’t a mystery at all. As we told Philipps on Oct. 24, “When a facility has only very small changes in metric values when VA as a system is improving overall, facilities will show a ‘trivial change’ from their past performance, and their rating relative to their peers may drop by one or more stars. This is what happened in Atlanta.”
Philipps’ story says: “What is most worrisome to some experts is the role that the star ratings now play in grading performance of hospitals and their managers. They say it creates an incentive to conceal problems rather than grapple with them, in order to collect bonuses or sidestep penalties.”
In reality: As we told Philipps on Oct. 31, the premise of this allegation is false, as “[Strategic Analytics for Improvement and Learning] includes multiple dimensions of performance and measures that would be extremely difficult to ‘game’ or manipulate, such as surveys of Veterans and reviews of medical documentation done by independent third parties.”
Philipps’ story says: “The gaming can put patient care on the line. At the hospital in Roseburg, Ore., administrators turned away some of the sickest patients to keep them from affecting the facility’s scores, doctors there have said.”
In reality: This is false. In Roseburg, the facility was simply basing admissions decisions on its actual clinical capabilities.
VA has asked the New York Times multiple times for evidence backing up its Roseburg claims, and the New York Times has not been able to provide it.
And as we told Philipps Oct. 31, the premise of this allegation is false, “as SAIL includes multiple dimensions of performance and measures that would be extremely difficult to ‘game’ or manipulate, such as surveys of Veterans and reviews of medical documentation done by independent third parties.”
Philipps’ story says: “The chief of surgery at another veterans’ hospital in a major metropolitan area said in an interview that administrators discussed whether the hospital should not perform certain operations because they could impact the hospital’s quality statistics.”
In reality: As we told Philipps Oct. 31, “the premise of this allegation is false, as surgical outcomes are tracked by the VA Surgical Quality Improvement Program, not SAIL.”
Further, we asked the New York Times for evidence backing up these allegations, and the New York Times has not been able to provide it.
Philipps’ story says: “But the department declined to make key officials available to discuss the system.”
In reality: In response to his questions, we sent Philipps detailed responses totaling dozens of pages, but he included only one sentence representing the department’s viewpoint in his story.
Philipps’ story says: “The department refused multiple requests to interview Dr. Almenoff, and he did not respond to direct inquiries seeking comment.”
In reality: In response to his questions, we sent Philipps detailed responses from Dr. Almenoff totaling dozens of pages, but he included only one sentence representing the department’s viewpoint in his story.
Philipps' failure to mention that, or to include any of Dr. Almenoff's responses, is simply a misrepresentation of the facts.
Philipps’ story says: “The New York Times contacted eight veterans’ hospitals, including those in Atlanta and West Haven, asking to interview their directors about Sail. None were willing.”
In reality: Some regional and facility directors sent Philipps statements praising SAIL. The fact that the New York Times failed to mention that or include any of their responses is simply a misrepresentation of the facts.
Philipps’ story says: “The department says its star ratings help keep veterans informed.”
In reality: That is precisely the opposite of what we told Philipps. On Oct. 31, we told him, “those comments demonstrate a fundamental misunderstanding of the purpose of SAIL.
“SAIL is a rating system for internal improvement. The ratings are released publicly to motivate all facilities to do better.
“VA developed www.accesstocare.va.gov as a way for Veteran patients to find useful and easy-to-understand information on quality and wait-time information about VA hospitals.”
Philipps’ story says: “‘I wanted to move away from Sail,’ said Dr. Shulkin…”
In reality: Shulkin fully supported the use of SAIL during his tenure and was the driving force behind publicly sharing SAIL ratings. He repeatedly and wholeheartedly embraced the technically sophisticated analytic tools that SAIL provides.
Philipps’ story says: “Agency employees say that only Dr. Almenoff and a few members of his staff know exactly how the system weighs and adjusts the 60 publicly available measures that go into a score.”
In reality: As we told Philipps on Oct. 31, “VA updates its performance metrics and the weights used to calculate overall performance each year. Medicare does the same. This helps mitigate the tendency to ‘teach to the test.’”
We also provided Philipps with detailed information about our training, accessible to all staff, about how SAIL works, including its component metrics, their associated weights, and the general approach to scoring. We made it clear that SAIL includes tools that can be used to drill down to individual patients to identify where care may have gone awry.
Finally, we explained that VA does not reveal the specific details of its risk adjustment protocols, because those metrics cannot be reproduced locally, and knowledge of the specific statistical adjustment procedures is unnecessary for identifying clinical care processes that need to be improved.
Philipps’ story says: a VA employee “alerted the department’s Office of Accountability and Whistleblower Protection that Sail was statistically unsound and open to gaming, and submitted a lengthy paper showing how a host of problems made the system a ‘credibility crisis waiting to happen.’ The reply came nearly a year later: The department planned to take no action.”
In reality: We provided Philipps with detailed rebuttals to the employee’s arguments, all of which he ignored. As we told Philipps Oct. 4, her paper “demonstrated the author’s fundamental misunderstanding of how SAIL works and its purpose.”
John Ullyot is VA Assistant Secretary for Public and Intergovernmental Affairs, and a Marine Veteran.