A Comprehensive Guide to Selecting and (Potentially) Replacing PACS: Navigating the Decision-Making Processes

This phase represented the culmination of the PACS selection process. It was designed to provide a comprehensive evaluation of the top three vendors through a series of onsite demonstrations, in-depth RFP assessments, and cost analyses, with the objective of allowing the committee and the department to scrutinize each system's functionality and alignment with the department's specific needs.

Onsite Demonstrations

Each of the three finalist vendors was invited to perform a 3-day onsite demonstration so that the committee and other members of the department could assess the features of its PACS.

All radiologists and modality managers were invited to attend the demonstrations. A sign-up tool was used to ensure that demonstrations were not overcrowded. The leadership team worked with the radiologists' administrative assistants to ensure that each member of the selection committee was scheduled to attend each demonstration. Radiology department leadership made the PACS selection a priority: radiologists were encouraged to help one another attend the demonstrations, and selection committee members were prioritized for office time on the days of the onsite demonstrations.

Prior to the demonstrations, the Radiology informatics team prepared an anonymized imaging dataset. Each vendor was asked to sign a business associate agreement so that the data could be shared safely. After signing, each vendor was given the data and asked to load it onto its system so that attendees could see how the system would perform with the department's own real-world data. The dataset was selected to highlight potential challenges, including patients with more than 1000 comparison studies, studies with more than 20,000 images, studies with more than 1000 series, studies with mixed modalities, and studies with atypical data types (e.g., enhanced DICOM).
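
As an illustration of how such a dataset might be curated, the sketch below (hypothetical Python, not the department's actual tooling) assumes a study-level index with basic per-study metadata and flags studies matching the challenge criteria above; the two enhanced CT/MR SOP Class UIDs are given only as examples of atypical data types.

```python
from dataclasses import dataclass

@dataclass
class StudySummary:
    """Minimal per-study metadata assumed to be available from an internal study index."""
    patient_id: str
    accession: str
    modalities: set[str]        # modalities present in the study, e.g., {"CT", "SR"}
    image_count: int            # total images in the study
    series_count: int           # total series in the study
    prior_study_count: int      # comparison studies available for the patient
    sop_classes: set[str]       # DICOM SOP Class UIDs present in the study

# Example SOP Class UIDs for enhanced multi-frame objects ("atypical" data types)
ENHANCED_SOP_CLASSES = {
    "1.2.840.10008.5.1.4.1.1.2.1",  # Enhanced CT Image Storage
    "1.2.840.10008.5.1.4.1.1.4.1",  # Enhanced MR Image Storage
}

def is_challenging(study: StudySummary) -> bool:
    """Flag studies likely to stress a PACS, mirroring the selection criteria above."""
    return (
        study.prior_study_count > 1000          # patients with >1000 comparison studies
        or study.image_count > 20000            # studies with >20,000 images
        or study.series_count > 1000            # studies with >1000 series
        or len(study.modalities) > 1            # mixed-modality studies
        or bool(study.sop_classes & ENHANCED_SOP_CLASSES)  # atypical data types
    )

def select_demo_dataset(index: list[StudySummary]) -> list[StudySummary]:
    """Return the subset of indexed studies worth loading onto a demonstration system."""
    return [s for s in index if is_challenging(s)]
```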

Vendors were responsible for setting up their systems and orchestrating the flow of their presentations. Their teams, which ranged in size from two to eight individuals, adapted the demonstrations to the interests and questions of the attending radiologists and technologists, who participated in small groups of one to three people to foster a more interactive and personalized experience. All demonstrations were held in a reading room designed for testing and training, where radiologists could adjust the conditions to their liking, just as they would in clinical reading rooms.

Assessment of Onsite Demonstrations

The selection committee deliberated on the most effective method to evaluate the different PACS. They considered several approaches:

1. Utilizing the same assessment form as used in previous phases.
2. Modifying the assessment form so that evaluations were based on roles.
3. Assigning specific system features to individuals to enable consistent evaluation across vendors.

Ultimately, the committee opted for the second approach, modifying the original survey to cater to the specific perspectives of radiologists, technologists, and administrators. This adaptation included more open-ended questions to capture feedback on system usability and functionality, and additional rating options for radiologists to assess various toolsets.

As before, each element was rated on a 5-point Likert scale, and committee members were asked to provide an overall assessment score between 1 and 100. Non-committee members were also invited to provide assessments; their evaluations included open-ended questions for expressing opinions and an overall assessment on the same 1 to 100 scale.

After the surveys were completed, the committee chair categorized the open-ended responses to identify the most liked and most disliked features of each PACS. The categories and items mentioned more than once are included in Table 5.

Table 5 Qualitative assessment of the positive and negative features of each PACS vendor following onsite demonstrations

Request for Proposal

The leadership team authored an RFP document (Supplement Document 2). This extensive document contained approximately 920 targeted questions covering a wide range of topics crucial for assessing each system's suitability [12, 22–26]. These topics included pricing, data migration capabilities, system architecture, required hardware and software, ongoing support and maintenance, implementation and training processes, system upgrades, administration, and workflows specific to various roles such as general users, reading room assistants, technologists, and radiologists.

The vendors were given 1 week to review the RFP and submit any clarifying questions. The leadership team then responded to these inquiries the following week, and the clarifications were shared with all vendors to maintain transparency and fairness in the bidding process. Vendors had 4 weeks total to respond to the RFP from the time it was originally sent. As part of their responses, vendors were also required to include detailed projections of the 5-year total cost of ownership for their system.

Request for Proposal Assessment

All vendors responded to the RFP. Each vendor’s response included multiple files beyond the original documents. The total number of files provided by vendors ranged from 4 to 57.

The committee chair independently reviewed each response, applying a structured scoring system to ensure a semi-objective evaluation. Scores were assigned based on the following scale:

No score: Applied where no response was provided or the question was not applicable (often due to conditional if/then scenarios).

0 (Does Not Meet Expectations): Used for responses that failed to meet the baseline requirements or did not provide the requested features.

1 (Meets Expectations): Assigned to responses that adequately met the RFP requirements.

2 (Exceeds Expectations): Given to responses that surpassed what was asked, indicating a higher value or innovation beyond the basic criteria.

The average score for each vendor was calculated and then multiplied by 100 to create an RFP response score. Items not scored were not included when the average was calculated. A score of 100 or more indicated that responses generally met or exceeded expectations, while scores below 100 suggested deficiencies in meeting the requirements.
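
A minimal sketch of this calculation, assuming item scores are stored with `None` for unscored items (hypothetical Python, not the committee chair's actual tooling):

```python
from statistics import mean
from typing import Optional

def rfp_response_score(item_scores: list[Optional[int]]) -> float:
    """Average the 0/1/2 item scores, excluding unscored items, then scale by 100.

    100 or more suggests responses generally met or exceeded expectations;
    below 100 suggests deficiencies in meeting the requirements.
    """
    scored = [s for s in item_scores if s is not None]
    if not scored:
        raise ValueError("No scored items to average")
    return mean(scored) * 100

# Hypothetical example: two unscored (not applicable) items are excluded
print(rfp_response_score([1, 2, None, 1, 0, 1, None, 2]))  # ~116.7
```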

The RFP responses were also comparatively ranked based on the quality of key features. Where differences between vendors were meaningful, rankings were assigned to highlight those vendors that provided superior responses. If vendors tied on a feature, they were given the same rank, and the ranks of subsequent vendors were adjusted accordingly. The RFP comparison score for each vendor was calculated by summing its ranks across all features, with a lower score indicating a more favorable evaluation.
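
The ranking step can be illustrated with a small sketch (hypothetical Python) that assigns competition-style ranks per feature, with tied vendors sharing a rank and subsequent ranks shifted accordingly, and then sums each vendor's ranks into the comparison score:

```python
from collections import defaultdict

def competition_ranks(quality_by_vendor: dict[str, float]) -> dict[str, int]:
    """Rank vendors on a single feature: the best response gets rank 1; ties share
    a rank and the next distinct response skips ahead (e.g., 1, 1, 3)."""
    ordered = sorted(quality_by_vendor.items(), key=lambda kv: kv[1], reverse=True)
    ranks: dict[str, int] = {}
    prev_quality: float | None = None
    prev_rank = 0
    for position, (vendor, quality) in enumerate(ordered, start=1):
        if quality == prev_quality:
            ranks[vendor] = prev_rank              # tie: reuse the previous rank
        else:
            ranks[vendor] = prev_rank = position
            prev_quality = quality
    return ranks

def rfp_comparison_score(feature_quality: dict[str, dict[str, float]]) -> dict[str, int]:
    """Sum each vendor's per-feature ranks; a lower total is more favorable.

    In practice only features with meaningful differences were ranked; the quality
    values here are assumed inputs from the comparative review."""
    totals: dict[str, int] = defaultdict(int)
    for quality_by_vendor in feature_quality.values():
        for vendor, rank in competition_ranks(quality_by_vendor).items():
            totals[vendor] += rank
    return dict(totals)

# Hypothetical example with three vendors and two key features
features = {
    "data_migration": {"Vendor A": 4.0, "Vendor B": 4.0, "Vendor C": 3.0},
    "architecture":   {"Vendor A": 5.0, "Vendor B": 3.0, "Vendor C": 4.0},
}
print(rfp_comparison_score(features))  # {'Vendor A': 2, 'Vendor B': 4, 'Vendor C': 5}
```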

Cost Comparison

Each vendor provided a quote from which a 5-year total cost of ownership could be calculated. Two of the three remaining vendors also provided a cost-per-study quote for volumes exceeding the contracted study volume.
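
One way such quotes might be combined into a comparable figure is sketched below; the contracted volumes, per-study rates, and TCO values are hypothetical and illustrative only.

```python
def projected_five_year_cost(quoted_tco: float,
                             contracted_annual_volume: int,
                             expected_annual_volume: int,
                             cost_per_overage_study: float | None = None) -> float:
    """Project 5-year spend: the quoted total cost of ownership plus overage charges,
    applied only if the expected study volume exceeds the contracted volume and the
    vendor quoted a per-study overage rate."""
    annual_overage = max(0, expected_annual_volume - contracted_annual_volume)
    if annual_overage and cost_per_overage_study is not None:
        return quoted_tco + 5 * annual_overage * cost_per_overage_study
    return quoted_tco

# Hypothetical figures: $2.5M quoted TCO, 10,000 studies/year over contract at $1.50/study
print(projected_five_year_cost(2_500_000, 500_000, 510_000, 1.50))  # 2575000.0
```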

Outcome of Onsite Demonstrations and RFP Scoring

The comprehensive analysis of the assessment data was conducted by the committee chair. The analysis incorporated various metrics to provide a multifaceted view of each vendor’s performance and compatibility with the department’s needs.

The metrics assessed included the following:

1. Mean overall score from Selection Committee members (1–100; higher is better)
2. Median overall score from Selection Committee members (1–100; higher is better)
3. Sum of ratings from Selection Committee members (20–100; higher is better)
4. Average rating score from Committee members (1–5; higher is better)
5. Sum of weighted scores from Committee members, calculated by multiplying the relevant pillar scores, the concept scores, and their average ratings (higher is better)
6. RFP comparison score (lower is better)
7. RFP response score (higher is better)
8. Other reviewers' mean overall score (1–100; higher is better)
9. Other reviewers' median overall score (1–100; higher is better)
10. 5-year total cost of ownership
11. Cost per study

The results of each assessment and the ranking of each assessment are included in Table 6.
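
A sketch of how several of the committee-derived metrics above could be computed from the survey data is shown below (hypothetical Python; the weighting scheme is assumed to carry over the pillar and concept scores from earlier phases, and the "sum of ratings" computation shown is one plausible reading of that metric).

```python
from statistics import mean, median

def committee_metrics(overall_scores: list[float],
                      element_ratings: list[list[int]],
                      element_weights: list[float]) -> dict[str, float]:
    """Aggregate one vendor's committee survey data into several of the metrics above.

    overall_scores  : each member's 1-100 overall assessment of the vendor
    element_ratings : one list per member, holding the 5-point Likert rating of each element
    element_weights : hypothetical per-element weights (pillar score x concept score
                      carried over from earlier phases); same length as a rating list
    """
    # Committee-average Likert rating for each survey element
    per_element_means = [mean(col) for col in zip(*element_ratings)]
    all_ratings = [r for ratings in element_ratings for r in ratings]
    return {
        "mean_overall": mean(overall_scores),
        "median_overall": median(overall_scores),
        # One plausible reading of "sum of ratings": the per-element committee means summed
        "sum_of_ratings": sum(per_element_means),
        "average_rating": mean(all_ratings),
        # Weighted sum: each element's mean rating scaled by its carried-over weight
        "weighted_sum": sum(w * m for w, m in zip(element_weights, per_element_means)),
    }
```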

Table 6 Results of assessment after different modes of final assessment

Other Methods of Assessment

In addition to the structured onsite demonstrations and RFP assessments, the leadership team employed other methods to further validate vendor capabilities and reliability.

Reference Calls

The leadership team conducted reference calls with customers of the two non-incumbent vendors to gain insight into real-world use and performance. To ensure a balanced view, the team spoke with one customer identified through its personal network and another suggested by the vendor. These discussions typically involved a PACS administrator and a radiologist from the reference site, who provided detailed feedback on various aspects of their PACS. At the outset of each call, the leadership team asked participants to disclose any potential conflicts of interest. The subsequent conversation covered implementation success, vendor support and responsiveness, system architecture, system performance, and downtime.

In-person Meetings

The team also leveraged the Society for Imaging Informatics in Medicine (SIIM) Annual Meeting to engage directly with the vendors. These interactions focused on addressing specific concerns and deficiencies noted during earlier assessments. Discussion topics included system architecture, business continuity, cloud and AI readiness, total cost, and the potential relationship with the department.
