PSC Benchmarking Methodology

Objectives of a PSC Benchmarking Methodology
To develop a fair system, you need to make sure you compare apples to apples and oranges to oranges, and in shipping this means comparing ships of the:

  • Same Type
  • Same Fleet Segment (DWT-wise)
  • Same Year of Build (YoB)
  • Same Port (performance is compared port by port)

We have applied these principles consistently throughout the platform to make sure everyone is treated fairly.


Splitting the Global Fleet into Segments

Benchmarking is impossible unless you split the global fleet into segments, which we have defined as follows (a classification sketch follows the list):

  1. Bulker – Handysize (<35k DWT)
  2. Bulker – Handymax (35-50k DWT)
  3. Bulker – Supramax (50-67k DWT)
  4. Bulker – Panamax (67-100k DWT)
  5. Bulker – Cape (>100k DWT)
  6. Bundle: All dry bulk (Segments 1, 2, 3, 4, 5)
  7. General Cargo
  8. Tanker – Small Product (<25k DWT)
  9. Tanker – MR1/MR2 (25-60k DWT)
  10. Tanker – LR1/LR2 (60-125k DWT)
  11. Tanker – Suezmax (125-200k DWT)
  12. Tanker – VLCC (>200k DWT)
  13. Bundle: All tankers (Segments 8, 9, 10, 11, 12)
  14. LNG/Gas Carriers
  15. LPG Carrier
  16. Container – Feeders (<10k DWT)
  17. Container – Large (10-90k DWT)
  18. Container – Ultra Large (>90k DWT)
  19. Bundle: All containers (Segments 16, 17, 18)
  20. Vehicle Carrier
  21. Ro-Pax
  22. Offshore
  23. Other Ship Type
  24. Bundle: All ships (All above segments)
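
For illustration, here is a minimal Python sketch of how this segmentation could be encoded (the function name, the type labels and the handling of exact boundary values are our assumptions, not platform internals):

  def fleet_segment(ship_type: str, dwt: float) -> str:
      """Map a ship to its benchmarking fleet segment using the DWT bands above.
      Boundary handling (e.g. exactly 35k DWT) is an assumption; the source
      list does not state which side a boundary value falls on."""
      bands = {
          "Bulker": [(35_000, "Bulker - Handysize"), (50_000, "Bulker - Handymax"),
                     (67_000, "Bulker - Supramax"), (100_000, "Bulker - Panamax"),
                     (float("inf"), "Bulker - Cape")],
          "Tanker": [(25_000, "Tanker - Small Product"), (60_000, "Tanker - MR1/MR2"),
                     (125_000, "Tanker - LR1/LR2"), (200_000, "Tanker - Suezmax"),
                     (float("inf"), "Tanker - VLCC")],
          "Container": [(10_000, "Container - Feeders"), (90_000, "Container - Large"),
                        (float("inf"), "Container - Ultra Large")],
      }
      if ship_type not in bands:
          # Types without DWT bands (e.g. "General Cargo", "LPG Carrier",
          # "Vehicle Carrier") form their own segment.
          return ship_type
      for upper, name in bands[ship_type]:
          if dwt < upper:
              return name

  # Example: fleet_segment("Bulker", 180_000) -> "Bulker - Cape"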

Making Sure What We Count Makes Sense
We measure deviations from the assigned benchmark on two (2) main parameters (a short computation sketch follows this list):

  • The Deficiencies Per Inspection (DPI), i.e. Number of Deficiencies / Number of Inspections for a given period
  • The Detention Rate (DER), i.e. Number of Detentions / Number of Inspections x 100 for a given period
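
A minimal Python sketch of these two formulas (the function names are ours, and the sample counts in the comments are hypothetical):

  def dpi(deficiencies: int, inspections: int) -> float:
      # Deficiencies Per Inspection for the period
      return deficiencies / inspections

  def der(detentions: int, inspections: int) -> float:
      # Detention Rate: detentions per 100 inspections for the period
      return detentions / inspections * 100

  # Hypothetical counts: 20 deficiencies and 1 detention over 6 inspections
  # give dpi(20, 6) ≈ 3.33 and der(1, 6) ≈ 16.7, the figures that appear in
  # the worked example below (the underlying counts there are not stated).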

For every ship there is a benchmark, given by the AVERAGE performance of ships of the same YoB, Fleet Segment and Port, calculated for a given period, as per the example below.

The Deviation is the percentage difference between the actual performance and the Benchmark (which is the average): Deviation = (Actual - Benchmark) / Benchmark x 100%.
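
Expressed in the same Python sketch (the function name is ours):

  def deviation(actual: float, benchmark: float) -> float:
      # Percentage deviation of actual performance from the benchmark (average);
      # negative values mean fewer deficiencies/detentions than the average.
      return (actual - benchmark) / benchmark * 100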


How We Calculate the Performance with a Worked Example
This is a ship benchmarking example for Ship Z, in the Cape fleet segment, with YoB: 2010.

This is the actual performance of the ship against its benchmark for the period under investigation (the figures below are the ones used in the calculation that follows):

  • DPI: 3.33 (actual) vs 2.50 (benchmark)
  • DER: 16.7 (actual) vs 3.29 (benchmark)

Calculation for the above example:
DPI Deviation: (3.33 - 2.50)/2.50 x 100% = +33.2% | DER Deviation: (16.7 - 3.29)/3.29 x 100% = +407.6%
Overall Benchmark Performance (BEP) for the Period, as the average of the DPI/DER deviations = (33.2 + 407.6)/2 = +220.4%
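
Plugging the figures above into the deviation formula gives a quick arithmetic check (the inputs are the rounded values shown, so the results can differ slightly from unrounded platform data):

  def deviation(actual: float, benchmark: float) -> float:
      return (actual - benchmark) / benchmark * 100

  dpi_dev = deviation(3.33, 2.50)   # +33.2%
  der_dev = deviation(16.7, 3.29)   # +407.6%
  bep = (dpi_dev + der_dev) / 2     # +220.4%
  print(f"DPI {dpi_dev:+.1f}% | DER {der_dev:+.1f}% | BEP {bep:+.1f}%")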

The following remarks apply:

  1. DPI: Deficiencies Per Inspection
  2. DER: Detention Rate (detentions per 100 inspections)
  3. BEP: Benchmarking Performance
  4. Port Benchmarks are for the same Port, Fleet Segment (Cape) & Age (YoB = 2010), and are then calculated/adjusted per ship
  5. Fleet Benchmarks are calculated on the basis of sums/averages of Ship Inspections & Benchmarks (see the sketch after this list)
  6. When comparing DPI and DER values, the smaller number beats the competition; therefore, the further the value falls below zero, the better the performance
  7. Overperforming means beating the average, e.g. BEP = -13%
  8. Underperforming means exceeding the average, e.g. BEP = +13%
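
Remark 5 does not spell out the aggregation; the sketch below shows one plausible reading, an inspection-weighted roll-up of per-ship period totals (the function name, data layout and sample figures are our assumptions, not the platform's documented method):

  def fleet_kpis(ships: list[dict]) -> tuple[float, float]:
      # Fleet-level DPI and DER from per-ship period totals; each dict holds
      # 'inspections', 'deficiencies' and 'detentions' counts.
      inspections = sum(s["inspections"] for s in ships)
      deficiencies = sum(s["deficiencies"] for s in ships)
      detentions = sum(s["detentions"] for s in ships)
      return deficiencies / inspections, detentions / inspections * 100

  # Example: two ships with 6 and 4 inspections respectively
  fleet = [
      {"inspections": 6, "deficiencies": 20, "detentions": 1},
      {"inspections": 4, "deficiencies": 6, "detentions": 0},
  ]
  fleet_dpi, fleet_der = fleet_kpis(fleet)   # DPI = 2.6, DER = 10.0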

