PSC Top 10 Performers


Up to Q3 2024

About the PSC Top 10 Performers

At RISK4SEA, we are dedicated to providing actionable PSC intelligence to illuminate Port State Control (PSC) performance and support the journey toward sustainable shipping. With a focus on embracing excellence, we are pleased to announce the first release of the PSC Top 10 Performers, alongside the launch of our new website, which provides transparent access to the documentary evidence and in-depth research underpinning our findings.

We have meticulously mapped the entire PSC ecosystem—including PSC stations, ships, managers, and PSC inspections (PSCIs)—and finalized a robust benchmarking methodology. The methodology is designed to be practical, accurate, and verifiable, so that it is both meaningful and applicable in real-world scenarios, and it incorporates geotagged PSCI data for the specific ship type, fleet segment, and age group.

We are excited to introduce this initiative, which aims to make PSC intelligence accessible to a broader segment of the market. We intend to update and circulate these rankings quarterly, each edition covering the last 36 months of data.

By doing so, we aspire to illuminate PSC performance and promote Safety excellence, while also recognizing smaller players who often excel but may not receive the same spotlight as industry giants.

Apo Belokas
Founder & CEO

Methodology

1. Objectives of a PSC Benchmarking Methodology
To develop a fair system, you need to compare like with like—apples to apples and oranges to oranges—and in shipping this means comparing ships of the:

  • Same Type
  • Same Fleet Segment (DWT-wise)
  • Same Year of Build (YoB)
  • Same Port, when comparing performance

We have applied these principles consistently throughout the platform to make sure every entity is treated fairly.


2. Identify PSC Data used for Benchmarking

RISK4SEA uses a team of analysts, third-party vendors, and experts to collect, evaluate, and validate the data used in the platform. As raw input, the model uses PSC data (inspections and detentions, along with the accompanying details of each inspection) assigned to the Port, Country, Manager, Class, and Flag at the time of each inspection. The PSC data parameters accounted for here cover ocean-going performance across all ports, countries, and MoUs.


3. Set the Ship Types to be used for Benchmarking & Split Global Fleet into Segments

Benchmarking cannot proceed unless we split the global fleet into segments, and the RISK4SEA platform uses the following segments (see the sketch after this list):

  1. Bulker – Handysize (<35k DWT)
  2. Bulker – Handymax (35-50k DWT)
  3. Bulker – Supramax (50-67k DWT)
  4. Bulker – Panamax (67-100k DWT)
  5. Bulker – Cape (>100k DWT)
  6. Bundle: All dry bulk (Segments 1, 2, 3, 4, 5)
  7. General Cargo
  8. Tanker – Small Product (<25k DWT)
  9. Tanker – MR1/MR2 (25-60k DWT)
  10. Tanker – LR1/LR2 (60-125k DWT)
  11. Tanker – Suezmax (125-200k DWT)
  12. Tanker – VLCC (>200k DWT)
  13. Bundle: All tankers (Segments 8, 9, 10, 11, 12)
  14. LNG/Gas Carriers
  15. LPG Carriers
  16. Container – Feeders (<10k DWT)
  17. Container – Large (10-90k DWT)
  18. Container – Ultra Large (>90k DWT)
  19. Bundle: All containers (Segments 16, 17, 18)
  20. Vehicle Carrier
  21. Ro Pax
  22. Offshore
  23. Other Ship Type
  24. Bundle: All ships (All above segments)
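
For illustration only, the following Python sketch shows one way such a DWT-based classification could be expressed in code. The function name, the label strings, and the handling of values that fall exactly on a threshold are assumptions for demonstration, not part of the RISK4SEA platform.

```python
# Illustrative sketch: map a ship type and DWT to one of the fleet segments
# listed above. Boundary handling at the exact thresholds is an assumption.

def fleet_segment(ship_type: str, dwt: float) -> str:
    """Return an illustrative fleet-segment label for a ship."""
    if ship_type == "Bulker":
        if dwt < 35_000:
            return "Bulker - Handysize"
        if dwt < 50_000:
            return "Bulker - Handymax"
        if dwt < 67_000:
            return "Bulker - Supramax"
        if dwt < 100_000:
            return "Bulker - Panamax"
        return "Bulker - Cape"
    if ship_type == "Tanker":
        if dwt < 25_000:
            return "Tanker - Small Product"
        if dwt < 60_000:
            return "Tanker - MR1/MR2"
        if dwt < 125_000:
            return "Tanker - LR1/LR2"
        if dwt < 200_000:
            return "Tanker - Suezmax"
        return "Tanker - VLCC"
    if ship_type == "Container":
        if dwt < 10_000:
            return "Container - Feeders"
        if dwt < 90_000:
            return "Container - Large"
        return "Container - Ultra Large"
    # General Cargo, LNG/Gas, LPG, Vehicle Carrier, Ro Pax, Offshore, Other
    return ship_type

print(fleet_segment("Bulker", 180_000))  # Bulker - Cape
```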

4. Set the Age groups to be used for Benchmarking
In-depth analysis has indicated that any KPI that may be used for PSC performance benchmarking is sensitive to age; therefore, one crucial step in setting a fair comparison is to define either the exact age and/or age groups, so that new ships are not compared with old ships.

The platform may use both approaches, using the following five (5) age groups (see the sketch after this list):

  • 0-5 Years of age
  • 6-10 Years of age
  • 11-15 Years of age
  • 16-20 Years of age
  • Older than 20 Years of age
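
As a simple illustration, the sketch below assigns a ship to one of the five age groups above from its Year of Build (YoB) and a reference year; the cut-off convention (age = reference year minus YoB, grouped on whole years) is an assumption.

```python
# Illustrative sketch: assign a ship to one of the five age groups above.
# The age convention (reference year minus YoB) is an assumption.

def age_group(yob: int, reference_year: int = 2024) -> str:
    age = reference_year - yob
    if age <= 5:
        return "0-5 Years of age"
    if age <= 10:
        return "6-10 Years of age"
    if age <= 15:
        return "11-15 Years of age"
    if age <= 20:
        return "16-20 Years of age"
    return "Older than 20 Years of age"

print(age_group(2012))  # 11-15 Years of age (the Cape example in Section 7.2)
```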

 

5. Set the Time Intervals for Benchmarking
A limited amount of data, and therefore of PSCIs, makes any comparison unreliable due to insufficient data pools; therefore, the following time interval has been accounted for in this benchmark:

  • Last 36 calendar months as of the date of issue, as identified at the bottom of the report

 

6. Set the KPIs to be used for Benchmarking
Arriving at the final stage before the performance analytics are calculated, the most critical step is to define the Key Performance Indicators (KPIs) that the model will use. The following KPIs are used (a minimal calculation sketch follows the definitions):

DPI: Deficiency per Inspection (Value, Number)
Definition: Average number of deficiencies per inspection.
Performance: A smaller figure indicates better PSC performance.

DER: Detention Rate (%)
Definition: Number of detentions per 100 inspections, expressed as a percentage.
Performance: A smaller figure indicates better PSC performance.
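
The sketch below illustrates how these two KPIs could be computed from a list of inspection records. The list-based input format is an assumption for demonstration; the sample figures are taken from the worked example in Section 7.2.

```python
# Illustrative sketch of the two KPIs defined above, computed from an
# entity's PSC inspection records for a period.

def dpi(deficiencies: list[int]) -> float:
    """Deficiency per Inspection: total deficiencies / number of inspections."""
    return sum(deficiencies) / len(deficiencies)

def der(detentions: list[int]) -> float:
    """Detention Rate: detentions per 100 inspections, in %."""
    return 100.0 * sum(detentions) / len(detentions)

# Ship#1 from Section 7.2: 6 inspections, 20 deficiencies, 1 detention
defs = [3, 2, 4, 7, 3, 1]
dets = [0, 0, 0, 1, 0, 0]
print(round(dpi(defs), 2), round(der(dets), 2))  # 3.33 16.67
```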


7. Benchmarking Performance Calculation
To benchmark against a value, you need to clearly identify the methodology and how the benchmark values have been set and processed. The following approach is applied here:

7.1 KPI Benchmarking Performance Calculation: Making Sure What We Count Makes Sense
Benchmarking is about accounting for deviations from the assigned benchmark, mainly on two (2) parameters:

  • The Deficiency per Inspection (DPI), i.e. Number of Deficiencies / Number of Inspections for a given period
  • The Detention Rate (DER), i.e. Number of Detentions / Number of Inspections x 100 for a given period

Each entity is benchmarked for each respective KPI against a set benchmark, as follows:

  • Each ship is benchmarked against the average of other ships of similar type and age group at the same port
  • Each manager is benchmarked against the average of other managers on the basis of the fleet they manage for the period under review

For every Ship, the benchmark is the AVERAGE performance of ships of the same YoB, Fleet Segment, and Port, calculated for a given period, as per the worked example provided below.

For every Manager, the benchmark is provided by the SUM of the fleets of the same group that they operate, calculated for a given period, as per the worked example provided below.

The Benchmarking Performance (the Deviation) is the percentage deviation of the actual performance from the Benchmark (which is the average):

KPI Benchmarking Performance (%) = [ KPI(value) – KPI(benchmark) ] / KPI(benchmark) x 100

Where KPI benchmarks are set, the same calculation may be applied to a set of ships, e.g. fleets and managers, on the following basis:

KPI(benchmark value) = Average [ KPI(type A1, age group A1), KPI(type B2, age group B2), KPI(type C3, age group C3) ]

At any given time, an Overall Benchmarking Performance (BP) score can be set by simply averaging all the factors used, since each KPI is benchmarked against its corresponding benchmark.
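
As an illustration of the formula above, the following sketch computes a KPI Benchmarking Performance and an Overall BP as the simple average of the per-KPI scores; the function names and sample figures are illustrative assumptions.

```python
# Illustrative sketch of the deviation formula above: the Benchmarking
# Performance of a KPI is its relative deviation from the benchmark, in %.

def kpi_benchmarking_performance(kpi_value: float, kpi_benchmark: float) -> float:
    return (kpi_value - kpi_benchmark) / kpi_benchmark * 100.0

def overall_bp(*kpi_bps: float) -> float:
    """Overall Benchmarking Performance: simple average of the per-KPI scores."""
    return sum(kpi_bps) / len(kpi_bps)

# Generic example: a DPI of 3.0 against a benchmark of 4.0
print(kpi_benchmarking_performance(3.0, 4.0))  # -25.0 (better than benchmark)
```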


7.2 Worked Example on Ship Benchmarking Performance Calculation
For demonstration purposes, Ship#1 is used, in the Cape fleet segment, with YoB 2012, corresponding to the 11-15 age group. This is the actual performance of the ship for the period under investigation:

PSCI # | Port | MoU | #DEFs | DET | Benchmark DPI | Benchmark DER | BP (DPI) | BP (DER) | BP (Avg)
1 | Port A | Paris | 3 | 0 | 4.12 | 6.23% | -27.2% | -100.0% | -63.6%
2 | Port B | Paris | 2 | 0 | 3.93 | 4.14% | -49.1% | -100.0% | -74.6%
3 | Port C | Tokyo | 4 | 0 | 5.23 | 3.12% | -23.5% | -100.0% | -61.8%
4 | Port D | USCG | 7 | 1 | 8.76 | 8.45% | -20.1% | 1,083.4% | 531.7%
5 | Port E | USCG | 3 | 0 | 2.16 | 2.15% | 38.9% | -100.0% | -30.6%
6 | Port F | Med | 1 | 0 | 1.16 | 6.12% | -13.8% | -100.0% | -56.9%
Average | | | 3.33 | 16.67% | 4.23 | 5.04% | -21.1% | 231.0% | 104.9%

Where:
#DEFs: Number of deficiencies
DET: Number of detentions (either 0 or 1 at each port)
DPI: Deficiency per Inspection KPI
DER: Detention Rate KPI, per 100 inspections (%)
Port Benchmarks: for the same Port, Fleet Segment (Cape), and Age (YoB = 2012)
Benchmarking Performance (BP): the deviation of the actual performance from the benchmark
Ship/Fleet Benchmarking Performance: the average of the DPI and DER Benchmarking Performance

Calculation for the above example (computed with unrounded averages; figures shown are rounded):
DPI Benchmarking Performance: (3.33 – 4.23) / 4.23 x 100% = -21.1%
DER Benchmarking Performance: (16.67 – 5.04) / 5.04 x 100% = +231.0%
Overall Benchmarking Performance for the period, as the average of the DPI and DER figures: (-21.1 + 231.0) / 2 = +104.9%
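
The following sketch reproduces the Ship#1 figures above from the per-inspection data and the port benchmarks; the tuple layout is an assumption for demonstration.

```python
# Illustrative sketch reproducing the Ship#1 worked example above:
# period-level DPI/DER, their benchmarks, and the Benchmarking Performance.

inspections = [
    # (#DEFs, DET, benchmark DPI, benchmark DER %)
    (3, 0, 4.12, 6.23),
    (2, 0, 3.93, 4.14),
    (4, 0, 5.23, 3.12),
    (7, 1, 8.76, 8.45),
    (3, 0, 2.16, 2.15),
    (1, 0, 1.16, 6.12),
]

n = len(inspections)
dpi_actual = sum(d for d, *_ in inspections) / n                 # ~3.33
der_actual = 100.0 * sum(det for _, det, *_ in inspections) / n  # ~16.67
dpi_bench = sum(b for *_, b, _ in inspections) / n               # ~4.23
der_bench = sum(b for *_, b in inspections) / n                  # ~5.04

dpi_bp = (dpi_actual - dpi_bench) / dpi_bench * 100              # ~-21.1
der_bp = (der_actual - der_bench) / der_bench * 100              # ~+231.0
print(round(dpi_bp, 1), round(der_bp, 1), round((dpi_bp + der_bp) / 2, 1))
# -21.1 231.0 104.9
```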

 

7.3 Worked Example on Manager/Fleet Benchmarking Performance Calculation
For demonstration purposes, a fleet of 6 ships is calculated, including Ship#1 from the calculation above, with each ship having gone through the Ship Benchmarking Performance Calculation (as per above). This is the actual performance of the fleet for the period under investigation:

Ship # | Ship | Segment | DPI | DER | Benchmark DPI | Benchmark DER | Ship BP
1 | Ship #1 | Cape | 3.33 | 16.67% | 4.23 | 5.04% | 104.9%
2 | Ship #2 | Cape | 1.14 | 0.00% | 2.50 | 3.30% | -77.2%
3 | Ship #3 | Cape | 1.60 | 0.00% | 2.50 | 3.30% | -68.0%
4 | Ship #4 | Cape | 1.00 | 0.00% | 2.50 | 3.30% | -80.0%
5 | Ship #5 | Cape | 1.20 | 0.00% | 2.50 | 3.30% | -76.0%
6 | Ship #6 | Cape | 4.00 | 12.00% | 5.00 | 7.00% | 25.7%
Average | | | 2.05 | 4.78% | 3.21 | 4.20% | -11.3%

The Abbreviations and the notes used in the Ship Worked example apply here as well.

In the above example, the benchmarking performance of the fleet is -11.3%, as shown in the Average row of the Ship BP column.
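
The sketch below reproduces the fleet figure above. Note that the -11.3% in the Average row is consistent with computing the BP from the fleet-level average DPI/DER and their benchmarks, rather than by averaging the per-ship BP column; the data layout is an assumption for demonstration.

```python
# Illustrative sketch reproducing the fleet worked example above.

ships = [
    # (DPI, DER %, benchmark DPI, benchmark DER %)
    (3.33, 16.67, 4.23, 5.04),  # Ship #1
    (1.14, 0.00, 2.50, 3.30),   # Ship #2
    (1.60, 0.00, 2.50, 3.30),   # Ship #3
    (1.00, 0.00, 2.50, 3.30),   # Ship #4
    (1.20, 0.00, 2.50, 3.30),   # Ship #5
    (4.00, 12.00, 5.00, 7.00),  # Ship #6
]

n = len(ships)
fleet_dpi = sum(s[0] for s in ships) / n   # ~2.05
fleet_der = sum(s[1] for s in ships) / n   # ~4.78
bench_dpi = sum(s[2] for s in ships) / n   # ~3.21
bench_der = sum(s[3] for s in ships) / n   # ~4.20

fleet_bp = ((fleet_dpi - bench_dpi) / bench_dpi * 100
            + (fleet_der - bench_der) / bench_der * 100) / 2
print(round(fleet_bp, 1))  # -11.3
```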

 

7.4 Performance Tiers
To make sure that all reports offer the same level of reliability, a scale is used across the full spectrum of the RISK4SEA platform to convert the actual Benchmarking Performance of ships and managers into a Performance Tier. For the purposes of this report, the following scale has been applied (a minimal mapping sketch follows the table):

Performance Tier | Benchmarking Performance (from) | Benchmarking Performance (to)
Top 10% | -100% | -95%
Top 20% | -95% | -70%
Top 30% | -70% | -40%
Top 40% | -40% | -10%
Average 50% | -10% | 10%
Bottom 40% | 10% | 40%
Bottom 30% | 40% | 70%
Bottom 20% | 70% | 100%
Bottom 10% | 100% | –
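
For illustration, the sketch below maps a Benchmarking Performance figure to the tier scale above; the treatment of values that fall exactly on a boundary is an assumption.

```python
# Illustrative sketch: map a Benchmarking Performance (%) to a tier.
# Boundary handling (<= upper bound) is an assumption.

TIERS = [
    (-95.0, "Top 10%"),
    (-70.0, "Top 20%"),
    (-40.0, "Top 30%"),
    (-10.0, "Top 40%"),
    (10.0, "Average 50%"),
    (40.0, "Bottom 40%"),
    (70.0, "Bottom 30%"),
    (100.0, "Bottom 20%"),
]

def performance_tier(bp_percent: float) -> str:
    for upper, label in TIERS:
        if bp_percent <= upper:
            return label
    return "Bottom 10%"

print(performance_tier(-77.2))  # Top 20%   (Ship #2 in Section 7.3)
print(performance_tier(104.9))  # Bottom 10% (Ship #1 in Section 7.2)
```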

8. Clarifications

8.1 Managers Benchmarked and Minimum PSCI Limit to be included in the Benchmarking Pools
Research has indicated that, when plotting managers against their PSCIs, the median number of PSCIs per manager (i.e. the number of PSCIs corresponding to approximately half of the managers) should normally be the limit below which managers are excluded. Normally, this excluded pool of PSCIs is in the range of 10-20% of the total segment PSCIs for the period under review (a minimal filtering sketch follows the table below).

This is why we provide the full Benchmarking Dataset Breakdown for each Top Performers segment:

Benchmarking Dataset Breakdown
PSCI Pool | PSCI Range | Managers | PSCIs | PSCI %
Not Benchmarked | 0-9 PSCIs | 1,034 | 4,625 | 11.6%
Small PSCI Pool | 10-24 PSCIs | 586 | 9,041 | 22.6%
Medium PSCI Pool | 25-49 PSCIs | 299 | 10,395 | 26.0%
Large PSCI Pool | ≥50 PSCIs | 157 | 15,920 | 39.8%
Total | | 2,076 | 39,981 | 100%
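
As an illustration of the inclusion rule in 8.1, the sketch below filters a hypothetical set of managers by a minimum-PSCI cut-off. The manager names, counts, and the fixed cut-off of 10 (implied by the "Not Benchmarked 0-9 PSCIs" pool above) are assumptions; a median-based cut-off, as described in the text, is also shown.

```python
# Illustrative sketch of the inclusion rule: managers with fewer PSCIs than
# the cut-off are excluded from the benchmarking pools. All names and
# counts below are hypothetical.

from statistics import median

psci_counts = {"Mgr A": 4, "Mgr B": 12, "Mgr C": 31, "Mgr D": 55, "Mgr E": 8}

cutoff = 10  # fixed floor implied by the "0-9 PSCIs" pool above
benchmarked = {m: n for m, n in psci_counts.items() if n >= cutoff}
excluded = {m: n for m, n in psci_counts.items() if n < cutoff}

print(sorted(benchmarked))              # ['Mgr B', 'Mgr C', 'Mgr D']
print(sorted(excluded))                 # ['Mgr A', 'Mgr E']
print(median(psci_counts.values()))     # 12 -> an alternative, median-based cut-off
```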

8.2 Missing PSCIs
Missing PSCIs may occur (especially zero-deficiency PSCIs at West African ports that are not officially reported to the port or national authority, or internationally to any MoU or other party). This does NOT affect the actual results presented in this report, for the following reasons:

  1. These reports (if any) have NOT been reported anywhere other than to the ship, and therefore cannot be verified or inserted into the platform unless provided by the Manager of the ship.
  2. We have reason to believe that the phenomenon is of a similarly limited extent and may be applicable to other managers as well.
  3. Given the methodology used, the addition of an extra PSCI (with the exception of a Detention) will NOT materially alter the benchmarking performance of a fleet, which is based on the sum of PSCIs from many ships, as per the examples presented above.
  4. The basis of the benchmarking is to have and apply a set of rules and criteria equally across the full spectrum of the data under review, and this is what we have done in this report.

PSC Top 10 Performers

Up to Q3 2024

PSC Intelligence

Powered by PSC Inspections

We host the largest and most comprehensive PSC intelligence database, going beyond just PSCIs and deficiencies. Our platform offers deep insights into actual inspections, from calculating PSCI windows for every port call to generating tailored checklists for specific ports, ships, and managers—ensuring everything is prepared efficiently and effectively.

PSC KPIs

Explore DPI, DER & KPIs vs Ship Age on specific Ports


Challenging Ports

Take a deep dive into the most challenging ports


POCRA

Get POCRA for your Ship @ next port of Call with Real Data


Best Performers

Find the top performers on each fleet segment


PSC WiKi

Explore & learn from the latest PSC Procedures
