Server 4.9.1 - V1
Testing Results Summary
| Excellentable Installed? | Excellentable Count | FCP (First Contentful Paint) (s) | SI (Speed Index) (s) | LCP (Largest Contentful Paint) (s) | TTI (Time to Interactive) (s) | TBT (Total Blocking Time) (ms) | CLS (Cumulative Layout Shift) |
|---|---|---|---|---|---|---|---|
| ❌ | 0 | 0.8 | 0.8 | 0.8 | 1.9 | 70 | 0.003 |
| ✅ | 0 | 1.0 | 1.2 | 1.0 | 5.5 | 10 | 0.003 |
| ✅ | 1 | 1.0 | 3.6 | 1.0 | 6.5 | 860 | 0.037 |
| ✅ | 2 | 1.2 | 3.3 | 1.2 | 6.7 | 1,050 | 0.122 |
| ✅ | 3 | 1.0 | 3.6 | 1.0 | 8.1 | 1,870 | 0.193 |
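The marginal cost of each additional Excellentable can be read off the app-installed rows above; a quick sketch of that calculation (values copied from the table):

```python
# App-installed rows from the summary table:
# (excellentable_count, tti_seconds, tbt_ms)
rows = [(0, 5.5, 10), (1, 6.5, 860), (2, 6.7, 1050), (3, 8.1, 1870)]

# Marginal cost of each additional Excellentable on the page.
for (n0, tti0, tbt0), (n1, tti1, tbt1) in zip(rows, rows[1:]):
    print(f"{n0} -> {n1}: +{tti1 - tti0:.1f} s TTI, +{tbt1 - tbt0} ms TBT")
```

The jump from 0 to 1 Excellentable adds roughly 850 ms of TBT, with each further Excellentable adding a comparable amount by 3.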
Definitions
FCP (First Contentful Paint)
FCP measures how long it takes the browser to render the first piece of DOM content after a user navigates to your page. Images, non-white `<canvas>` elements, and SVGs on your page are considered DOM content; anything inside an iframe isn't included.
SI (Speed Index)
Speed Index measures how quickly content is visually displayed during page load. Lighthouse first captures a video of the page loading in the browser and computes the visual progression between frames.
LCP (Largest Contentful Paint)
LCP measures when the largest content element in the viewport is rendered to the screen. This approximates when the main content of the page is visible to users.
TTI (Time to Interactive)
TTI measures how long it takes a page to become fully interactive. A page is considered fully interactive when:
- the page displays useful content (measured by First Contentful Paint),
- event handlers are registered for most visible page elements, and
- the page responds to user interactions within 50 milliseconds.
CLS (Cumulative Layout Shift)
CLS measures the cumulative score of all unexpected layout shifts that occur during the entire lifespan of the page. A layout shift happens whenever a visible element changes position from one rendered frame to the next.
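All six metrics above appear as audits in a Lighthouse JSON report; a minimal sketch of pulling them out of a parsed report (the sample values here are illustrative, not from the runs above):

```python
# Audit IDs as they appear in a Lighthouse JSON report.
METRIC_AUDITS = {
    "FCP": "first-contentful-paint",
    "SI": "speed-index",
    "LCP": "largest-contentful-paint",
    "TTI": "interactive",
    "TBT": "total-blocking-time",
    "CLS": "cumulative-layout-shift",
}

def extract_metrics(report: dict) -> dict:
    """Return {metric: numericValue} from a parsed Lighthouse report."""
    return {name: report["audits"][audit]["numericValue"]
            for name, audit in METRIC_AUDITS.items()}

# Illustrative report fragment (values are placeholders, not real measurements):
sample = {"audits": {audit: {"numericValue": 0.0} for audit in METRIC_AUDITS.values()}}
print(extract_metrics(sample))
```

Note that Lighthouse reports times in milliseconds (`numericValue`), so the seconds columns in the summary table are converted values.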
Version Improvements
Shows the latest 5 versions.
| Version | Improvements |
|---|---|
| 4.10.1 - Spread Charts | Added the SpreadJS Charts library (2.2 MB) |
| 4.10.1 - No Spread Charts | Dynamic loading of the Spread export libraries |
| 4.9.1 - V2 | Cleaned up the Atlassian plugin XML files |
| 4.9.1 - V1 | Dynamic loading of Excellentable |
| 4.8.2.2 | Earlier Excellentable setup |
Testing Methodology
Explain our methodology and why it is trustworthy.
Using Ghost Inspector to run automated tests and capture the data.
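Ghost Inspector exposes a REST API for triggering test suites programmatically; a sketch of kicking off a run, assuming the v1 suite-execute endpoint (the endpoint shape and parameter names should be verified against the current Ghost Inspector API docs, and the suite ID and API key are placeholders):

```python
import urllib.parse
import urllib.request

API_BASE = "https://api.ghostinspector.com/v1"  # Ghost Inspector REST API (assumed v1)

def execute_suite_url(suite_id: str, api_key: str) -> str:
    """Build the URL that triggers a suite run via the v1 API."""
    query = urllib.parse.urlencode({"apiKey": api_key})
    return f"{API_BASE}/suites/{suite_id}/execute/?{query}"

def run_suite(suite_id: str, api_key: str) -> bytes:
    """Trigger the suite and return the raw JSON response (per-test results and timings)."""
    with urllib.request.urlopen(execute_suite_url(suite_id, api_key)) as resp:
        return resp.read()

# Placeholder credentials; a real run needs your own suite ID and API key.
print(execute_suite_url("SUITE_ID", "API_KEY"))
```

Splitting URL construction from the network call keeps the request easy to inspect before anything is actually executed.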
What we tested
Hardware
Mainly Server.
What hardware are we running the tests on?
Which environment, and what are its specs?
Testing Results for Specific Actions
Performance Test Results
Each action could be tested appropriately with Ghost Inspector.
Maybe test only the most recent version and the previous version, to show the changes?
Or just show the most recent version, since a run history is already available.
| Description | Response time (ms) with the app installed | Response time (ms) without the app installed |
|---|---|---|
| Login | | |
| Create a page in Confluence | | |
| Edit a page | | |
| View page | | |
| Add comment to a page | | |
| Create blog | | |
| View blog | | |
| View dashboard | | |
| Logout | | |
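In practice the response-time columns would be filled from Ghost Inspector step timings; as a rough illustration of the measurement itself, a small timing helper that takes the median of repeated runs of an action (the `view_page` stub here is hypothetical and stands in for a real HTTP request against a Confluence instance):

```python
import time

def time_action(action, repeats: int = 5) -> float:
    """Median wall-clock time in ms for a callable that performs one page action."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()
        samples.append((time.perf_counter() - start) * 1000.0)
    return sorted(samples)[len(samples) // 2]

# Stub standing in for a real action such as "View page" over HTTP.
def view_page():
    time.sleep(0.01)

print(f"View page: {time_action(view_page):.0f} ms")
```

Using the median rather than the mean keeps a single slow outlier (e.g. a cold cache) from skewing the reported value.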