What We Learnt from Optimising for Fashion Clients
Two of the most important rules of experimentation are: a) Listen to your users, and b) Test everything.
Endless Gain are firm believers in these two principles. Our UX researchers and optimisation strategists are constantly at work finding issues and trying to resolve them for our clients.
Along the way, they learn a lot about both users and the different eCommerce product verticals our clients belong to.
Fashion eCommerce sales in the UK grew 5% between December 2020 and December 2021 (according to IRP Commerce data), but conversion rates and average order values in the sector declined. This underlines the need for customer experience optimisation in eCommerce.
Below we share some of our learnings from optimising for clients in the online fashion sector.
How Do Differences in User Behaviour Impact Customer Experience for Online Fashion Retailers?
It’s almost impossible to find two brands whose users behave the same. The differences in user behaviour among brands can be subtle or very broad, and understanding these differences is what allows us to continuously improve customer experience for each brand.
Here are some common behavioural differences that affect user experiences and optimisation strategies:
Age of the target audience: The average age of a brand user can affect the type of images that should be used: lifestyle images vs studio images, showing a model with a face or not, etc.
Brand affinity: The structure or layout of a page needs to differ depending on the type of products sold on a website, the brand's reputation, and how users view the brand. For example, luxury brands need heavy imagery, while functional products need more description. High-end fashion brands also tend to have a core returning user base, and new customers are often harder to win because of higher-than-average price points.
Brand values vs audience expectations: Some clients, mostly high-end fashion brands, expect their customers to know, or at least understand, their brand and its values. Some also expect those values to influence users to buy from them. However, this is not always the case: in user research sessions we've seen users who like a brand's products but know nothing about its values.
Brand values in UX and design: Among high-end fashion brands, we’ve noticed brand values influencing the product imagery on their site. However, this can confuse users if they don’t know what the brand values are. User sessions have shown us that most users go to a website to browse or shop, and not to research the brand they want to shop with.
Banner blindness in cookie messaging: Many people ignore cookie messaging when it is presented in a banner format: they do not accept or deny cookies, and don't even interact with it to close it. This causes several issues:
When users don't accept cookies, their on-site behaviour cannot be tracked. This can also block several tools that work in the background, including experimentation platforms and analytics, which impacts the performance of ongoing experiments and marketing campaigns that depend on user behaviour tracking.
Many sites have USP banners, sale banners, etc. at the top of the page, often as part of experimentation plans or marketing campaigns. If the cookie banner also sits at the top of the page and users ignore it without closing it, these elements go unseen, diminishing the effectiveness of the ongoing campaigns.
To work around this, more eCommerce retailers need to move cookie messaging into a full-page pop-up so that users cannot avoid making a choice. The set of actions offered can also be experimented with: for example, does Accept/Deny work better than Accept All/Let Me Choose/Manage Cookies?
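The consent-gating problem described above can be sketched in a few lines. This is a minimal illustration rather than a real consent-management integration: the tool names and consent categories are invented. The point is simply that tools default to off, so a user who ignores the banner grants nothing and background tooling stays blocked.

```javascript
// Hypothetical mapping of background tools to consent categories.
// Names are illustrative, not from any specific consent platform.
const TOOLS = {
  analytics: { category: "analytics" },
  experimentation: { category: "analytics" },
  adRetargeting: { category: "marketing" },
};

// Returns the tools that may initialise given the user's consent.
// `consent` maps category name -> boolean; an undefined category is
// treated as denied, i.e. strictly opt-in.
function allowedTools(consent) {
  return Object.entries(TOOLS)
    .filter(([, tool]) => consent[tool.category] === true)
    .map(([name]) => name);
}

// A user who ignores the banner grants nothing, so nothing loads:
console.log(allowedTools({})); // []
// Accepting analytics enables tracking and experimentation:
console.log(allowedTools({ analytics: true })); // ["analytics", "experimentation"]
```

Under this opt-in default, every ignored banner silently shrinks the tracked population, which is exactly why banner blindness hurts experimentation programmes.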
Which Pages Usually Need the Most Optimisation?
Product Detail Pages (PDPs) are often the most important for a fashion eCommerce website because they are the landing pages that usually get the highest amount of paid traffic (Google Shopping, for example).
These are also the webpages that require the most optimisation. They're often not well optimised or do not follow users' mental models, and therefore fail to perform as well as they should.
PDPs need to sell the product (and the seller) while remaining easy for customers to use, and getting everything just right for your audience can be tricky. This is why a lot of experimentation is often needed before you arrive at a great customer experience on a PDP.
Homepage and product listing pages (PLPs) also need a lot of optimisation on a website. This is because all organic user journeys usually include these pages.
What Page Elements Commonly Need Optimisation?
In our experience, the elements on a page that usually need to be improved are:
USP information design on the PDP
Homepage content display
Search functionality and design
USP bar design and transition elements
Filters on the PLP
Personalisation or a better experience for users landing on pages via Google Shopping
Sometimes, simple changes can bring in great results. For example, minor menu and navigation changes based on UX and behavioural psychology principles often improve organic customer journeys considerably and help drive conversions.
What Pages Are the Most Difficult to Get Wins On?
The homepage and the checkout flow.
There are usually too many elements on a homepage, and it’s difficult to determine what change can add the most value. This makes it one of the harder pages to get a win on.
Checkout flows also need extensive experimentation to get right, and optimising them successfully is critical because problems in this area have the highest impact on conversions. When we do get wins in the checkout, they are therefore worth a lot more to our clients.
What Ideas and Experiment Types Have Worked Well for Some Clients but Failed for Others?
This is harder to generalise, because each client has different needs and solutions. Some ideas do work across clients whose user bases and expectations are similar, but even then an idea may work on one device type and not on another.
However, certain experiments are more likely to succeed because the behavioural psychology behind them is strong. Here are some examples:
Use of star rating as quick visual reference: Using star ratings instead of detailed reviews can work well on mobile, where users tend to want to complete an action faster than on desktop. Displaying star ratings more prominently can inspire trust and help reduce anxiety. However, this principle does not always apply on desktop, where detailed reviews may also be required.
Product descriptions: These are important for users who are knowledgeable and know what matters to them in a product. For new users or novices, however, less description and more visual elements can work better.
Adding visual filters on the PLP: Visual filters usually work because they make finding a product category easier for users. However, when we tried this for two leading fashion clients, the experiment failed.
Adding recommended products carousel on Google Shopping landing pages: This experiment has been successful for so many clients that we consider it an easy one to try for clients who are losing money on Google Shopping traffic. However, when we tried this for a men’s fashionwear client, the experiment failed.
Sitewide bars highlighting USP and trust messaging: These have worked well for brands that don’t have an established online presence but haven’t made much of a difference for brands that are already well known.
Triggers under the CTA on PDP: These have worked very well for some clients but delivered neutral to no results for others.
How Has Experimenting During Sale Periods Worked?
We have to be cautious when experimenting during sale periods for fashion clients. We've seen experiments both affected and unaffected by sales: sometimes conversion rates increase dramatically, while at other times there's no difference.
The real difference happens when:
You are testing sale-specific information like sale banners and countdown timers.
There is higher-than-usual visitor volume, especially from visitors who are only on the site because a sale is running. This affects metrics such as site usage, transactions, and average order value. If the experiment was not planned with these unusual figures in mind, its performance and results suffer.
When conversion rates increase dramatically, they can also push other micro-conversions up or down.
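The distortion from sale-only visitors can be illustrated with a toy calculation. All figures and field names below are invented: the point is that blending sale-period traffic into an experiment's results can dilute a genuine effect on regular traffic.

```javascript
// Toy experiment log; segments, variants, and figures are invented.
const records = [
  { segment: "regular", variant: "A", visitors: 1000, orders: 40 },
  { segment: "regular", variant: "B", visitors: 1000, orders: 52 },
  { segment: "sale",    variant: "A", visitors: 4000, orders: 400 },
  { segment: "sale",    variant: "B", visitors: 4000, orders: 396 },
];

// Conversion rate for one variant, optionally restricted to a segment.
function convRate(variant, segment) {
  const rows = records.filter(
    (r) => r.variant === variant && (!segment || r.segment === segment)
  );
  const visitors = rows.reduce((sum, r) => sum + r.visitors, 0);
  const orders = rows.reduce((sum, r) => sum + r.orders, 0);
  return orders / visitors;
}

// Blended: A = 8.8% vs B = 8.96%, a small, noisy-looking difference.
console.log(convRate("A"), convRate("B"));
// Regular traffic only: A = 4.0% vs B = 5.2%, a clear win for B that
// the sale-period volume dilutes in the blended figures.
console.log(convRate("A", "regular"), convRate("B", "regular"));
```

Segmenting results this way is one reason to decide before a sale starts whether sale-period traffic belongs in an experiment's analysis at all.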