SEO

Robots.txt

Robots.txt is used to tell search crawlers which URLs, or parts of URLs, they should ignore and not index. It is important that search crawlers still get access to static content, for example style sheets, JavaScript, and images, so that they can detect that your website is mobile friendly.
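
A minimal sketch of such a robots.txt, using hypothetical paths; the Disallow, Allow, and Sitemap directives are standard, but the actual paths depend on your site structure:

User-agent: *
# Keep purely transactional pages out of the index (hypothetical paths)
Disallow: /cart/
Disallow: /checkout/
# Do not add Disallow rules for style sheets, JavaScript, or images,
# so crawlers can render pages and detect that the site is mobile friendly
Sitemap: https://example.com/sitemap.xml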

SEO optimized product lists

When the product listings with their sorting and filters were built, a lot of investigation went into following the 'unwritten' rules that give a high hit rate with few indexed pages.

If you have a product listing that the visitor can sort, all of the products are displayed for each sort order. External search engines will treat each sorted variant of the page as a unique page and split the search score between them in their search results.

To avoid this, and to expose each product listing to external search engines as a single page, the canonical link is used to declare that the sorted result is the same as the result without sorting.
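
As a sketch, using hypothetical URLs, a sorted variant of a listing page could include a canonical link pointing back to the unsorted listing:

<!-- Rendered on a sorted variant, e.g. https://example.com/clothes/jeans?sort=price -->
<link rel="canonical" href="https://example.com/clothes/jeans" />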

If you have product listings that the visitor can filter, probably not every combination of the filters is interesting for search engines. If you let search engines crawl every filter combination together with every sort order, you get millions of pages to index, which splits the search score between all of them.
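
As a purely hypothetical illustration of the scale: a single category with six filter facets, each offering ten values, already allows 11^6 ≈ 1.8 million filter combinations, since each facet is either unused or set to one of its ten values; combined with four sort orders that is roughly seven million crawlable URLs for that one category.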

The Accelerator is pre-configured to only index one category with at most one filter applied. If multiple filters should be exposed to search crawlers, the project needs to reconfigure the built-in rules.

How we have made these restrictions

To ensure that not too many filter combinations are indexed, the Accelerator uses the 'robots' meta tag to instruct search crawlers what they should index, and the 'canonical' link to point crawlers to the original page when the current page contains the same content as another page. To keep search crawlers from following links to pages that should not be indexed, the link attribute 'rel' is used on those links. This also decreases the time it takes for a search crawler to index the website.
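
A minimal sketch of the markup this describes, with hypothetical URLs and filter names; the exact output is produced by the page templates and may differ:

<!-- Filter combination that should not be indexed itself, while its product links may still be followed -->
<meta name="robots" content="noindex, follow" />

<!-- Link to a filter combination that crawlers should not follow -->
<a rel="nofollow" href="/clothes/jeans?color=brown&amp;size=small">Brown, small</a>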

Configure

In the navigation settings, found as a page in each website, the administrator can decide which of the filters should be indexed. A good rule is to think about what visitors search for on external search engines, for example "brown jeans" or "green shoes"; they will probably not search for a combination of multiple attributes such as "brown and small and jeans".

Page types to index

For all the regular page types you will find a setting on the SEO tab in back office that instructs what the search crawlers should do with the content. For most pages it should be set so that crawlers can both index the page and follow the links found on it. The search result page type is one page that should be neither indexed nor have its links followed. This is because you as a merchant cannot guarantee that the current products will still be on the same page of the search result when the next visitor arrives from the search engine. If the expected product is not on the page the visitor lands on, they will probably return to the external search result and try the next hit in the list.
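
Configured this way, the rendered search result page would carry a robots meta tag along these lines (a sketch; the exact markup comes from the page template):

<meta name="robots" content="noindex, nofollow" />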

 
