

Question: Why is the Google Analytics report different from the Commerce Vision statistics report?

A common question our customers ask is why the site traffic statistics in their Google Analytics reports vary from those in their Commerce Vision statistics reports.

To answer this question, it helps to understand how both the Customer Self Service eCommerce Platform and the Google Analytics engines work, and, just as importantly, how the Internet indexes and searches websites.

How does Google Analytics work?


Google Analytics works by including a block of JavaScript code in the pages of your website. When a webpage loads in a browser, the browser executes that JavaScript.

What is JavaScript?

JavaScript is a scripting language used to create and control dynamic website content: anything that moves, refreshes, or otherwise changes on your screen without requiring you to manually reload the page. When a user views a page on your site, the JavaScript code runs the Google Analytics tracking operation and records the visit. It also tracks the user if they go to a product view page or the checkout.

How do site traffic statistics on the Commerce Vision Customer Self Service eCommerce Platform work?

Important Note

Because your website configuration and reporting needs might differ from a standard setup, it's a good idea to understand the general tracking process to ensure that your reports deliver data as you expect. This way, you can decide how to configure Analytics tracking to best suit your website.
Commerce Vision tracks all user interactions on your website. Human users are either Guest Users (users who are not logged in or have not created an account, usually browsing in the Public Role) or Authenticated Users. User interactions with the website include, but are not limited to, human users browsing from an internet browser, crawlers or bots interacting with the website, and any automated script that calls commands from the website, such as get pricing or get product information. Every page rendered, every search executed, and all data service calls are logged in the User Session table for the site.


IMPORTANT: The Statistics page of the application is based on the User Session data to formulate Page and User statistics. This page shows all user interactions, regardless of the user or the method used to access the site.
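As an illustration, here is a minimal Python sketch of how sessions in a session log might be separated into human and bot traffic. The session rows, field names, and bot-token list are assumptions made for the example; they are not the actual Commerce Vision schema or detection logic.

```python
# Illustrative bot heuristic: flag a session as a bot if its
# User-Agent contains a well-known crawler token.
BOT_TOKENS = ("bot", "crawler", "spider", "slurp")

def looks_like_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string looks like a crawler."""
    ua = user_agent.lower()
    return any(token in ua for token in BOT_TOKENS)

# Hypothetical session rows, standing in for the User Session table.
sessions = [
    {"user": "guest",  "agent": "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"},
    {"user": "guest",  "agent": "Mozilla/5.0 (compatible; Googlebot/2.1)"},
    {"user": "jsmith", "agent": "Mozilla/5.0 (Macintosh) Safari/605.1"},
    {"user": "guest",  "agent": "Bingbot/2.0 (+http://www.bing.com/bingbot.htm)"},
]

all_sessions = len(sessions)  # what a log-based Statistics page counts
human_sessions = sum(not looks_like_bot(s["agent"]) for s in sessions)
print(all_sessions, human_sessions)
```

In this toy data, a log-based report counts all four sessions, while a JavaScript-based tracker would only ever see the two human browsers.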



What is a Crawler Bot?

A web crawler or bot is like someone who goes through all the books in a disorganised library and puts together a card catalogue, so that anyone who visits the library can quickly and easily find the information they need. To help categorise and sort the library's books by topic, the organiser reads the title, summary, and some of the internal text of each book to figure out what it is about.

In the same way, a web crawler, spider, or search engine bot downloads and indexes content from all over the Internet. Such a bot aims to learn what (almost) every web page on the Web is about, so it can retrieve the information when needed. It's called a "web crawler" because crawling is the technical term for automatically accessing a website and obtaining data via a software program.


Loading pages by bots

IMPORTANT: A bot will not always load a web page in the same way a browser loads and executes the page. In most cases, so they can scan an entire site more quickly, bots do not execute the JavaScript on a web page. For Google Analytics, this means the tracking code never fires, so the visit is not captured in the Google Analytics portal.
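To see why this matters for the numbers, here is a small Python model of the difference. It is purely illustrative, not real Google Analytics or Commerce Vision code: a server-side log records every request, while an analytics counter only increments when the page's JavaScript actually runs.

```python
server_log = []      # what a server-side session log would record
analytics_hits = []  # what a JavaScript-based tracker would record

def load_page(url: str, executes_javascript: bool) -> None:
    """Model one page load: every request is logged server-side,
    but the tracking script only fires when JavaScript executes."""
    server_log.append(url)
    if executes_javascript:
        analytics_hits.append(url)

load_page("/", executes_javascript=True)              # human in a browser
load_page("/", executes_javascript=False)             # crawler fetching raw HTML
load_page("/product/42", executes_javascript=False)   # price scraper

print(len(server_log), len(analytics_hits))
```

Three page loads appear in the server-side log, but only the one browser visit reaches the analytics counter.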



Good Bots

A "good" bot is any bot that performs useful or helpful tasks that aren't detrimental to a user's experience on the Internet. Because good bots can share similar characteristics with malicious bots, the challenge when putting together a bot management strategy is to make sure good bots aren’t blocked.

There are many kinds of good bots, each designed for different tasks. Here are some examples:

    • Search engine bots: Also known as web crawlers or spiders. These bots "crawl," or review, content on almost every website on the Internet, and then index that content so that it can show up in search engine results for relevant user searches. They are operated by search engines like Google, Bing, or Yandex.
    • Copyright bots: These bots crawl platforms or websites looking for content that may violate copyright law. They can be operated by any person or company that owns copyrighted material. Copyright bots can look for duplicated text, music, images, or even videos.
    • Site monitoring bots: These bots monitor website metrics, e.g., monitoring for backlinks or system outages, and can alert users of major changes or downtime. For instance, Cloudflare operates a crawler bot called Always Online that tells the Cloudflare network to serve a cached version of a webpage if the origin server is down.
    • Commercial bots: These bots are operated by commercial companies that crawl the Internet for information. They may be operated by market research companies monitoring news reports or customer reviews, ad networks optimizing the places where they display ads, or SEO agencies that crawl clients' websites.
    • Feed bots: These bots crawl the Internet looking for newsworthy content to add to a platform's news feed. Content aggregator sites or social media networks may operate these bots.
    • Chatbots: These bots imitate human conversation by answering users with pre-programmed responses. Some chatbots are complex enough to carry on lengthy conversations.
    • Personal assistant bots: Examples: Siri or Alexa. Although these programs are much more advanced than the typical bot, they are bots nonetheless since they are computer programs that browse the web for data.


Bad Bots

Some bots are bad for business. Here are some examples:

    • Website scraper bots: These bots generally send a series of HTTP GET requests and then copy and save all the information the web server sends back, continuing down through the hierarchy of a website until they have copied all the content. More sophisticated scraper bots can use JavaScript to, for instance, fill out every form on a website and download any gated content. "Browser automation" programs and APIs allow automated bot interactions with websites and APIs, pretending to be a traditional web browser in an attempt to trick the website's server into thinking a human user is accessing the content. True, an individual could manually copy and paste an entire website instead, but bots can crawl and download all of a website's content in a matter of seconds, even for large sites such as e-commerce sites with hundreds or thousands of individual product pages.

    • Price scraping bots: These bots download all the pricing information from a competitor's website so that a business can adjust its own pricing accordingly.
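A minimal sketch of the scraping pattern described above, assuming a toy site map in place of real HTTP responses. A real scraper would issue GET requests and parse the links out of the returned HTML; here a dictionary stands in for both.

```python
from collections import deque

# Toy site hierarchy: each "page" lists the links it contains.
PAGES = {
    "/": ["/products", "/about"],
    "/products": ["/products/1", "/products/2"],
    "/products/1": [],
    "/products/2": [],
    "/about": [],
}

def scrape(start: str) -> list[str]:
    """Breadth-first walk of the site hierarchy, 'copying' each page once."""
    seen, queue, copied = {start}, deque([start]), []
    while queue:
        page = queue.popleft()
        copied.append(page)  # a real bot would save the response body here
        for link in PAGES.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return copied

print(scrape("/"))  # visits every page reachable from the home page
```

Every request in this walk would appear in a server-side session log, even though no human ever viewed a page.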

        

How do web crawlers affect Search Engine Optimisation (SEO)?


SEO stands for search engine optimisation: the discipline of readying content for search indexing so that a website shows up higher in search engine results. If search engine bots do not crawl a website, it cannot be indexed and will not show up in search results. For this reason, a website owner who wants organic traffic (traffic that does not come from paid ads) should not block web crawler bots.
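Blocking is usually done selectively rather than wholesale. The standard mechanism is a robots.txt file, which well-behaved crawlers honour, and which Python's standard library can evaluate. The rules and URLs below are hypothetical examples.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: allow crawling generally, but keep
# bots out of the checkout area.
rules = """\
User-agent: *
Disallow: /checkout/
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Product pages stay crawlable (and therefore indexable)...
print(parser.can_fetch("Googlebot", "https://example.com/products/1"))
# ...while the checkout is off-limits to compliant crawlers.
print(parser.can_fetch("Googlebot", "https://example.com/checkout/pay"))
```

Note that robots.txt only steers compliant crawlers; malicious scrapers can simply ignore it.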


Summary


Commerce Vision's statistics and those of Google Analytics typically differ for site traffic, especially on the home page.

    • Commerce Vision tracks all of the website traffic, including bots and crawlers, whereas Google Analytics only tracks user sessions.
    • For orders and checkout, the statistics should match, since crawlers and bots do not check out or place orders.
    • For product views and pricing, the statistics will vary, because crawlers can also make the calls that retrieve this information.
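These three points can be illustrated with a toy event log. The metric names and the pre-computed bot flags are assumptions made for the example, not actual report fields.

```python
from collections import Counter

# (metric, is_bot) pairs standing in for logged interactions.
events = [
    ("page_view", False), ("page_view", True), ("page_view", True),
    ("product_view", False), ("product_view", True),
    ("order", False),  # only humans place orders
]

cv = Counter(metric for metric, _bot in events)            # counts everything
ga = Counter(metric for metric, bot in events if not bot)  # human sessions only

print(cv["page_view"], ga["page_view"])  # traffic differs
print(cv["order"], ga["order"])          # orders match
```

Page views and product views diverge between the two counters, while order counts agree, which is exactly the pattern described in the summary above.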