Welcome to the Q&A Forum
Browse the forum for helpful insights and fresh discussions about all things SEO.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Point the canonicals to the pages you know will have the highest organic traffic.
I would rewrite the pages to be different if you want the best results.
All the best,
Tom
I agree that they are much better than they were. I'm still using Incapsula & Fastly.
I use CloudFlare Enterprise for China.
Glad to hear you’re doing well!
Tom
Bryan,
I'm glad that you found what you were looking for.
I must have missed the part about it being 100% Instapage. When you said CMS, I thought you meant something else alongside Instapage; I think of it as landing pages, not a CMS.
I want to help, so to answer what you asked about Google Search Console and how often you need to request that Google index your site.
First, make sure:
You should have 5 properties in Google Search Console:
your domain, http://www. , http:// , https://www. & https://
You should not have to request indexing once your pages are in Google's index. There is no timeline that requires you to keep requesting it.
Use Search Console's index coverage reports to see if you need to make a request, and look for notifications.
Times you should request a Google crawl: when adding new unlinked pages, when making big changes to your site, when adding pages without an XML sitemap, or when fixing problems / testing.
I want to help, so as you said, you're going to be using Shopify.
Just before you go live on Shopify in the future, you should make an XML sitemap of the Instapage site.
You can do it for free using https://www.screamingfrog.co.uk/seo-spider/
Crawl the site you're running now, name the sitemap /sitemap_ip.xml or /sitemap2.xml, and upload it to Shopify.
Make sure it's not the same name as the Shopify XML sitemap (/sitemap.xml) so the two can coexist.
Submit the /sitemap_ip.xml to Search Console, then add the Shopify /sitemap.xml.
You can run multiple XML sitemaps as long as they are not overlapping.
Just remember: never add non-200 pages (404s, 300s/redirects), nofollow, or noindex URLs to an XML sitemap. Screaming Frog will ask whether you want to include them when you're making the sitemap.
Shopify will make its own XML sitemaps, and having the current site as a second XML sitemap will help make sure your changes to the site will not hurt the Instapage part of the Shopify site.
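To make the "no non-200 URLs in a sitemap" rule concrete, here is a minimal Python sketch that builds a small sitemap while dropping anything that did not return a 200. The URLs and status codes are made-up stand-ins for a real crawl export (e.g. from Screaming Frog).

```python
# Sketch: build a small XML sitemap that skips non-200 URLs.
# The url/status pairs are hypothetical examples, not a real crawl.
from xml.sax.saxutils import escape

def build_sitemap(url_statuses):
    """Return sitemap XML containing only URLs that returned HTTP 200."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url, status in url_statuses:
        if status != 200:  # drop 404s, 3xx redirects, etc.
            continue
        lines.append(f'  <url><loc>{escape(url)}</loc></url>')
    lines.append('</urlset>')
    return '\n'.join(lines)

crawl = [('https://example.com/', 200),
         ('https://example.com/old-page', 404),
         ('https://example.com/moved', 301)]
print(build_sitemap(crawl))  # only the 200 URL makes it in
```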
https://support.google.com/webmasters/answer/34592?hl=en
Now, adding an XML sitemap is a smart move.
I hope that was of help; I'm sorry about missing what you meant.
respectfully,
Tom
Hi,
So I see the problem now:
https://www.nomader.com/robots.txt
This URL does not have a robots.txt file. Upload one to the root of your server, or to the specific place your developer and/or CMS / hosting company recommends. I could not figure out what type of CMS you're using, if you're using one.
Make a robots.txt file using one of these:
http://tools.seobook.com/robots-txt/generator/
https://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/exportrobots.php
https://moz.com/learn/seo/robotstxt
It will look like this below.
User-Agent: *
Disallow:
Sitemap: https://www.nomader.com/sitemap.xml
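If you want to sanity-check a robots.txt like that before uploading it, Python's standard library can parse the rules. This is just a local sketch using the example rules above, not a live fetch from nomader.com.

```python
# Sketch: parse the example robots.txt rules locally and confirm that
# an empty "Disallow:" blocks nothing for any crawler.
from urllib.robotparser import RobotFileParser

rules = [
    "User-Agent: *",
    "Disallow:",
    "Sitemap: https://www.nomader.com/sitemap.xml",
]

parser = RobotFileParser()
parser.parse(rules)

# An empty Disallow means every bot may fetch every page.
print(parser.can_fetch("Googlebot", "https://www.nomader.com/any-page"))  # True
```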
it looks like you’re using Java for your website?
https://builtwith.com/detailed/nomader.com
I am guessing you’re not using a subdomain to host the Landing Pages?
If you are using a subdomain, you would have to create a robots.txt file for that, but from everything I can see you're using your regular domain, so you would simply create these files. (I'm in a car on a cell phone, so I only did a quick check to see if you have an XML sitemap file, but I do think you do.)
https://www.nomader.com/sitemap.xml
You can purchase a tool called Screaming Frog SEO Spider; if your site is over 500 pages you will need to pay for it (it's approximately $200), but you will be able to create a wonderful sitemap. You can also create an XML sitemap by googling xml sitemap generators. However, I would recommend Screaming Frog because you can separate the images and it's a very good tool to have.
Because you will need to generate a new sitemap whenever you update your site or add landing pages, it will be done using Screaming Frog and uploaded to the same place on the server, unless you can create a dynamic sitemap using whatever infrastructure your website is running on.
Here are the directions to add your sitemap in Google Search Console / Google Webmaster Tools:
https://support.google.com/webmasters/answer/34592?hl=en
If you need any help with any of this, please do not hesitate to ask; I am more than happy to help. You can also generate a sitemap in the old version of Google Webmaster Tools / Google Search Console.
Hope this helps,
Tom
You don't need to worry about Instapage & robots.txt; your site has the robots.txt & Instapage is not set to noindex.
So yes, use Google Search Console to fetch / index the pages; it's very easy if you read the help information I posted below.
https://help.instapage.com/hc/en-us#
hope that helps,
Tom
If you cannot turn off "Meta Noindex" you cannot fix it with robots.txt. I suggest you contact the developer of the Instapage landing pages app; if it's locked to noindex as you said, that is the only way of countering a meta noindex pre-coded by the company.
I will look into this for you. I bet that you can change it, but not via robots.txt. I will update you in the morning.
All the best,
Tom
That would normally be the case but not tonight.
LOL, I am picking up a lot of the UK Q&A. I will be at BrightonSEO and SearchLove London; if any of you guys will be in the area, I'd love to grab a pint!
sincerely,
Thomas
Sorry Nigel
I was not trying to make this more complicated, just trying to make sure that we were all on the same page.
FYI, if you need a method of adding the rel canonical to your website quickly, you can use Google Tag Manager, or if you want to add it to the header:
https://support.stackpath.com/hc/en-us/articles/360001445283-EdgeRules-Adding-a-Canonical-Header
Use a self-referencing canonical
https://blog.seoprofiler.com/google-recommend-self-referencing-canonical-tags/
Please let me know if you want me to remove the image below?
you can use this one if needed http://bseo.io/c1vMSv
Hey man, I understand it's a big deal.
Could you do me a huge favor and run your site through Screaming Frog SEO Spider, then send me a couple of pages with the domains whited out? That way I can tell you 100% what to do in this situation, because right now I am basing this on what you have told me, and honestly I would like to look at what a tool can show me; that will tell me what I need to do.
Or you can tell me whether the mobile version of the site hits Google's index, yes or no?
respectfully,
Tom
The single self-referencing URL will work.
Which URLs are you using the "alternate" tag on?
You said
”1. We have multiple brand sites, that have a similar setup. They all have mobile and desktop versions of the sites running on the same URL, both of which show the same content.
2. The server determines whether if you're on a desktop or mobile devices using the header information, and points the user to the site relevant files for the given device.”
That's dynamic serving on the same URL.
Dynamic serving is a setup where the server responds with different HTML (and CSS) on the same URL depending on which user agent requests the page (mobile, tablet, or desktop).
That would NOT give you the mobile m.example.com & www.example.com different URLs.
**But if you do have different m.example.com & www.example.com URLs, you should use this code or XML sitemaps:**
for different URLs use this:
Annotations in the HTML
On the desktop page (http://www.example.com/page-1), add the following annotation:
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1">
On the mobile page (http://m.example.com/page-1), the required annotation should be:
<link rel="canonical" href="http://www.example.com/page-1">
This rel="canonical" tag on the mobile URL pointing to the desktop page is required.
Or
Annotations in sitemaps
We support including the rel="alternate" annotation for the desktop pages in sitemaps like this:
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/page-1/</loc>
    <xhtml:link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1"/>
  </url>
</urlset>
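To verify that a desktop/mobile pair actually carries the matching annotations described above, you can extract the link tags with Python's built-in html.parser. The HTML strings below are illustrative snippets, not fetched pages.

```python
# Sketch: collect <link rel=...> tags from a desktop and a mobile page
# and show the alternate/canonical pair. Example HTML only.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = {}  # maps rel value -> href value

    def handle_starttag(self, tag, attrs):
        if tag == 'link':
            a = dict(attrs)
            if 'rel' in a and 'href' in a:
                self.links[a['rel']] = a['href']

desktop_html = ('<link rel="alternate" media="only screen and (max-width: 640px)" '
                'href="http://m.example.com/page-1">')
mobile_html = '<link rel="canonical" href="http://www.example.com/page-1">'

d, m = LinkCollector(), LinkCollector()
d.feed(desktop_html)
m.feed(mobile_html)
print(d.links['alternate'])  # mobile URL the desktop page points to
print(m.links['canonical'])  # desktop URL the mobile page points back to
```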
You should have the same URL on mobile and desktop.
You should have the same rel canonical tag on your URLs, unless (and this is a big unless) you're talking about using Google AMP.
If the URL you want to be indexed is the same URL, point everything to that URL, if that makes it easier to understand.
respectfully,
Tom
Unless you are using AMP?
Then you would add
In order to solve this problem, we add information about the AMP page to the non-AMP page and vice versa, in the form of link tags in the head.
Add the following to the non-AMP page:
<link rel="amphtml" href="https://www.example.com/url/to/amp/document.html">
And this to the AMP page:
<link rel="canonical" href="https://www.example.com/url/to/full/document.html">
are you using AMP pages?
https://support.google.com/webmasters/answer/139066?hl=en
https://www.ampproject.org/docs/fundamentals/discovery
I hope that helps you if not please let me know.
Respectfully,
Tom
Cool, that's what I thought when I heard your description. I just wanted to be very thorough, because sometimes you get very little information, and I appreciate you letting me know.
With dynamic serving, the URLs are identical to each other, so you should have a self-referencing canonical tag. Because the URL does not change, the rel canonical tag just decides what should be in the index, and it is the same URL.
Your rel canonical should be something like the example below.
Example URL: https://www.example.com/example-url/
Because the end URL is the same URL that you want indexed in Google, you want to be certain that you have a self-referencing canonical, to prevent query strings and other things like that from getting indexed. You do not need to point a URL to an identical URL; you just need a self-referencing canonical, if that makes sense.
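A quick way to see what a self-referencing canonical protects against: stripping the query string and fragment from a URL yields the one clean URL you want indexed. A minimal Python sketch, using a made-up tracking parameter:

```python
# Sketch: derive the clean self-referencing canonical for a URL by
# dropping the query string and fragment. Example URLs only.
from urllib.parse import urlsplit, urlunsplit

def canonical_for(url):
    """Strip query string and fragment so tracking parameters all
    collapse to the single URL you want in the index."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, '', ''))

print(canonical_for('https://www.example.com/example-url/?utm_source=x#top'))
# -> https://www.example.com/example-url/
```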
See: https://yoast.com/rel-canonical/
I hope that is of help,
Tom
You guys are fast! I was going to answer this and had to do some other things, but let me weigh in on a couple of things.
As you said:
"We are in a predicament of how to properly use the canonical and alternate rel tags. Currently we have a canonical on mobile and alternate on desktop, both of which have the same URL because both mobile and desktop use the same as explained in the first paragraph."
So what you're saying is that you have a dynamic site, so you don't need to add rel="alternate" media annotations to the site.
https://developers.google.com/search/mobile-sites/mobile-seo/dynamic-serving
As it is not immediately apparent in this setup that the site alters the HTML for mobile user agents (the mobile content is "hidden" when crawled with a desktop user agent), it's recommended that the server send a hint to request that Googlebot for smartphones also crawl the page, and thus discover the mobile content. This hint is implemented using the Vary HTTP header.
Annotations in the HTML
On the desktop page (http://www.example.com/page-1), add the following annotation:
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1">
On the mobile page (http://m.example.com/page-1), the required annotation should be:
<link rel="canonical" href="http://www.example.com/page-1">
This rel="canonical" tag on the mobile URL pointing to the desktop page is required.
We support including the rel="alternate" annotation for the desktop pages in sitemaps like this:
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/page-1/</loc>
    <xhtml:link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page-1"/>
  </url>
</urlset>
The required rel="canonical" tag on the mobile URL should still be added to the mobile page's HTML.
**To be sure:**
Are you willing to share your domain with us? Or one domain?
We're talking about multiple websites that all have an identical site structure, or at least identical mobile and desktop site structures?
Your server is making the change for you?
Would you be kind enough to install this plug-in on Chrome so you can show a couple of examples of the canonical and the URL?
In addition, would you be kind enough to run your site through the two tools here? (100% free and very easy to use)
If you would not mind doing this and sending screenshots, it would mean a lot and help in getting your canonicals straightened out.
Screenshots: https://snag.gy/ then upload to http://imgur.com/
Everything is on the same server, I'm assuming?
Of the three below, how would you categorize your site?
Respectfully,
Tom
I'm happy that you're keeping up with Google; it's really important that you try & stay as close as you can to Google's changes. Your accuracy has become an industry standard, and I applaud "Moz Rank" staying the industry standard.
I would use a separate rel="canonical" tag (URL only) for the canonical & keep the microdata link as just a link.
I agree it is probably just the tool, but from what I can see, mixing microdata & the canonical is not the best way to go.
<link rel="canonical" href="http://example.com/">
If you want a free way to test up to 500 pages: https://screamingfrog.co.uk/seo-spider/ . Like Paul said, any tool can be wrong, but it looks like you should not mix the canonical with something the end users can click on.
Tom
If you need anyone to back up what Roman said he's exactly right.
You need to add the canonical to your site so it is self-referencing. I would not point it at any URLs that have parameters/query strings; point it only at the URL that you want to be in Google's index.
In your example you show the same page twice. I added https:// just to make it a full URL for the example; please do that when you add the canonicals.
With the rel canonical, you're telling Google that your parameter is not something you want to rank for
You want https://domain.com/page.html to rank
** not**
**Page URL: https://domain.com/page.html?xxxx **
So as Roman said, you would add a rel canonical like the one below. Please keep in mind that when you add these, you must use HTTP or HTTPS depending on what your site is set up for, as well as www. or non-www., & always use absolute URLs.
For example, search crawlers might be able to reach your homepage in all of the following ways:
Cite: https://moz.com/learn/seo/canonicalization
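As a sketch of that canonicalization, here is how the usual homepage variants (scheme, www, and trailing-slash differences) collapse onto one preferred URL in Python. The preferred https + www host below is an assumption for the example.

```python
# Sketch: map common homepage variants onto one preferred canonical host.
# PREFERRED is an assumed choice; pick whichever form your site actually uses.
from urllib.parse import urlsplit

PREFERRED = "https://www.example.com"

def normalize(url):
    """Return the preferred-host version of any homepage variant."""
    parts = urlsplit(url)
    path = parts.path or "/"  # treat a missing path as the root
    return PREFERRED + path

variants = [
    "http://example.com",
    "http://example.com/",
    "https://www.example.com",
    "https://www.example.com/",
]
print({normalize(u) for u in variants})  # all collapse to a single URL
```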
More references
I hope that helps,
Tom
WP Engine runs on a base server using Apache with an Nginx proxy, so you can use WP Engine's built-in 301 redirect system, or you can simply add the redirect to the .htaccess file. If you would like a tool to generate the rules, I recommend this one: https://www.aleydasolis.com/htaccess-redirects-generator/non-slash-vs-slash-urls/ Another alternative is to ask WP Engine to make the change for you.
**Slash to Non-Slash URLs**
**Apache**: to redirect https://example.com/page/ to **https://example.com/page**, just copy this to your .htaccess:
```
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [L,R=301]
</IfModule>
```
**Non-Slash to Slash URLs**
**Apache**: to redirect https://example.com/page to **https://example.com/page/**:
```
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [L,R=301]
</IfModule>
```
**Using Nginx to remove the trailing slash** (https://example.com/page/ to https://example.com/page)
As you can see, there is only one tiny difference between those two URLs: the trailing slash at the end. To avoid duplicate content, if you are using Nginx you can **remove the trailing slash from Nginx** URLs.
Place this inside your virtual host file in the server {} block configuration:
> ```
> rewrite ^/(.*)/$ /$1 permanent;
> ```
Full example:
> ```
> server {
> listen 80;
> server_name www.mysite.com;
> rewrite ^/(.*)/$ /$1 permanent;
> }
> ```
All done, now Nginx will remove all those trailing slashes.
**Using Nginx to add the trailing slash** (https://example.com/page to https://example.com/page/)
Add a trailing slash by placing this inside your virtual host file in the server {} block configuration:
> ```
> rewrite ^(.*[^/])$ $1/ permanent;
> ```
Full example:
> ```
> server {
> listen 80;
> server_name www.mysite.com;
> rewrite ^(.*[^/])$ $1/ permanent;
> }
> ```
From now on, Nginx should add the trailing slash automatically to every URL.
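The two Nginx rewrite patterns above use plain regex, so you can exercise them with Python's re module to see exactly what each one does to a path (for these simple patterns the syntax is the same):

```python
# Sketch: replay the two Nginx rewrite regexes on example paths.
import re

def remove_trailing_slash(path):
    # mirrors: rewrite ^/(.*)/$ /$1 permanent;
    return re.sub(r'^/(.*)/$', r'/\1', path)

def add_trailing_slash(path):
    # mirrors: rewrite ^(.*[^/])$ $1/ permanent;
    return re.sub(r'^(.*[^/])$', r'\1/', path)

print(remove_trailing_slash('/page/'))  # /page
print(add_trailing_slash('/page'))      # /page/
print(add_trailing_slash('/page/'))     # /page/ (already slashed, untouched)
```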
* https://www.scalescale.com/tips/nginx/add-trailing-slash-nginx/
* https://www.scalescale.com/tips/nginx/nginx-remove-trailing-slash/
I hope this helps,
Tom
These platform domains are intended for development use and cannot be used for production. A custom or CMS-standard robots.txt will only work on Live environments with a custom domain. Adding sub-domains (i.e., dev.example.com, test.example.com) for DEV or TEST will remove only the X-Robots-Tag: noindex header, but will still serve the platform robots.txt.
To support pre-launch SEO testing, we allow the following bots access to platform domains:
If you’re testing links or SEO with other tools, you may request the addition of the tool to our robots.txt
Pantheon's documentation on robots.txt: http://pantheon.io/docs/articles/sites/code/bots-and-indexing/
The platform robots.txt looks like this:
```
User-agent: *
Disallow: /

User-agent: RavenCrawler
User-agent: rogerbot
User-agent: dotbot
User-agent: SemrushBot
User-agent: SemrushBot-SA
Allow: /
```