I'm happy to be of help!
Posts made by BlueprintMarketing
-
RE: How will canonicalizing an https page affect the SERP-ranked http version of that page?
"I have the option of choosing either www.example.com or example.com, but no option for https://www.example.com or https://example.com."
Go into the HTTPS properties and pick the same format your site already uses: retain the www. if you had it, drop it if you did not.
"I have Search Console verified for both HTTP and https pages"
- On the Search Console Home page, click the site you want.
- Click the gear icon, and then click Site Settings.
- In the Preferred domain section, select the option you want.
Nice! So you have four URL versions. Pick the same format you had before, but from #3 or #4 below.
- HTTP://
- HTTP://www.
- HTTPS:// if you did NOT have www use this
- HTTPS://www. if you had www use this
- **The to-do list:** https://support.google.com/webmasters/answer/6332964
- https://docs.google.com/spreadsheets/d/1XB26X_wFoBBlQEqecj7HB79hQ7DTLIPo97SS5irwsK8/edit?usp=sharing
**Next,**
make certain that you force https:// on your hosting environment or WAF/CDN.
**Check it using a redirect mapper:**
- https://varvy.com/tools/redirects/
- If you get lost and need to fix something
- https://online.marketing/guide/https/
- https://www.deepcrawl.com/blog/news/2017-seo-tips-move-to-https/
Add HSTS once everything is definitely working.
Make sure everything is working correctly before Google crawls it.
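If your server happens to be Apache with mod_headers enabled, a minimal sketch of the HSTS step could look like this (the max-age value and includeSubDomains are example settings to adjust for your setup):

```
<IfModule mod_headers.c>
  # Only send HSTS after every HTTP -> HTTPS redirect is confirmed working
  Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
</IfModule>
```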
all the best,
Tom
-
RE: How can i solve this redirect chain issue?
Everything is set except that your non-HTTPS URL redirects twice (either through a plugin or through PHP) on its way to your final HTTPS URL. All you have to do is find the extra redirect; everything else is redirecting properly. Does that make sense?
The non-www, non-HTTPS URL redirects twice before reaching the final HTTPS URL.
https://varvy.com/tools/redirects/
big photo: http://i.imgur.com/FYyLpVf.png
2 Redirects
http://yourdomain.ro
**301 redirect**
https://yourdomain.ro/
https://yourdomain.ro/
**301 redirect**
https://www.yourdomain.ro/
Final status code: 200
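If the site runs on Apache, a rough sketch of a single-hop rule could look like the block below; yourdomain.ro is just the placeholder domain from the mapper output, and the destination matches the URL that returned the 200 above:

```
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Anything that is not already https://www. goes to the final URL in one 301
  RewriteCond %{HTTPS} !=on [OR]
  RewriteCond %{HTTP_HOST} !^www\. [NC]
  RewriteRule ^(.*)$ https://www.yourdomain.ro/$1 [L,R=301]
</IfModule>
```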
Let me know if I can be of more help,
Tom
-
RE: Http to https:Risky?
You are definitely going to need HTTPS on your site; browsers are already making websites that do not have HTTPS look insecure, so I would change over ASAP.
If you perform the migration correctly, with the proper 301 redirects, and update Google Search Console, you will not suffer for more than a week at most. I have personally never seen a ranking drop across more than 20 sites, though I have heard of some; it is extremely unlikely to occur now.
A how-to checklist is the first link below; if you follow it, you will be okay.
- http://www.aleydasolis.com/en/search-engine-optimization/http-https-migration-checklist-google-docs/
- http://www.aleydasolis.com/htaccess-redirects-generator/https-vs-http/
- https://www.semrush.com/blog/https-just-a-google-ranking-signal/
Everyone here is ready to help if you have any questions.
I hope this is of help all the best,
Tom
-
RE: White H1 Tag Hurting SEO?
If the background of the site is white and you make the H1 white, you are doing something black hat in the eyes of Google, whether your intentions are good or not.
I'm not saying you're trying to do something wrong, but if the client does not want to use the H1 tag, what are they using for a title? You can still combine the two if you must; that is much better than making it essentially invisible text, which is what Google will think white text on a white background is.
You have to tell your client that the H1 tag is extremely important and that they need to introduce the page with the H1 tag displayed to the end user.
Respectfully,
Thomas
-
RE: How will canonicalizing an https page affect the SERP-ranked http version of that page?
"My question is about canonical tags in this context. Suppose I canonicalize the https version of a page which is already ranked on the SERP as http_. Will the link juice from the SERP-ranked_ http version of that page immediately flow to the now-canonical https version? Will the https version of the page immediately replace the http version on the SERP, with the same ranking?"
Yes, it will, as long as you set it in Google Webmaster Tools and create the proper 301 redirects; please look at this reply.
Okay, you have been creating duplicate content in Google's eyes. I would commit to HTTPS and point your self-referencing canonical, as well as your 301 redirects, to HTTPS.
You need to correct everything in Webmaster Tools / Google Search Console; these things are essential in order to maintain your traffic. Please look at my post here and make sure your canonical is self-referencing to https://.
- http://www.aleydasolis.com/htaccess-redirects-generator/https-vs-http/
- http://www.aleydasolis.com/en/search-engine-optimization/http-https-migration-checklist-google-docs/
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
Here is an updated article on using HTTPS; the browsers alone are forcing people to adopt it.
https://www.semrush.com/blog/https-just-a-google-ranking-signal/
I hope I have been of help let me know if I can clear anything up.
All the best,
Tom
-
RE: How to Evaluate Original Domain Authority vs. Recent 'HTTPS' Duplicate for Potential Domain Migration?
This will help too
http://www.aleydasolis.com/en/search-engine-optimization/http-https-migration-checklist-google-docs/
& use http://www.aleydasolis.com/htaccess-redirects-generator/https-vs-http/
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>
-
RE: How to Evaluate Original Domain Authority vs. Recent 'HTTPS' Duplicate for Potential Domain Migration?
As Romans stated, you will need to go into Search Console and add all four properties. Then pick which one you want to be your canonical, or chosen, URL.
- On the Search Console Home page, click the site you want.
- Click the gear icon, and then click Site Settings.
- In the Preferred domain section, select the option you want.
- HTTP://
- HTTPS://
**The to-do list:** https://support.google.com/webmasters/answer/6332964
**Make certain that you force https:// on your hosting environment or WAF/CDN.**
**Check it using a redirect mapper:**
- https://varvy.com/tools/redirects/
- If you get lost and need to fix something
- https://online.marketing/guide/https/
- https://www.deepcrawl.com/blog/news/2017-seo-tips-move-to-https/
Add HSTS once everything is definitely working.
Make sure everything is working correctly before Google crawls it.
all the best,
Tom
-
-
RE: Best site structure for us?
I am talking about the subfolder and anything past the forward slash (/).
-
RE: Best site structure for us?
Hi Andy,
No worries, man; sorry, I'm not trying to be confusing.
Well, I'm assuming you already have the main domain?
I would start with a site architecture that looks like a pyramid, with my top keywords as the navigation, then move down through categories, trying not to go deeper than three clicks from the homepage.
Like: http://i.imgur.com/Kew1e64.png
So let's say your domain name is example.com and I want to create a landing page for dishwashers, specifically ones made by Miele.
I would create the URL example.com/dishwashers/miele
https://www.distilled.net/blog/seo/why-you-should-map-out-your-sites-information-architecture/
WordPress:
https://yoast.com/site-structure-the-ultimate-guide/
https://www.slideshare.net/dohertyjf/id2013-optimizing-your-websites-architecture-for-seo
https://builtvisible.com/solving-site-architecture-issues/
See: http://i.imgur.com/EA8gfEe.gif
I hope this makes sense; if you have any questions, please feel free to ask. If you already have the architecture in place, keep it; if you do not, map it out like the photographs.
Hope this helps,
Tom
-
RE: Best site structure for us?
Thank you for clarifying.
I believe that if you have a site with parent pages, you can benefit from a popular subfolder,
but I would go with the one most relevant to the keywords.
So to answer your question: create each with specific keywords.
All the best,
Thomas
-
RE: Best site structure for us?
Can you send me the domain?
- http://www.wpbeginner.com/wp-themes/how-to-use-multiple-themes-for-pages-in-wordpress/
- http://www.bloggingwizard.com/top-wordpress-landing-page-plugins/
- https://unbounce.com/landing-page-articles/what-is-a-landing-page/
- a great theme goes a long way!
- https://kinsta.com/blog/wordpress-free-vs-paid-themes/
Do NOT use capital letters in URLs.
"My question is...Would we be better off to include all landing pages under a domain.com/wordpress-themes/ category/tax and then go for the optimized-landing-page-title.php page"
Use Yoast or something like it to set the URL and optimized landing page title in WordPress. I also love Unbounce if you need it; otherwise WordPress is fine.
"domain.com/wordpress-themes/ category/tax"
Only if you want to add the category; you can do it with or without it.
I would not let URLs end in .php unless they are internal.
- https://www.digitalocean.com/community/questions/nginx-remove-php-from-url
- https://wordpress.stackexchange.com/questions/81769/how-to-remove-the-index-php-in-the-url
If you want a good UX, the URL below is worth a look.
Hope this helps,
Tom
-
RE: Is there any way to automatically filter negative keywords out of Keyword Explorer results?
For a feature request
Send an email to the great folks at help@moz.com for feature requests. In fact, if I bring up the need, a wonderful Moz associate may just reply to you right here.
I think There are two simple ways to do it.
Create a new negative-only keyword list of broad match terms in Moz.
Their ability to create separate lists makes this easy
if you know enough about which terms are going to be negative keywords.
In some cases you want to set similarity to zero; in other cases you want similarity mid-low or high.
There are SEM tools from SpyFu, WordStream, and SEMrush that try to create negative keywords based on your AdWords keywords.
I have to say that Moz Keyword Explorer is an amazing tool (https://moz.com/explorer) and will be able to find similar results.
hope that helps,
Tom
-
RE: Is robots met tag a more reliable than robots.txt at preventing indexing by Google?
Test for what works for your site.
Use tools below
- https://www.deepcrawl.com/ (will give you one free full crawl)
- https://www.screamingfrog.co.uk/seo-spider/ (free up to 500 URLs)
- http://urlprofiler.com/ (14 days free try)
- https://www.deepcrawl.com/blog/best-practice/noindex-disallow-nofollow/
- https://www.screamingfrog.co.uk/seo-spider/user-guide/general/#robots-txt
- https://www.deepcrawl.com/blog/best-practice/noindex-and-google/
So much info
https://www.deepcrawl.com/blog/tag/robots-txt/
Thomas
-
RE: Is robots met tag a more reliable than robots.txt at preventing indexing by Google?
Hi Luke,
To exclude individual pages from search engine indices, the noindex meta tag is actually superior to robots.txt.
The X-Robots-Tag HTTP header is even better, but it is much harder to use.
To block all web crawlers from all content:
User-agent: *
Disallow: /
Using the robots.txt file, you can tell a spider where it cannot go on your site. You cannot tell a search engine which URLs it must not show in the search results. This means that not allowing a search engine to crawl a URL (called "blocking" it) does not mean that URL will not show up in the search results. If the search engine finds enough links to that URL, it will include it; it will just not know what's on that page.
If you want to reliably block a page from showing up in the search results, you need to use a meta robots noindex tag. That means the search engine has to be able to crawl that page and find the noindex tag, so the page should not be blocked by robots.txt.
In a nutshell, a robots.txt file tells search engines not to crawl a particular page, file, or directory of your website. Using it helps both you and search engines such as Google: by not providing access to certain unimportant areas of your website, you can save on your crawl budget and reduce load on your server.
Please note that using the robots.txt file to hide your entire website from search engines is definitely not recommended.
See big photo: http://i.imgur.com/MM7hM4g.png
(…)
<meta name="robots" content="noindex">
(…)
The robots meta tag in the above example instructs all search engines not to show the page in search results. The value of the name attribute (robots) specifies that the directive applies to all crawlers. To address a specific crawler, replace the robots value of the name attribute with the name of the crawler that you are addressing. Specific crawlers are also known as user-agents (a crawler uses its user-agent to request a page). Google's standard web crawler has the user-agent name Googlebot. To prevent only Googlebot from indexing your page, update the tag as follows:
<meta name="googlebot" content="noindex">
This tag now instructs Google (but no other search engines) not to show this page in its web search results. Both the name and the content attributes are non-case-sensitive.
Search engines may have different crawlers for different properties or purposes. See the complete list of Google's crawlers. For example, to show a page in Google's web search results but not in Google News, use the following meta tag:
<meta name="googlebot-news" content="noindex">
If you need to specify multiple crawlers individually, it's okay to use multiple robots meta tags:
<meta name="googlebot" content="noindex">
<meta name="googlebot-news" content="nosnippet">
If competing directives are encountered by our crawlers, we will use the most restrictive directive we find.
This basically means that if you want to really hide something from the search engines, and thus from people using search, robots.txt won't suffice.
**Indexer directives**
Indexer directives are directives that are set on a per-page and/or per-element basis. Up until July 2007, there were two directives: the microformat rel="nofollow", which means that the link should not pass authority / PageRank, and the meta robots tag.
With the meta robots tag you can really prevent search engines from showing pages you want to keep out of the search results. The same result can be achieved with the X-Robots-Tag HTTP header. As described earlier, the X-Robots-Tag gives you more flexibility by also allowing you to control how specific file types are indexed.
**Example uses of the X-Robots-Tag**
The X-Robots-Tag can be used as an element of the HTTP header response for a given URL. Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag. Here's an example of an HTTP response with an X-Robots-Tag instructing crawlers not to index a page:
HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
(…)
X-Robots-Tag: noindex
(…)
Multiple X-Robots-Tag headers can be combined within the HTTP response, or you can specify a comma-separated list of directives. Here's an example of an HTTP header response which has a noarchive X-Robots-Tag combined with an unavailable_after X-Robots-Tag:
HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
(…)
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2010 15:00:00 PST
(…)
The X-Robots-Tag may optionally specify a user-agent before the directives. For instance, the following set of X-Robots-Tag HTTP headers can be used to conditionally allow showing of a page in search results for different search engines:
HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
(…)
X-Robots-Tag: googlebot: nofollow
X-Robots-Tag: otherbot: noindex, nofollow
(…)
Directives specified without a user-agent are valid for all crawlers. Both the name and the specified values are not case sensitive.
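As a rough sketch, if you happen to be on Apache with mod_headers enabled, the header can be set at the server level without touching the HTML; the file extensions below are only an example:

```
<IfModule mod_headers.c>
  # Keep PDF and Word files out of the index via the HTTP header
  <FilesMatch "\.(pdf|docx?)$">
    Header set X-Robots-Tag "noindex, nofollow"
  </FilesMatch>
</IfModule>
```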
- https://moz.com/learn/seo/robotstxt
- https://yoast.com/ultimate-guide-robots-txt/
- https://moz.com/blog/the-wonderful-world-of-seo-metatags
- https://yoast.com/x-robots-tag-play/
- https://www.searchenginejournal.com/x-robots-tag-simple-alternate-robots-txt-meta-tag/67138/
- https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag
I hope this helps,
Tom
-
RE: How to compete with business names and urls that include location?
Google can tell how far away you are via your ISP's IP address (Comcast, for example), and if you're on a mobile device they can measure the distance from the person to the chiropractor's office. What's important is that you optimize for chiropractic work and make sure that you use a tool like Moz Local; this will get you into the universal/local results. I would also recommend a site audit to make sure everything is running smoothly and Googlebot is not being blocked.
You can also use schema, as well as on-page signals, to let Google know where you are located. Also fill out Google My Business.
I really don't think that the URL is going to make that much of a difference. It may play a small part, but if your site is better optimized you're going to win; otherwise the competitor may win on authority, or they could simply be closer.
hope it helps,
tom
-
RE: Only One Canonical URL Tag
Smart move. Let me know if I can be of any more help. Does this answer your question?
-
RE: Only One Canonical URL Tag
You have an RSS feed that is adding a canonical to your own site?
RSS importers normally add canonicals pointing to third-party sites. I don't think the issue is Yoast; I believe it is with the RSS post importer.
This is the second-best SEO plugin you can use:
https://wordpress.org/plugins/all-in-one-seo-pack/
Every decent WordPress SEO plugin will add canonicals. You can choose to turn off canonicals in Yoast; however, I would address this through the RSS post importer instead.
If you are using the WordPress SEO plugin, you'll have to add a filter in the functions.php file just before the closing PHP tag, as shown in the screenshot below.
See
http://i.imgur.com/cr7IU4W.jpg
Here is the exact code:
add_filter( 'wpseo_canonical', '__return_false' );
This will disable canonical tags across the whole site: no page, post, or archive will show the tag. But if you want to disable the canonical only on certain posts, pages, or category archives, you'll have to use the code below instead.
function wpseo_canonical_exclude( $canonical ) {
    global $post;
    if ( is_single( '348' ) ) {
        $canonical = false;
    }
    return $canonical;
}
add_filter( 'wpseo_canonical', 'wpseo_canonical_exclude' );
As you know, every post, page, category, or other archive has its own unique ID in WordPress. So in the above example we used 348, which is the post ID for a specific post where the canonical tag will not show. If you don't know how to find the ID, here is a good article for you. To hide this tag from multiple posts, use this code:
function wpseo_canonical_exclude( $canonical ) {
    global $post;
    if ( is_single( array( 17, 19, 1, 11 ) ) ) {
        $canonical = false;
    }
    return $canonical;
}
add_filter( 'wpseo_canonical', 'wpseo_canonical_exclude' );
I still strongly recommend that you keep the canonical on via the plugin and not via RSS; there are too many things RSS could miss that would cause problems for you.
I hope this helps,
Tom
-
RE: Only One Canonical URL Tag
Here is a CSV export showing the pages with more than one canonical; you must make it so there is only one canonical per page.
Here is the CSV:
Bigger photo http://i.imgur.com/cqcTN0h.png
**Directives - Canonical**

| Address | Occurrences | Meta Robots 1 | X-Robots-Tag 1 | Meta Refresh 1 | Canonical Link Element 1 | Canonical Link Element 2 | HTTP Canonical | rel="next" | rel="prev" |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| http://www.completetenders.com/services/bid-management/ | 3 | noodp | | | http://www.completetenders.com/services/bid-management/ | http://www.completetenders.com/services/bid-management/ | | | |
| http://www.completetenders.com/ | 2 | noodp | | | http://www.completetenders.com/ | | | | |
| http://www.completetenders.com/services/tendering-process/ | 3 | noodp | | | http://www.completetenders.com/services/tendering-process/ | http://www.completetenders.com/services/tendering-process/ | | | |
| http://www.completetenders.com/services/tender-writing/ | 3 | noodp | | | http://www.completetenders.com/services/tender-writing/ | http://www.completetenders.com/services/tender-writing/ | | | |
| http://www.completetenders.com/open-tenders/ | 2 | noodp | | | http://www.completetenders.com/open-tenders/ | | | | |

All the best,
Tom
-
RE: Only One Canonical URL Tag
I just checked the URL you referenced: you have duplicate canonical tags. I checked it using https://www.screamingfrog.co.uk/seo-spider/.
Please look at the data below.
You need to remove one of them; you should only have one.
So instead of this:
- Canonical Link Element 1 http://www.completetenders.com/services/bid-management/
- Canonical Link Element 2 http://www.completetenders.com/services/bid-management/
you should get:
- Canonical Link Element 1 http://www.completetenders.com/services/bid-management/
See
URL Encoded Address http://www.completetenders.com/services/bid-management/
Content text/html; charset=UTF-8
Status Code 200
Status OK
Size 150645
Title 1 Bid Management / End-to-end Tendering Support
Meta Description 1 Professional Bid Management Consultants / Bid Solutions to Win Contracts / Everything You Need / Expert Advice on 07429 191305
H1-1 Bid Management
H2-1 Bid Management
H2-2 View More
Meta Robots 1 noodp
Canonical Link Element 1 http://www.completetenders.com/services/bid-management/
Canonical Link Element 2 http://www.completetenders.com/services/bid-management/
Word Count 723
Level 0
Inlinks 14
Outlinks 23 -
RE: Only One Canonical URL Tag
Look at your source code on these pages: use Command-F (on Mac) or Ctrl-F (on PC) to search for the term "canonical". If you do not see it more than once on the page, there should be no issue.
"the only difference I can see is the first one uses '...' and the second one uses "..."
The canonical link? If there are two canonical tags on the same page, make sure to remove the one that is not pointing to your preferred page, or make sure the canonical is self-referencing. But make certain there is no more than one canonical tag on the page.
Look for `<link rel="canonical" href="…" />` in the page's head.
Also, I would use:
- https://www.deepcrawl.com/ allowing free site crawls right now
- and/or
- https://www.screamingfrog.co.uk/seo-spider/ free up to 500 pages.
Just to confirm you do not have duplicate canonicals, both tools will let you know for sure, as will checking the source code of the flagged page. If there is in fact only one canonical per page and it still gets flagged, I would report it to Moz; they are awesome at getting back to you and answering these types of issues.
I hope that helps,
Tom
-
RE: If I use schema markup for my google reviews, would it be smart to have Google review's on my home page?
What Miriam is saying is 100% correct. Thank you for updating me, Miriam!
I would recommend using a system for gathering your own testimonials/reviews; Reputation Stacker is a great one, and so is Whitespark. I have provided some URLs below. These are paid services that allow you to cultivate your own unique testimonials, which you can then mark up with schema. However, as stated below and as Miriam said, you cannot use schema on reviews from websites other than yours; they have to appear on your site first.
- https://reputationstacker.com/ (great tool)
- https://whitespark.ca/reputation-builder/
- https://whitespark.ca/review-handout-generator/
- https://whitespark.ca/google-review-link-generator/
"The big change here is that when you include third-party syndicated reviews that are not “directly produced by your site,” you should not mark up those reviews with schema. Only “directly produced by your site, not reviews from third-party sites or syndicated reviews” should be marked up, according to these guidelines".
https://developers.google.com/search/docs/data-types/reviews#local-business-reviews
All the best,
Tom
-
RE: Google cache is for a 3rd parties site for HTTP version and correct for HTTPS
PS: who is your hosting company? If they're a managed host, I would suggest you have them look into the issue and request that they force HTTPS.
-
RE: Google cache is for a 3rd parties site for HTTP version and correct for HTTPS
Please verify everything is set up in Google Webmaster Tools for HTTPS.
There are four versions of your site to add to Webmaster Tools. Pick the version you wish to have indexed (in your case it will be the www or non-www HTTPS version), then fetch as Googlebot and look for errors.
- http://
- http://www
- https://
- https://www
https://varvy.com/tools/redirects/
Screaming Frog SEO Spider is free for the first 500 URLs; it will show if there is a problem and will most likely be very helpful.
https://www.screamingfrog.co.uk/seo-spider/
or https://deepcrawl.com to figure out whether or not there is duplicated content and to find possibly broken redirects or missing HSTS.
This will confirm you do not have any bad redirects.
If you do have bad redirects, you will have to speak to your hosting company, or at least share information with us about your server configuration, to make sure that it is set to force HTTPS.
See the guides below to make sure you don't have an error somewhere.
- https://www.keycdn.com/blog/http-to-https/
- https://moz.com/blog/seo-tips-https-ssl
- http://searchengineland.com/http-https-seos-guide-securing-website-246940
- https://support.google.com/webmasters/answer/6073543?hl=en
If you would like to force the redirect to HTTPS, there are third-party tools that will allow you to do this for any site.
You still will want to use deep crawl or screaming frog to check your set up.
I hope this is of help, and the last URL is a free tool.
Tom
-
RE: Only One Canonical URL Tag
You are correct: you should only have one canonical tag on a page, at most.
Because you're using the WordPress Yoast SEO plugin, I can tell you that it automatically adds the canonical URL for you. When you are modifying something that should point somewhere else, you have to go into the advanced area shown in the photo below.
So all of these URLs would show the same content:
- http://example.com/wordpress/seo-plugin/
- http://example.com/wordpress/seo-plugin/?isnt=it-awesome
- http://example.com/wordpress/seo-plugin/?cmpgn=twitter
- http://example.com/wordpress/seo-plugin/?cmpgn=facebook
You would want to point them all to the first URL via the canonical.
For more information on how to use that plugin, see the link below. Remember: only one canonical per page, and if the page is duplicate content, point it to the page that you want Google to treat as the owner of that content.
If you get stuck ask me or reference this below.
https://yoast.com/rel-canonical/
All the best,
Tom
-
RE: If I use schema markup for my google reviews, would it be smart to have Google review's on my home page?
The only reason I can think of why you would not want to is if you're in some sort of industry where it is not relevant, but those are hard to think of. As far as putting schema on your reviews, I strongly recommend it. Below are two generators of Microdata and one of JSON-LD schema; use the one you like.
- https://webcode.tools/json-ld-generator/review
- https://webcode.tools/microdata-generator/review
- http://tools.seochat.com/tools/review-schema-generator/
- another generator for basic schema https://hallanalysis.com/json-ld-generator/
- and if you use WordPress https://wordpress.org/plugins/schema-app-structured-data-for-schemaorg/
**This is a great article on how to get the stars and your rating to show up underneath the snippet in the Google SERPs:**
https://whitespark.ca/blog/how-to-use-aggregate-review-schema-to-get-stars-in-the-serps/
Here is an example of the kind of code you could use.
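As a minimal JSON-LD sketch (the business name, rating value, and review count are placeholders, and per the guideline above you should only mark up reviews collected directly on your own site):

```
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Business",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "27"
  }
}
</script>
```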
All the best,
Tom
-
RE: Screaming Frog tool left me stumped
Hi Ruchy,
That's fantastic; DeepCrawl is a great tool.
I was talking about two different types of cloaking, one using a URL with a CNAME for hidden cloaking, e.g. https://www.rebrandly.com/.
Thank you for the kind words. Giving Good Answers and thumbs up or thumbs down definitely does make a difference; a lot of it is outlined at the official MozPoints URL: http://moz.com/community/mozpoints
Yes, giving a thumbs-up is telling somebody they have helped you and that their response was valuable and maybe contributed to answering your question.
Marking a Good Answer means the person has answered your question; giving that, as well as a thumbs-up, is a way of saying thank you.
Welcome to the Moz community, and I look forward to seeing you here.
All the best,
Tom
-
RE: Worldwide and Europe hreflang implementation.
I would put them all on one domain. I would not worry about people caring about the .eu TLD; .com is far more common over there than .eu.
I would put it all under one domain rather than breaking it up over two domains. Using subfolders, just target the rest of the world with English; you are basically making that your x-default alternate, which is the one served when there is no proper language to fit the browser language.
Yes, you absolutely can target the same URL with multiple hreflang tags. In your case, because you're going to have so many hreflang tags, I would recommend implementing them through the XML sitemap; it tends to be faster, although you'll need a tool like DeepCrawl or Screaming Frog to make sure they're all right (a rough sketch follows the generator link below).
http://www.aleydasolis.com/en/international-seo-tools/hreflang-tags-generator/
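As a rough illustration of the sitemap approach (example.com, the /de/ folder, and the language codes are placeholders; every URL in the sitemap needs its own entry repeating the full set of alternates):

```
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/de/</loc>
    <xhtml:link rel="alternate" hreflang="de-DE" href="https://example.com/de/" />
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/" />
    <xhtml:link rel="alternate" hreflang="x-default" href="https://example.com/" />
  </url>
</urlset>
```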
You don't need to add a subfolder for the x-default alternate tag.
Obviously you can use /de/ for Germany with the German language (de-DE), and so forth, until you target each country with the specific language that you want to target it with. It seems like you're not interested in selling outside Europe; as I'm sure you know, each piece of content will have to be written by somebody who is native to the country that you are targeting.
I really think having the correct content is just as important as having the correct tags. Most of the time people do not use a subfolder for their x-default; you could theoretically do it if you were not going to use English at all.
http://www.acronym.com/bebrilliant/seo/hreflang-sitemaps-free-tool/
I would use a single domain or, depending on your resources and what you can put into this, separate country TLDs. Neither of the generic TLDs offers any special benefit for ranking; what I'm saying is that .eu and .com are not as powerful as .co.uk in the United Kingdom or .de in Germany.
My thought is that it would save you a lot of time not to have to run two generic domains just for the sake of aesthetics, when they add no extra trust. I honestly feel that in Europe people do not think .eu is any more trustworthy than .com, though that is only one person's opinion (mine).
Make sure that if you're splitting up your domains, you do not try to run them as one domain with the hreflang tags as shown.
-
RE: Screaming Frog tool left me stumped
Hey, just wanted to check: was any answer helpful to you?
Is there anything we might've missed or failed to help you solve the issue?
All the best,
Tom
-
RE: Worldwide and Europe hreflang implementation.
I think I have a good answer for you; give me about four hours.
-
RE: Google & Site Architecture
Let me do a quick audit of this and I will get back to you right away; sorry about the long wait. When you talk about the inability to change the navigation (level 3), can I ask whether it is because you do not have development rights, or because it is a CMS issue?
Tom
-
RE: Google & Site Architecture
Becky, I am so sorry for the long delay. I will reply to you by this time tomorrow.
Tom
-
RE: How to Remove Web Cache snapshot page & other language SEO Title in Google Search Engine?
If you add content to your site and send the change signal to Google, it will be more likely to re-crawl your website and hopefully update that URL. However, I think the first URL is the best bet.
-
RE: Screaming Frog tool left me stumped
Hi Ruchy,
Screaming Frog is an excellent tool, and Dan is a great guy who keeps it updated.
In my experience, I have been able to find more information using https://www.deepcrawl.com/.
The cost is higher, and DeepCrawl is also a hosted platform.
Still, it can do remarkable things that no other tool that I know of can do.
There are tools built just for checking cloaking; this one is free:
http://www.seotools.com/seo-cloaking-checker/
If you have access to the DNS, you will want to examine it thoroughly.
I have also laid out how you would find CNAME DNS records if you do not have access to the DNS.
One other method you might have to use, to determine whether or not a domain or subdomain has been cloaked, is shown here: http://www.ghosturl.com/ (if the cloaking happens in the manner described at that URL).
So the scenario would be the cloaked URL would be something like
http://www.cloaked.example.com/?=ku2b4B30ijbasT47720sb534Nbq6
(Please know I used example.com as the domain because I did not want to point at any live site, not because it has anything to do with cloaking.)
In that case they are most likely using a CNAME inside the DNS pointed at http://www.example.com.
You will need to use a tool that can look at the IP or CNAME records being served by the DNS.
The free tool I know of that can look for a CNAME or A record that should not be there is https://www.cloudflare.com. The way you would do this is to run the site through the DNS wizard, which pulls the current DNS including all the records (it can miss some, but it does an excellent job of catching most DNS records; 9 times out of 10 it will get all of them).
Because most tools make it impossible to see the CNAME when looking at DNS, it is worth remembering this free service; it is not designed for this purpose, but it can be used to discover the rogue cloaked URL.
https://www.cloudflare.com/a/sign-up (the free-tier offers the same ability to get DNS information)
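If you have shell access, the standard dig or host utilities will also show a CNAME directly; the hostname below is just the hypothetical cloaked subdomain from the example above:

```
dig +short www.cloaked.example.com CNAME
host -t CNAME www.cloaked.example.com
```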
http://i.imgur.com/Wm7W0aM.mp4
https://cl.ly/0A3k3d2j401n/Screen Recording 2017-01-26 at 07.09 AM.mov
http://i.imgur.com/lIN9Gsm.png
Hope This helps,
Tom
-
RE: How to Remove Web Cache snapshot page & other language SEO Title in Google Search Engine?
You can use
https://www.google.com/webmasters/tools/submit-url
Without logging into Webmaster tools
**I am guessing that is why you said "without Webmaster Tools" when you asked.**
**It is almost impossible to affect Google's caching of a site without dealing with Google. You can block things, but updating content such as a new title tag or description can be requested anonymously using the URL at the top.**
-
RE: Recovering from spam links on MY site
I would check the site for malware; most of the time, people who hack sites leave much more than just spun content.
You can check it out by using https://sitecheck.sucuri.net/
To keep this from happening in the future, I have provided security information URLs at the bottom of the reply. What security measures have you taken to determine that there is no malicious code causing this?
**What do your backlinks look like? (I know they sent external links out from your site with the spam content; I'm talking about links pointing to your site.) Did they link to you as well?**
Regarding speaking to Google, there are methods of letting them know you don't want to show certain content on specific URLs as well as telling them that content is outdated. Unfortunately, there is not a one click fix for what you're dealing with.
You can remove the URLs from Google, as long as they're not now being used for something helpful, by going to the first URL below. The URLs below will also help you with the spammy external links.
https://support.google.com/webmasters/answer/1663419?hl=en
You can also communicate to Google that this content is outdated through this method:
https://support.google.com/websearch/answer/6349986?hl=en
My best advice to you is to do a complete SEO audit as well as a security audit, and to prevent this from happening in the future.
Use tools like Moz, DeepCrawl, Screaming Frog SEO Spider, SEMrush, etc. to keep tabs on all the fundamentals of your site.
Keeping this from happening in the future:
If you want your site checked for malware every four hours and to have it removed, you can add:
- https://sucuri.net/website-antivirus/signup
- https://www.armor.com/security-solutions/armor-anywhere/
**For blocking malware and keeping this from happening in the future:**
- https://www.incapsula.com/
- https://www.armor.com/security-solutions/armor-anywhere/
- https://www.stackpath.com/web-application-firewall/
- https://sucuri.net/website-firewall/
These will help prevent attacks like this from occurring. By whitelisting IP addresses or using two-factor authentication across the board, you will be able to minimize the chance of this happening again.
To understand exactly what the problem is I would need to know the domain name as well as a bit more information. I hope what I've given you is helpful.
I would be happy to take a quick look at your site. If you feel comfortable posting your domain, please post it; if not, you can send it to me via private message.
I hope this is of help,
Thomas
-
RE: Good CDN
New WAF/CDNs
Only one of them also offers a free plan.
I must be upfront and TAGFEE: I am a partner of Imperva, the company that owns Incapsula.
Incapsula now has a free offer on its content delivery network; of course, there are two catches.
- If your site Is encrypted a.k.a. Using SSL or https:// you must choose a paid plan.
You do not get access to their phenomenal rewrite rules, which can speed up your site quite a bit. Still, if you have no HTTPS and no intention of using it, this is better than CloudFlare when it comes to speed and much better when it comes to security, but you must pay for the security features on both networks.
The second content delivery network is not free. But worth mentioning.
StackPath is a new CDN/WAF built on MaxCDN's network. However, it offers you much more for the base price of $20/month, compared to the hundreds of dollars a month the same features would cost on MaxCDN.
This CDN is unique because you get so much for the money. I bring it up only because people browsing this will hopefully find this useful.
Hopefully, this is of use to people checking out this question.
All the best,
Tom
-
RE: GA Landing Page Inaccuracies
Could you give me a better example of exactly what you're doing when this occurs? It sounds like you might be using cookies to count the submission, or the URL is a 404 but it is in the AdWords system, so AdWords redirects to the 404 thinking everything is fine; you would have to change the rules in Google Analytics and AdWords.
People can also, unfortunately, influence Google Analytics; in a lot of cases you have to put filters on. Could you describe exactly what's occurring? I know you did a good job above, but give me a better picture of it if you don't mind.
Respectfully,
Thomas
-
RE: Issue with site not being properly found in Google
Moderator's Note: Attached images, along with select links in this thread have been edited and/or removed for privacy at the request of the OP.
--
I noticed your robots.txt is fixed, but I would recommend two things to get your site back into the index faster. Based on the photographs below, I am suggesting fetching your site as Googlebot as well as adding your XML sitemap to Webmaster Tools.
Please do not forget to add all four versions of your website to Webmaster Tools if they have not been added.
When I say that, I mean add every version of the URL below to Google Webmaster Tools, with and without www.
Then set the site's preferred (canonical) URL; choose the one with www.
here is a reference from Google
https://support.google.com/webmasters/answer/34592?hl=en&ref_topic=4564315
I would do two things. First, I would add the sitemap to the robots.txt file, because if you're going to use search tools it's going to help you.
You should set up your robots.txt just like this:
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://www.website.com/sitemap_index.xml
you can reference
https://yoast.com/ultimate-guide-robots-txt/
**Allow directive**
While not in the original "specification", there was talk of an allow directive very early on. Most search engines seem to understand it, and it allows for simple and very readable directives like this:
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
The only other way of achieving the same result without an allow directive would have been to specifically disallow every single file in the wp-admin folder. You don't want your login page showing up in Google.
After that, I would go into Webmaster Tools / Search Console and fetch as Googlebot.
Ask Google to re-crawl your URLs
If you've recently made changes to a URL on your site, you can update your web page in Google Search with the Submit to Index function of the Fetch as Google tool. This function allows you to ask Google to crawl and index your URL.
See
http://searchengineland.com/how-to-use-fetch-as-googlebot-like-seo-samurai-214292
https://support.google.com/webmasters/answer/6066468?hl=en
Ask Google to crawl and index your URL
- Click Submit to Index, shown next to the status of a recent, successful fetch in the Fetches Table.
- Select **Crawl only this URL** to submit one individual URL to Google for re-crawling. You can submit up to 500 individual URLs in this way within a 30-day period.
- Select **Crawl this URL and its direct links** to submit the URL, as well as all the other pages that URL links to, for re-crawling. You can submit up to 10 requests of this kind within a 30-day period.
- Click Submit to let Google know that your request is ready to be processed.
Adding your XML sitemap
(https://www.website.com/sitemap_index.xml)
to Google Webmaster Tools will help Google determine that you are back online, and you should not see any real fallout from this. Submitting a complete XML sitemap also gets a lot of images into Google Images.
I hope this helps,
Tom
-
RE: Issue with site not being properly found in Google
Hi, it seems your robots.txt file is blocking Google and all other bots that crawl the web and obey robots.txt (basically the good ones). So if you would like your site to be seen and indexed by Google and other search engines, you need to remove the forward slash "/"
shown here in your robots.txt file, which blocks all web crawlers from all content:
User-agent: *
Disallow: /
- Go here to see it: https://www.website.com/robots.txt
- Please read https://moz.com/learn/seo/robotstxt
- Use this to make the file: http://tools.seobook.com/robots-txt/generator/
It looks like you're using WordPress, so if you're using Apache or Yoast SEO you can go in and set it to use this. I added your XML sitemap (https://www.brightonpanelworks.com.au/sitemap_index.xml):
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://www.website.com/sitemap_index.xml
You can use tools like this to analyze and fix robots.txt, and you can always see your file by adding /robots.txt after the .com or TLD.
I hope that helps,
Tom
-
-
RE: Google & Site Architecture
Think CRAWL BUDGET
The crawl budget is the number of requests made by Googlebot to your website in a particular period of time. In simple terms, it's the number of opportunities you have to present Google with the fresh content on your website.
See this to understand
https://www.deepcrawl.com/knowledge/best-practice/optimize-crawl-budget-tips-examples/
If you ever repeat a URL path more than twice, the URL will not be indexed. For example, a URL like example.com/path/path/path/ would not be indexed in Google.
Even if the repeated paths are broken up by another unique path, the URL will not be indexed. This URL, for example, would not be indexed either:
example.com/path/path/unique/path/
This is because Google thinks it has hit a URL trap.
URL traps occur most often when a relative link includes the same path as where the page is located. Relative URLs are added to the end of the paths of the URL which contains the link.
For example, if you had a page like example.com/path/page1.html which included a relative link back to itself using "path/page1.html", the actual URL of the link would be example.com/path/path/page1.html. If that page is returned by the server, it will contain another relative link to "path/page1.html", which is actually the URL example.com/path/path/path/page1.html. And so on ad infinitum.
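As a small illustration, assuming a page served at example.com/path/page1.html:

```
<!-- Relative href: resolves against the current folder, so from
     example.com/path/page1.html it becomes example.com/path/path/page1.html
     and nests one level deeper on every crawl -->
<a href="path/page1.html">Next</a>

<!-- Root-relative href: always resolves to example.com/path/page1.html -->
<a href="/path/page1.html">Next</a>
```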
See https://www.deepcrawl.com/knowledge/best-practice/never-repeat-pathnames-in-urls-more-than-twice/
Build Your Universal Navigation
- Identify why visitors come to your site. You probably have a pretty good idea of what people want already, but check your web analytics:
- What search terms do visitors use before they get to your site? Keywords used by incoming visitors tell you what your visitors were looking for before they clicked through to your site. Follow up to see which pages they visited - did they find what they were looking for?
- If you’re tracking internal site search, what search terms do visitors use once they’re on your site? On average, only 10% of visitors use site search. So, it’s safe to assume that most people only use site search if they have a hard time finding what they want with your navigation. What terms are visitors searching for? Do you have that page? Is it hidden?
- What pages on your site get the most traffic? If those are the pages that you want to get the most traffic, keep those in mind as you build your navigational structure to make sure they're easy for visitors to find. If they aren't particularly high conversion pages, what's a similar page that you can steer those visitors to?
- What are your top exit pages? If they’re locations or external contact information, that’s probably something a lot of your visitors are looking for. You should include that in your top navigation.
Divide your products/key pages into categories.
- Usability experts recommend "card sorting": put your products on cards, lay them out on a flat surface so you can see them all, and cluster similar items together. There are also a few websites out there that will let you sort cards without taking up so much floor space: http://www.optimalworkshop.com/optimalsort.htm and http://uxpunk.com/websort/
https://www.distilled.net/blog/seo/site-navigation-for-seo/
Hope this helps,
Tom
-
RE: Seo Yoast Plugin
You want to add your XML sitemap to Google Webmaster Tools,
and the HTML sitemap to the website's footer.
see
https://www.distilled.net/blog/seo/site-navigation-for-seo/
Tom
-
RE: Staff??
Never look a gift horse in the mouth. Just kidding; the legitimate Moz staff did a great job of letting you know it was an error with the system.
Keep up the good work. All the best,
Tom
-
RE: Alternative to Moz Content?
If you were a paid Moz Content user, you get 50% off of BuzzSumo for six months, I believe.
I wish they would keep it around; to be honest, I like it. I was going to recommend scribecontent.com, but they are now only offering it to hosting subscribers.
I would use
http://buzzsumo.com/blog/50-of-content-gets-8-shares-or-less-why-content-fails-and-how-to-fix-it/
https://moz.com/blog/content-shares-and-links-insights-from-analyzing-1-million-articles
https://www.quora.com/Is-there-any-alternative-to-BuzzSumo
https://seosemtools.knoji.com/buzzsumo-vs-ahrefs-vs-moz-seo-tool-seo-tools-reviewed-and-compared/
-
RE: Alt text and itemprop description
I would use schema on the URL, so Google knows what you want to appear as your logo. However, the description is not as necessary.
script type="application/ld+json">
{"@context": "http://schema.org/",
"@type": "Organization",
"url": "http://www.yourcompany.com/",
"logo": "http://www.yourcompany.com/logo.png"
}If you use <a class="selected" data-selects="microdata">Microdata</a> schema you will have your description show up on your page you want that?
<div itemscope itemtype="http://schema.org/Hotel">
  <h1 itemprop="name">ACME Hotel Innsbruck</h1>
  <img itemprop="logo" src="hotel-logo.png" alt="hotel logo" /> <!-- src is a placeholder -->
  <p itemprop="description">A beautifully located business hotel right in the
  heart of the alps. Watch the sun rise over the scenic Inn valley while
  enjoying your morning coffee.</p>
</div>
<img src="company-logo.png" alt="Company name" /> <!-- src values are placeholders -->
<img src="image.png" alt="image description" title="image tooltip" />
Alt text is best when always used, but you do not want to make it too long; three words should usually suffice. As for a description under your logo, that is a choice you and your designer have to discuss; I would think it would look a little odd.
Keep in mind
"Each image should have an alt text. Not just for SEO purposes but also because blind and visually impaired people otherwise won’t know what the image is for. A title attribute is not required. It can be useful but in most cases, leaving it out shouldn’t be much of an issue."
- https://yoast.com/image-seo-alt-tag-and-title-tag-optimization/ (ALT TEXT)
- https://developers.google.com/search/docs/data-types/articles (news)
- https://www.logomaker.com/blog/2014/11/19/use-schema-org-markup-logo-design/
- http://webmasters.stackexchange.com/questions/58059/correct-usage-of-schema-org-for-logo
I hope this helps,
Tom
-
RE: Anyone seem anything from penguin yet?
I have seen a very definite change on one client site which uses an exact match domain.
With that said, I believe what was occurring was doubled-up anchor text, with internal linking and external linking both carrying the domain name into the backlink profile.
Honestly, this is only a hunch, but the site has been increasing in traffic for the past 2 1/2 years pretty steadily.
This was the first big down cycle, and as Google has stated, this will not affect the domain entirely, but it will change the pages hit by spam.
I'm going to run a couple of tests on dummy sites that get at least 10K visits of traffic every month, allowing comment spam and link spam onto individual pages, and watch the fallout.
I do agree with you about what Dr. Pete mentioned: it is delayed because Google has to crawl all the sites, and depending on your crawl budget, and even regional internal Google PageRank, it could affect some sites more quickly than others.
US sites will be the first to feel the impact of Penguin.
For anybody tuning in on the subject here are some good references.
- https://searchenginewatch.com/2016/09/23/penguin-4-0-is-finally-here-google-confirms/
- https://webmasters.googleblog.com/2016/09/penguin-is-now-part-of-our-core.html
- http://searchengineland.com/google-updates-penguin-says-now-real-time-part-core-algorithm-259302
I hope this is of help,
Tom
-
RE: Google Ecommerce Tracking
Hi Tim,
I am happy I could be of help. I ran into a similar issue and that was the fix; it was good timing.
Let me know if there's anything else I can do.
All the best,
Tom