According to John Mueller, the answer is no (at least in the long term)
https://www.seroundtable.com/google-long-term-noindex-follow-24990.html
Hi Moz Community,
Are Bing/Yahoo crawlers different from Google’s crawler in terms of how they process client side JavaScript and especially content/data loaded by client side JavaScript?
Thanks,
Hi Moz Community,
I have a question about content personalization: can we serve personalized content without being penalized for showing different content to robots vs. users? If content starts in the same initial state for all users, including crawlers, is it safe to assume there should be no SEO impact, since personalization won't happen for anyone until there is some interaction?
Thanks,
Hi Moz Community,
Is there a proper way to do an SPA (client side rendered) and a PWA without a negative impact on SEO? Our dev team is currently trying to convert most of our pages to an Angular single page application, client side rendered. I told them we should use a prerendering service or server side rendering instead, since this would ensure that most web crawlers (and users with JS disabled) would be able to render and index all the content on our pages even with all the heavy JS use. Is there an even better way to do this, or some best practices?
In terms of the PWA they want to add along with the SPA change, I told them the two are pretty much separate, since they are not dependent on each other; adding a manifest and service worker to our site would just be an enhancement. But if we do a complete PWA, with JS populating the content/data within the shell (not just the header and footer, making the body a dynamic JS template as well), would that affect our SEO in any way? Any best practices here as well?
Thanks!
Hi Moz community,
Our tech team has recently decided to try switching our product pages to be JavaScript dependent; this includes links, product descriptions, and things like breadcrumbs rendered in JS. Given my concerns, they will create a proof of concept with a few product pages in a QA environment so I can test the SEO implications of these changes. They are planning to use Angular 5 client side rendering without any prerendering. I suggested Angular Universal, but they said the lift was too great, so we're testing to see if this works.
I've read a lot of the articles in this guide to all things SEO and JS and am fairly confident in understanding when a site uses JS and how to troubleshoot to make sure everything is getting crawled and indexed.
https://sitebulb.com/resources/guides/javascript-seo-resources/
However, I am not sure I'll be able to test the QA pages, since they aren't indexable and live behind a login. I will be able to crawl them using Screaming Frog, but that generally shows what a crawler should be able to crawl, not what Googlebot will actually be able to crawl and index.
Any thoughts on this, is this concern valid?
Thanks!
Thanks for your help on this Nigel
Hey Nigel,
These parameters are already in my search console but Moz is still picking them up as duplicates.
Hi Logan,
I've seen your responses on several threads now on pagination and they are spot on so I wanted to ask you my question. We're an eCommerce site and we're using the rel=next and rel=prev tags to avoid duplicate content issues. We've gotten rid of a lot of duplicate issues in the past this way but we recently changed our site. We now have the option to view 60 or 180 items at a time on a landing page which is causing more duplicate content issues.
For example, page 2 of the 180 item view is similar to page 4 of the 60 item view (URL examples below). Each view version has its own rel=next and rel=prev tags. Wondering what we can do to get rid of this issue besides just removing the 180 and 60 item view options.
https://www.example.com/gifts/for-the-couple?view=all&n=180&p=2
https://www.example.com/gifts/for-the-couple?view=all&n=60&p=4
Thoughts, ideas or suggestions are welcome. Thanks!
Hi Nigel,
Thanks for the response and the post, I've actually read the article before and used the rel=next and rel=prev to fix some duplicate content issues because of pagination in the past.
Right now, rel=next and rel=prev are not solving my duplication problems because pagination isn't the issue, so to speak. The duplication is occurring because I have two page types (one viewing 60 items and one viewing 180 items, kind of like a filter). Each view (60 & 180) has its own set of pagination rules, but page 4 of the 60 view ends up a duplicate of page 2 of the 180 view, if that makes sense.
It becomes really tricky here to try and find a solution.
Hi Moz Community,
We're an eCommerce site, so we have a lot of pagination issues, but we were able to fix them using the rel=next and rel=prev tags. However, our pages have an option to view 60 items or 180 items at a time. This is now causing duplicate content problems when, for example, page 2 of the 180 item view is the same as page 4 of the 60 item view (URL examples below). Wondering if we should just add a canonical tag pointing to the main view-all page on every page in the paginated series to get rid of this issue.
https://www.example.com/gifts/for-the-couple?view=all&n=180&p=2
https://www.example.com/gifts/for-the-couple?view=all&n=60&p=4
Thoughts, ideas or suggestions are welcome. Thanks
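To illustrate the idea (using the hypothetical example URLs above), every paginated variant would carry the same canonical back to the view-all page, so the item-count parameter stops mattering:

```html
<!-- In the <head> of /gifts/for-the-couple?view=all&n=180&p=2 -->
<link rel="canonical" href="https://www.example.com/gifts/for-the-couple?view=all" />

<!-- In the <head> of /gifts/for-the-couple?view=all&n=60&p=4 -->
<link rel="canonical" href="https://www.example.com/gifts/for-the-couple?view=all" />
```

This is just a sketch of the view-all consolidation pattern; it assumes the view-all page actually renders all items, since Google may ignore canonicals pointing at pages with substantially different content.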
Hi Anthony,
Thanks for that response, that makes a lot of sense.
Best,
Zack
Hi Anthony,
Thanks for your reply. We have a very high turnover rate for products, and many items go out of stock frequently. Would you recommend just noindexing the out of stock pages that don't have any traffic or links and are very old, so they don't waste crawl budget? Normally it takes a while for Google to take 404 or 410 pages out of its index, especially if the pages are old and don't get crawled very often.
Thanks
Zack
Hi Moz Community!
We're doing an audit of our e-commerce site at the moment and have noticed a lot of 404 errors coming from out of stock/discontinued product pages that we've kept returning 200 in the past. We kept these pages and added links on them to categories or products similar to the discontinued items, but many other links on the page (images, blog posts, even breadcrumbs) have broken or are no longer valid, causing lots of additional 404s.
If the product has been discontinued for a long time and gets no traffic and has no link equity would you recommend adding a noindex robots tag on these pages so we're not wasting time fixing all the broken links on these?
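For what it's worth, the noindex on those pages would just be the standard robots meta tag (a generic sketch, not specific to any platform):

```html
<!-- In the <head> of a long-discontinued product page -->
<meta name="robots" content="noindex, follow" />
```

The "follow" keeps the category/similar-product links on the page crawlable, though per John Mueller's comments (linked at the top of this thread list), Google may eventually treat a long-term noindex,follow as noindex,nofollow.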
Any thoughts? Thanks
Hi Mozers,
I want to keep the HTTP XML sitemap live on my HTTP site to keep track of indexation during the HTTPS migration. I'm not sure if this is doable, since once our tech team forces the redirects, every HTTP page will become HTTPS.
Any ideas? Thanks
Hi Christian,
Thanks for the reply. HTTPS rel=canonical tags were added to live pages; as I expected, this is why some are showing up in the search results. It's a problem, though, for GA and Search Console tracking, since we haven't made the switch server side yet and HTTP pages don't currently redirect to their HTTPS versions. So we're seeing no sessions for our HTTP versions.
If I change the rel=canonical back to http on the live site I'm guessing the non secure pages will show up again after being crawled?
Thanks!
Hi Moz Community,
Recently our tech team has been taking steps to switch our site from HTTP to HTTPS. The tech team has looked at all the SEO redirect requirements and we're confident about the switch; we're not planning to roll anything out until a month from now.
However, I recently noticed a few https versions of our landing pages showing up in search. We haven't pushed any changes out to production yet so this shouldn't be happening. Not all of the landing pages are https, only a select few and I can't see a pattern. This is messing up our GA and Search Console tracking since we haven't fully set up https tracking yet because we were not expecting some of these pages to change.
HTTPS has always been supported on our site but never indexed so it's never shown up in the search results. I looked at our current site and it looks like landing page canonicals are already pointing to their https version, this may be the problem.
Anyone have any other ideas?
Does anyone have insight into the session percentage lift for their blog or their site after making the move from a subdomain blog to a subfolder? I'm seeing a lot of people talk about improvements in rankings for keywords on their blog and site but haven't seen anyone list out session numbers to go with that data.
Thanks
I recently filtered query information by week and by day. The impression and click totals were different depending on whether I looked at totals by full weeks or by day.
So, for example, the impression and click totals when I choose a date range of Monday-Sunday are different from what I get when I look at impressions and clicks for that same week by day and add up the daily numbers into a weekly total. At first I was expecting a slight difference, since I know the data is heavily sampled, but the totals were very different.
Any explanations for this?
Thanks
Currently we have two versions of a category page on our site (listed below)
Version A: www.example.com/category
• lives only in the SERPS but does not live on our site navigation
• has links
• user experience is not the best
Version B: www.example.com/category?view=all
• lives in our site navigation
• has a rel=canonical to version A
• very few links and doesn’t appear in the SERPS
• user experience is better than version A
Because the user experience of version B is better than version A, I want to remove the rel=canonical from version B to version A and instead put a rel=canonical on version A pointing to version B. If I do this, will version B eventually show up in the SERPs and replace version A? If so, how long do you think this would take? Will this essentially pass PageRank from version A to version B?
Hi all,
I've been looking around at some of our competitors' websites and noticing huge amounts of keyword stuffing throughout the pages, and also grouped at the bottom of the page. From what I've been taught, it's not a good thing to do and you can be penalized for it. What's anyone else's take on keyword stuffing and how it's looked upon in 2017? Is there a maximum number of keywords you should have on your page?
Here are a few URL's to the websites I'm talking about and their webpage.
https://www.walmart.com/cp/personalized-gifts/133224 - Keyword stuffing in the bottom group text for the word "personalized"
http://www.personalcreations.com/unique-groomsmen-gifts-pgrmsmn - Keyword stuffing in bottom group text for "groomsmen"
http://www.groovygroomsmengifts.com/ - keyword stuffing throughout page for "groomsmen"
Hi Andy,
Thanks for the quick reply, we did not get any errors or warning from our search console when this was implemented. We added the star ratings markup to our product pages back in early Nov or late Oct of 2016. We also have price and availability markup on our product pages.
We did take a look at one of our product pages using the testing tool and everything seems fine. I've heard from a few others that it's really up to Google whether or not they show the snippet features in search, but wanted to know if anyone had any advice.
Would anyone know why, after adding schema.org markup to a website's products, reviews, and ratings, it still isn't showing up in Google SERPs? Does Google pick and choose whose rich snippets get displayed on a random basis, or by some other criteria?
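As a sanity check, the markup Google looks for on a product page is roughly the following (a minimal sketch with made-up values; JSON-LD is one accepted format, microdata is another):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "87"
  },
  "offers": {
    "@type": "Offer",
    "price": "35.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

Even with markup that validates in Google's structured data testing tool, showing the stars remains at Google's discretion, which is consistent with what the replies in this thread describe.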
Anyone interested in a reciprocal knowledge sharing opportunity? Does your team have experience with BloomReach Organic Services? We’re considering testing it out and are unsure of the long-term payoff given the substantial monthly fees for the SEO add-on. We're a pure e-commerce shop already using BloomReach’s merchandising solution so the technical integration is behind us.
What do you guys think?
What's more accurate? GA queries data or Moz/SEMRush keyword data for rankings?
Any thoughts appreciated.
Hi Everett,
Thanks for your response here, I've looked at both pages and our /gifts/birthday-gifts page has way more impressions, higher CTR and higher conversion than our /gifts/birthday-gifts/birthday-gifts version of the page.
So would you still recommend not changing anything or consolidating both pages?
Thanks!
Hi Moz Community,
Recently I've been seeing multiple pages from my eCommerce site pop up in the SERPS for a couple of queries. Usually I would count this as a good thing but since both pages that generally pop up are so similar I'm starting to wonder if we would rank better with just one page.
My example is the query "birthday gifts." Both of the URLs below show up in the search results one after the other on the first page. The URL on top is our family page and the one below it is our subcat page; you can find both in the top nav of our site.
www.uncommongoods.com/gifts/birthday-gifts/birthday-gifts (family)
www.uncommongoods.com/gifts/birthday-gifts (subcat)
Both of these pages have different PAs, and the subcat page that currently lives in our site nav is actually www.uncommongoods.com/gifts/birthday-gifts?view=all. This URL doesn't show up in the SERPs and is rel=canonicaled to the subcat page without the parameter listed above. We use this page in the nav because we think it's a better user experience than the actual subcat page.
If we were to condense all three pages into one would we rank higher?
Any thoughts here would be appreciated.
Thanks
I've heard a lot about STAT. What's your experience with them, and in your opinion how is their ranking data better than GA's?
Thanks
Yep, trying to incorporate them in a rankings report but I find it to be slightly different from Moz rankings.
Hi Moz Community,
I wanted to know how reliable the average position data is for queries in the Google Analytics Search Console report. I know this report is fairly new this year, and the numbers are calculated a bit differently than they were in the old search engine optimization report.
I want to know what the biggest differences are between this Search Console report and the old SEO report in GA. I'm also pretty confused about how GA calculates the average position. Obviously it's an average position over whatever date range you choose. But, for instance, if your site shows multiple landing pages for one search query, will it roll those into the average or just take the landing page that ranks higher? Does the position average take into account video or photo SERP results, and is it averaged across mobile, desktop, and tablet?
This number has always been a bit of a guess since it's sampled data, but I want to know how accurate it is. I read an article about this in 2014 (linked below), but I'm not sure it all still applies now that the data may be presented differently.
Any answers or discussions would be great here.
Thanks
Hi Laura,
Thanks for your response. I know that adding canonical tags can lower our indexed pages count. The only thing I'm worried about is that the drop in indexed pages has been steady since Nov of last year, and we only implemented the tags last month.
We have been seeing lower conversions with less traffic some of which has to do with us not ranking as well for a few key-terms as we did last year.
Hey Moz Community,
I've been seeing a steady decrease in Search Console of pages being indexed by Google for our eCommerce site. This corresponds to lower impressions and traffic in general this year. We went from around a million pages indexed in Nov of 2015 down to 18,000 pages this Nov. Since we only carry around 3,000 or so products year round, I realized this is most likely a good thing.
I've checked to make sure our main landing pages are being indexed which they are and our sitemap was updated several times this year, although we're in the process of updating it again to resubmit. I also checked our robots.txt and there's nothing out of the ordinary. In the last month we've recently gotten rid of some duplicate content issues caused by pagination by using canonical tags but that's all we've done to reduce the number of pages crawled. We have seen some soft 404's and some server errors coming up in our crawl error report that we've either fixed or are trying to fix.
Not really sure where to start looking for a solution, or whether this is even a huge issue, but the drop in traffic is also not great. The drop in traffic corresponded to a loss in rankings as well, so there may or may not be a correlation.
Any ideas here?
Hi Moz Community,
Our e-commerce site is trying to gauge the opportunity of certain queries for specific countries. I'm trying to use the search console data presented in GA to do this. I'm looking at the top queries filtered by each country and also the top landing pages for each country as well.
The unfiltered data for queries and landing pages is completely different from the by-country data, and some of it looks wrong. For instance, our most popular query by impressions shows 0 query impressions in the US once filtered by country. Our site is based in the US, so this doesn't make any sense; the same is true for landing pages.
Is the query and landing page data in GA under Search Console a combination of all countries? Since our target is set to the USA in Search Console, is this data technically US based?
How is this data so off?
Thanks for answering!
Hey Guys,
I recently noticed that our Christmas Gifts landing page was ranking twice in the Google serps for the query "Christmas Gifts."
One of these pages is an old URL that has already been 301 redirected to the new URL, which is also showing up in the search results. The following show up in positions 2 & 3 for the "Christmas Gifts" query:
www.uncommongoods.com/gifts/christmas/christmas-gifts
www.uncommongoods.com/occasions/christmas-gifts/christmas-gifts
The URL with "occasions" in it has already been 301 redirected to the URL above it, so I'm not sure why it's still showing up. I know it takes Google some time to process 301s and they sometimes show old URLs, but it's been a few months since the old "occasions" URL was redirected. The title tags for these pages are different, but they are actually the same page. The new "gifts" version of the URL was made live in the navigation of our site just last week; before that it was hidden from our navigation. Would this be the reason it's now showing up in search? Any ideas on why this might be happening? Thanks
Hi Moz Community,
We've been implementing new canonical tags for our category pages but I have a question about pages that are found via search and our filtering options. Would we still need a canonical tag for pages that show up in search + a filter option if it only lists one page of items? Example below.
www.uncommongoods.com/search.html/find/?q=dog&exclusive=1
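If you do decide these need a canonical even when the results fit on one page, one hedged option (assuming the filtered and unfiltered result sets substantially overlap) is pointing the filtered version at the unfiltered search URL:

```html
<!-- In the <head> of /search.html/find/?q=dog&exclusive=1 -->
<link rel="canonical" href="http://www.uncommongoods.com/search.html/find/?q=dog" />
```

If the filtered page's content is meaningfully different, a self-referencing canonical may be the safer choice instead; Google tends to ignore canonicals between pages it doesn't consider duplicates.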
Thanks!
Thanks for being so thorough with your answer!
I'll have our tech team take a look at the markup. We used to have our review content embedded in the HTML, and I've heard that increases crawl frequency and is also easy for search engines to understand. Now I think we might be loading it with AJAX, which apparently causes confusion for crawlers.
OK, so you're saying we should have the right markup for our aggregate reviews, but that even with the correct markup Google might ignore it anyway? I was told that our reviews helped boost our rankings when we implemented them years ago. But then again, that could be because it was new then; we've had reviews on our site for ages now.
Thanks!
Hi Britney,
Thanks for your response, here's an example of an item page:
http://www.uncommongoods.com/product/intersection-of-love-photo-print
I want to make sure that the new way we've set up the reviews doesn't interfere with Google's ability to crawl our review content. We used to have our review text in the page code, and I was certain Google could crawl our reviews section to gather our user generated content. Now that the page is using JavaScript, it might be harder for Google to crawl. When I Google any snippet from a review it shows up in search, so it seems like this isn't really a problem?
Hi Moz community,
I've been trying to do some on-site work and noticed that our product pages reviews may not be totally optimized. It used to be that all of the text from the reviews appeared in the actual code of the page, but now none of that text appears, so it may not be getting crawled. The change was most likely released when we had an item page redesign. However, when I Google a review snippet, it does seem to come up, so maybe Google is crawling that data despite it not being SEO optimized.
Is this really an issue if the review snippets are showing up in search? There's been a lot of talk that Google is now better at crawling JavaScript.
Thanks
Hi MOZ community,
I'm hoping you guys can help me with this.
Recently our site switched our landing pages to include a 180 item and a 60 item version of each category page. These are creating duplicate content problems, with the two examples below showing up as duplicates of the original page.
http://www.uncommongoods.com/fun/wine-dine/beer-gifts?view=all&n=180&p=1
http://www.uncommongoods.com/fun/wine-dine/beer-gifts?view=all&n=60&p=1
The original page is
http://www.uncommongoods.com/fun/wine-dine/beer-gifts
I was just going to add a rel=canonical from these 180 item and 60 item pages to the original landing page, but then I remembered that some of these landing pages have page 1, page 2, page 3, etc. I told our tech department to use rel=next and rel=prev for those pages. Is there anything else I need to be aware of when I apply the canonical tag to the two duplicate versions if they also have page 2 and page 3 with rel=next and rel=prev?
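For a concrete sketch, the head of a hypothetical page 2 of the 60 item view could combine both sets of tags like this (URLs follow the examples above; whether to point the canonical at the main page or make it self-referencing is the judgment call being asked about):

```html
<!-- Hypothetical <head> for /fun/wine-dine/beer-gifts?view=all&n=60&p=2 -->
<link rel="canonical" href="http://www.uncommongoods.com/fun/wine-dine/beer-gifts" />
<link rel="prev" href="http://www.uncommongoods.com/fun/wine-dine/beer-gifts?view=all&n=60&p=1" />
<link rel="next" href="http://www.uncommongoods.com/fun/wine-dine/beer-gifts?view=all&n=60&p=3" />
```

One caveat worth checking: Google's pagination guidance has generally recommended canonicalizing paginated pages to themselves (or to a true view-all page), not to page 1, so mixing a cross-page canonical with rel=next/prev may cause the pagination tags to be ignored.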
Thanks
Hey Moz Users,
Has anyone tried using the WordPress plugin for AMP pages on their blog yet? Here's the link to it: https://wordpress.org/plugins/amp/.
The implementation seems pretty straightforward, but since there will be both an AMP and a mobile friendly version of the posts on my blog, I'm worried it will create a lot of duplicate content issues. I've seen a lot of articles pointing to a rel=canonical tag that can be used to fix this. I'm not sure I'm going to have an AMP version of every post on my blog, so it seems like it would be a pain to place the tag manually on just the pages with an AMP version.
Has anyone tried this plugin and what have you done to fix this duplicate content issue?
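For reference, the standard way AMP avoids the duplicate content problem is a pair of tags linking the two versions to each other (URLs here are hypothetical; the WordPress plugin is supposed to emit these automatically):

```html
<!-- On the regular (mobile friendly) post -->
<link rel="amphtml" href="https://example.com/blog/post/amp/" />

<!-- On the AMP version of the post -->
<link rel="canonical" href="https://example.com/blog/post/" />
```

With this pairing Google treats the regular post as canonical and the AMP page as an alternate rendering, so posts without an AMP version need nothing extra.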
Thanks
The page wasn't for a time sensitive query at all; it's a product page which we traditionally rank well for. I've checked the links to the page as well and I haven't seen anything out of the ordinary. I'm hoping it's testing related to the new Penguin release, like you've stated.
Thanks!
Hi all,
Two weeks ago I noticed that one of our pages, which normally ranks in the top 5 of search results, had dropped out of the top 50. I checked to make sure there were no Google penalties and that the page was crawlable. Everything seemed fine, and after a few hours the page went back to the number one position. I assumed it was just Google flux.
That number one ranking lasted about a week. Today I see the page has dropped out of the top 50 yet again and hasn't come back up. Again, there are no penalties and there don't seem to be any issues with the page. I'm hoping it comes back to the top by tomorrow.
What could be causing such a big dip multiple times?
Any idea why this only happens on mobile?
Hi Moz Community,
I was searching for "gifts for men" on Google on my phone and saw a few results in 3rd (Nordstrom), 4th (Etsy), and 5th (Grommet) place that had their brand name in the area under the title tag where the green URL is usually listed on desktop.
One example of the green text under the title tag is Nordstrom's, which looks like this:
Nordstrom > Shop > Gifts
Whereas the first result from UncommonGoods looks like this in the green text:
www.uncommongoods.com > by recipient
I'm trying to figure out what markup Nordstrom, Etsy, etc. used on their sites to get their brand name to show up as a name rather than a URL.
Anyone know the answer to this?
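The usual mechanism for that display is breadcrumb structured data. A minimal sketch (URLs and page names here are made up, not Nordstrom's actual markup) looks like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Shop",
      "item": "https://shop.nordstrom.com/" },
    { "@type": "ListItem", "position": 2, "name": "Gifts",
      "item": "https://shop.nordstrom.com/c/gifts" }
  ]
}
</script>
```

With valid BreadcrumbList markup, Google can replace the raw URL path in the green line with the breadcrumb names, with the site name shown first.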
Thanks!
Thanks John!
I checked a few Etsy category and item pages and found the og: tag.
Great answer and I'm glad you found this interesting and useful.
Hi,
I've been thinking of placing our brand name at the front of our title tag for brand recognition purposes. While doing research I came across a few sites that seem to have their brand name on every title tag, regardless of whether or not the title was too long and getting cut off by Google.
Ex: Personalized Cutting Boards & Humidors...-Etsy
The title tag in the example above was for a store on Etsy that sells personalized cutting boards, which is what I searched for. Normally a title tag that is too long gets cut off by Google, and your brand name no longer shows if you've positioned it at the end.
Is there a way to get your brand name to show up at the end of every title tag even if the title is long and gets cut off by Google? Obviously I could just place the brand name at the front of my title tag, but is something like the example above possible?
Thank you
Thanks Oleg,
The link you sent for the webmaster page has been deprecated since Oct 2015; does your recommendation still hold?
I would like to make a change to the way our main navigation is currently rendered on our e-commerce site. Currently, all of the content that appears when you click a navigation category is rendered on page load. This accounts for a large portion of every page visit's bandwidth, and even the images are downloaded if a user never opens the navigation.
I'd like to change it so the content is downloaded and appears only if the user clicks on it; I'm planning on using AJAX. In that case the content wouldn't be in the page automatically (which may or may not mean Google would crawl it). As we already provide a sitemap.xml for Google, I want to make sure this change would not adversely affect our SEO.
As of October this year, the Webmaster AJAX crawling doc has been deprecated. While the new version does say that Google's crawlers are smart enough to render AJAX content, something I've tested, I'm not sure if that applies only to content injected on page load as opposed to on click, like I'm planning to do.
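A minimal sketch of the click-triggered approach being described (the /nav/gifts.html endpoint and class names are hypothetical):

```html
<nav>
  <button data-panel="/nav/gifts.html" class="nav-toggle">Gifts</button>
  <div class="nav-panel" hidden></div>
</nav>
<script>
  document.querySelectorAll('.nav-toggle').forEach(function (btn) {
    btn.addEventListener('click', function () {
      var panel = btn.nextElementSibling;
      if (!panel.dataset.loaded) {
        // Fetch the panel HTML only on first click, then cache it
        fetch(btn.dataset.panel)
          .then(function (res) { return res.text(); })
          .then(function (html) {
            panel.innerHTML = html;
            panel.dataset.loaded = 'true';
            panel.hidden = false;
          });
      } else {
        panel.hidden = !panel.hidden;
      }
    });
  });
</script>
```

The SEO caveat this raises: Googlebot generally renders pages but does not click, so links loaded only after a click are unlikely to be seen as internal links at all; a sitemap preserves URL discovery, but not the internal link equity the navigation currently passes.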
Here is the message:
"Googlebot found an extremely high number of URLs on your site: http://www.uncommongoods.com/"
Should I try to do anything about this? We are not having any indexation issues, so we think Google is still crawling our whole site. What could be the possible repercussions of ignoring this?
Thanks Mozzers!
-Zack
We have a bit of a conundrum. Webmaster tools is telling us that they are crawling too many URLs:
Googlebot found an extremely high number of URLs on your site: http://www.uncommongoods.com/
In their list of URL examples, all of the URLs have tons of parameters. We would probably be OK telling Google not to index any of the URLs with parameters. We have a clean URL structure: all of our category and product pages have clean links (no parameters). The parameters come only from sorts and filters, and we don't need Google to index all of those pages.
However, Google Analytics is showing us that over the last year we received a substantial amount of search revenue from many of these URLs (800+ of them converted).
So, Google is telling us they are unhappy. We want to make Google happy by having it ignore all of the parameter URLs, but we're worried this will kill the revenue we're seeing.
Two questions here:
1. What do we have to lose by keeping everything as-is? Google is giving us errors, but other than that, what are the negative repercussions?
2. If we were to de-index all of the parameter URLs via Webmaster Tools, how much of the revenue would likely be recovered by our non-parameter URLs?
I've linked to a screenshot from Google Analytics