Unsolved: Next.js and Missing Content
-
Hello
We recently migrated our site to Next.js, which is supposed to be great for SEO.
On almost all our pages we are getting the same errors:
Missing Canonical Tag
Missing Title
Missing or Invalid H1
Missing Description
We don't understand this because we have all of that content on every page. We believe that maybe Next.js has an incompatibility with Moz. Has anyone had any experience with this?
-
@tom-capper
Thanks for your response. Since we are using Next.js, we already get built-in pre-rendering (one of the main draws towards it).
When you mention fallback functionality, can you be more specific about what you mean?
Thanks
-
Hi
Moz Pro is not a JavaScript-capable crawler, so it will be seeing the raw HTML version of these pages.
That said, search engines also have varying and often limited JavaScript capability, and most SEOs would advise you to have decent fallback functionality - through pre-rendering, for example.
You can tell roughly how Google sees your pages through Google's cache, Search Console, and how you're appearing in SERPs. But keep in mind that Google's handling without any non-JS fallback may be buggy, and other search engines and crawlers (including social networks) will only do worse.
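For anyone running into the same report: the usual fix is to make sure the title, meta description, canonical and H1 come out of Next.js's pre-rendering step rather than being injected client-side, so they are already present in the raw HTML that a non-JavaScript crawler fetches. Below is a minimal sketch using the pages router; the route, data and example.com URLs are placeholders rather than anything from the site in question.
```tsx
// pages/products/[slug].tsx - a minimal sketch, not the poster's actual code.
// The route, data shape and URLs are invented for illustration.
import Head from "next/head";
import type { GetStaticPaths, GetStaticProps } from "next";

type Props = {
  title: string;
  description: string;
  canonicalUrl: string;
};

// Runs at build time (or on demand with fallback: "blocking"), so the tags
// rendered below are baked into the HTML response itself.
export const getStaticProps: GetStaticProps<Props> = async ({ params }) => {
  const slug = String(params?.slug);
  return {
    props: {
      title: `Products | ${slug}`,
      description: `Details and specs for ${slug}.`,
      canonicalUrl: `https://www.example.com/products/${slug}`,
    },
  };
};

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [{ params: { slug: "blue-widget" } }],
  fallback: "blocking",
});

export default function ProductPage({ title, description, canonicalUrl }: Props) {
  return (
    <>
      <Head>
        <title>{title}</title>
        <meta name="description" content={description} />
        <link rel="canonical" href={canonicalUrl} />
      </Head>
      {/* The H1 is part of the pre-rendered markup, not added client-side */}
      <h1>{title}</h1>
    </>
  );
}
```
A quick sanity check is to curl the URL or use "view source" in the browser: if the title, description, canonical and H1 appear in that raw response, a non-JS crawler like Moz's can see them; if they only show up in the rendered DOM, they are being added client-side.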
Related Questions
-
Feedback on Content Ideation / "Skyscraper" Spreadsheet Template
Hi All - I've been getting a ton of use out of the Moz API for discovering the popularity of content, which I'm using for content ideation or to implement the Skyscraper concept. I built a spreadsheet template that combines Moz with some other APIs to apply this to new topics of my choosing, and my friends encouraged me to clean it up a bit and share it with the broader community. So, here it is - fire away! I'd love any and all feedback about the spreadsheet - it's still a prototype, so it could stand to pull back more results. For example: would you want to include Domain Authority in the results? Focus more or less on the social sharing elements, or let you choose the thresholds? I'd also love to know if there are other methodologies for which you'd be interested in seeing spreadsheet templates produced. Cheers! skyscraper-template.png
Moz Pro | paulkarayan
-
Duplicate content nightmare
Hey Moz Community, I ran a crawl test and there is a lot of duplicate content, but I cannot work out why. It seems that when I publish a post, secondary URLs are being created depending on some tags and categories. Or at least, that is what it looks like. I don't know why this is happening, nor do I know if I need to do anything about it. Help? Please.
Moz Pro | MobileDay
-
Why do I see duplicate content errors when the rel="canonical" tag is present?
I was reviewing my first Moz crawler report and noticed the crawler returned a bunch of duplicate page content errors. The recommendations to correct this issue are to either put a 301 redirect on the duplicate URL or use the rel="canonical" tag so Google knows which URL I view as the most important and the one that should appear in the search results. However, after poking around the source code I noticed all of the pages that are returning duplicate content in the eyes of the Moz crawler already have the rel="canonical" tag. Does the Moz crawler simply not catch whether that tag is being used? If I have that tag in place, is there anything else I need to do in order to get that error to stop showing up in the Moz crawler report?
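As a point of reference, here is roughly what a correct canonical looks like. The question doesn't say what stack this site runs on, so to keep the examples in one language the sketch below uses the Next.js-style head from the main thread above; the component name, prop and URLs are placeholders. Two things worth checking on the flagged pages: the tag needs to be present in the server-rendered HTML rather than injected by JavaScript, and every page in a duplicate group should point at the same preferred absolute URL, not each at itself.
```tsx
// A sketch only: the component name, prop and URLs are placeholders.
import Head from "next/head";

// Render one canonical per page, pointing at the single URL you want indexed.
export function Canonical({ preferredUrl }: { preferredUrl: string }) {
  return (
    <Head>
      <link rel="canonical" href={preferredUrl} />
    </Head>
  );
}

// Usage on a duplicate such as /shop?sort=price, canonicalising to the clean URL:
// <Canonical preferredUrl="https://www.example.com/shop" />
```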
Moz Pro | shinolamoz
-
My domain has a * next to it in my dashboard. What does this mean?
When two of my sites appear in my Dashboard, they have a * before them, e.g. *my-debt.co.uk. What does this mean?
Moz Pro | BitsCube
-
"no urls with duplicate content to report"
Hi there, I am trying to clean up some duplicate content issues on a website. The crawl diagnostics report says that one of the pages has 8 other URLs with the same content. When I click on the number "8" to see the pages with duplicate content, I get to a page that says "no urls with duplicate content to report". Why is this happening? How do I fix it?
Moz Pro | fourthdimensioninc
-
"Issue: Duplicate Page Content " in Crawl Diagnostics - but these pages are noindex
Hello guys, our site is nearly perfect - according to SEOmoz campaign overview. But, it shows me 5200 Errors, more then 2500 Pages with Duplicate Content plus more then 2500 Duplicated Page Titles. All these pages are sites to edit profiles. So I set them "noindex, follow" with meta robots. It works pretty good, these pages aren't indexed in the search engines. But why the SEOmoz tools list them as errors? Is there a good reason for it? Or is this just a little bug with the toolset? The URLs which are listet as duplicated are http://www.rimondo.com/horse-edit/?id=1007 (edit the IDs to see more...) http://www.rimondo.com/movie-edit/?id=10653 (edit the IDs to see more...) The crawling picture is still running, so maybe the errors will be gone away in some time...? Kind regards
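For context, the "noindex, follow" setup described here is a single robots meta tag in the page head. A sketch in the same Next.js/React style used elsewhere in this thread is below; that framing is an assumption, as rimondo.com may well output plain HTML instead. Also worth noting: noindex keeps a page out of the index but does not stop a crawler from fetching it, so a crawl-based report can still download these edit pages and compare their content.
```tsx
// Sketch only: the tag itself is standard HTML; the Head wrapper is an
// assumption carried over from the Next.js discussion above.
import Head from "next/head";

export function NoIndexFollow() {
  return (
    <Head>
      {/* Keep the page out of the index but let crawlers follow its links */}
      <meta name="robots" content="noindex, follow" />
    </Head>
  );
}
```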
Moz Pro | mdoegel
-
SEOmoz Bot indexing JSON as content
Hello, We have a bunch of pages that contain local JSON we use to display a slideshow. This JSON has a bunch of <a> links in it. For some reason, these <a> links that are in the JSON are being indexed and recognized by the SEOmoz bot, showing up as legit links for the page. One example page this is happening on is: http://www.trendhunter.com/trends/a2591-simplifies-product-logos. Searching for the string '<a' yields 1100+ results (all of which are recognized as links for that page in SEOmoz); however, ~980 of these are JSON code and not actual links on the page. This leads to a lot of invalid links for our site, and a super inflated on-page link count for the page. Is this a bug in the SEOmoz bot? And if not, does Google work the same way?
Moz Pro | trendhunter-159837
-
Solving duplicate content errors for what is effectively the same page.
Hello,
I am trying out SEOmoz and I quite like it. I've managed to remove most of the errors on my site; however, I'm not sure how to get round this last one. If you look at my errors you will see most of them revolve around things like this:
http://www.containerpadlocks.co.uk/categories/32/dead-locks
http://www.containerpadlocks.co.uk/categories/32/dead-locks?PageSize=9999
These are essentially the same page, because the Dead Locks category does not contain enough products to spread over more than one page, so when I click 'View all products' on my webpage the results are the same. This functionality works fine for categories with more than the 20-per-page limit. My question is, should I be:
Removing the link to 'show all products' (which adds the PageSize query string value) if no more products will be shown?
Or putting a no-index meta tag on the page?
Or some other action entirely?
Looking forward to your reply and to you showing how effective Pro is. Many thanks,
James Carter
Moz Pro | jcarter