Getting Posts Indexed
-
On a WordPress site I'm working on you can get to any product from the homepage in 2 clicks, but I'm a little concerned about the URLs, which look like this: domain/categoryname/subcategoryname/productpage
Will I have trouble getting my products indexed?
-
Hi Wayne, indexing and indexing speed may depend a bit on the total number of pages/URLs on the site combined with your current site authority, but the URL structure itself shouldn't give you any problems. Like Istvan, I recommend installing the Yoast SEO plugin and also creating an HTML sitemap (so that all of your pages are only 1 click away from home).
Good luck!
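For what it's worth, here is a minimal Python sketch of one way to generate that kind of flat HTML sitemap from an existing XML sitemap. The sitemap URL is a placeholder, and it assumes a plain urlset file; a real Yoast install serves a sitemap index, so you would need to follow the child sitemaps it lists.

```python
# Minimal sketch: build a flat HTML sitemap from an existing XML sitemap.
# Assumptions: the sitemap lives at the placeholder URL below and is a plain
# <urlset>; adapt it to follow a sitemap index if that is what your site serves.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP_URL) as resp:
    root = ET.fromstring(resp.read())

urls = sorted(loc.text.strip() for loc in root.findall(".//sm:loc", NS))

# Print a simple list you can paste into a "Sitemap" page linked from the
# homepage, so every URL ends up one click away from home.
print("<ul>")
for url in urls:
    print(f'  <li><a href="{url}">{url}</a></li>')
print("</ul>")
```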
-
3 layers isn't too bad. As long as you have decent domain authority, indexation should be okay.
Just make sure that all category pages are structured nicely and link to the pages in their subcategories.
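If you want to spot-check that, a quick Python sketch along these lines can confirm a subcategory page is actually linked from its parent category page in the raw HTML (the URL pair below is a placeholder matching the domain/categoryname/subcategoryname pattern from the question):

```python
# Sketch: check that each subcategory page is linked from its parent category page.
# The (category, subcategory) URL pairs are placeholders for your own structure.
import re
import urllib.request
from urllib.parse import urljoin, urlparse

PAIRS = [
    ("https://example.com/categoryname/",
     "https://example.com/categoryname/subcategoryname/"),
]

for category_url, subcategory_url in PAIRS:
    with urllib.request.urlopen(category_url) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    target_path = urlparse(subcategory_url).path.rstrip("/")
    hrefs = re.findall(r'href=["\']([^"\']+)["\']', html)
    linked = any(
        urlparse(urljoin(category_url, href)).path.rstrip("/") == target_path
        for href in hrefs
    )
    print(f"{category_url} -> {subcategory_url}: {'linked' if linked else 'NOT linked'}")
```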
-
You shouldn't have a problem with indexing there. Still, I would advise you to install the Yoast plugin and use it to manage indexation (you can tell it what not to index).
Good luck,
Istvan
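As a quick sanity check once Yoast is configured, a small Python sketch like this (the URLs are placeholders) will show the meta robots directive each page actually serves, so you can confirm the "do not index" choices really made it into the HTML:

```python
# Sketch: print the meta robots directive a page serves, to verify the noindex
# settings chosen in an SEO plugin are present in the rendered HTML.
# The URLs are placeholders.
import re
import urllib.request

URLS = [
    "https://example.com/",
    "https://example.com/tag/example-tag/",
]

for url in URLS:
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    tag = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.I)
    content = None
    if tag:
        m = re.search(r'content=["\']([^"\']+)["\']', tag.group(0), re.I)
        content = m.group(1) if m else None
    print(url, "->", content or "no meta robots tag (crawlers default to index,follow)")
```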
Related Questions
-
Noindex tag vs robots.txt
Hi Mozzers, A client's website has a lot of internal directories defined as /node/*. I already added the rule 'Disallow: /node/*' to the robots.txt file to prevent bots from crawling these pages. However, the pages are already indexed and appear in the search results. In a Deepcrawl article, they say you can simply add the rule 'Noindex: /node/*' to the robots.txt file, but other sources claim the only way is to add a noindex directive in the meta robots tag of every page. Can someone tell me which is the best way to prevent these pages from getting indexed? Small note: there are more than 100 pages. Thanks!
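A small Python sketch of the check that matters here (the URLs are placeholders): robots.txt Disallow only stops crawling, so already-indexed /node/ pages stay indexed; what removes them is a noindex, via the meta robots tag or an X-Robots-Tag header, that Googlebot is still allowed to crawl.

```python
# Sketch: for a sample of /node/ URLs, report whether they serve a noindex
# directive Google could act on (X-Robots-Tag header or meta robots tag).
# Remember that while 'Disallow: /node/*' is in robots.txt, Googlebot cannot
# crawl the pages to see that noindex at all. The URLs are placeholders.
import re
import urllib.request

NODE_URLS = [
    "https://example.com/node/123",
    "https://example.com/node/456",
]

for url in NODE_URLS:
    with urllib.request.urlopen(url) as resp:
        x_robots = resp.headers.get("X-Robots-Tag", "none")
        html = resp.read().decode("utf-8", errors="ignore")
    meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.I)
    has_noindex = bool(meta and "noindex" in meta.group(0).lower())
    print(f"{url} | X-Robots-Tag: {x_robots} | meta robots noindex: {has_noindex}")
```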
Technical SEO | WeAreDigital_BE
-
Homepage not indexed - seems to defy explanation
Hey folks, hoping to get some more eyes on a specific problem I am seeing with a client's site. Site: http://www.ukjuicers.com. We have checked everything we can think of and the usual suspects are not present: the canonical URL is in place; the site is shown as indexed in Search Console; no crawl, DNS, connectivity or server errors; no robots.txt blocking (verified in Search Console); no robots meta tags or directives; Fetch as Google works; Fetch & Render works; a site: command returns all other pages; an info: command does not return the homepage; the homepage is cached and the cache has been updated since this issue started (http://webcache.googleusercontent.com/search?q=cache:www.ukjuicers.com); the homepage is indexed in Yahoo and Bing; and all variations (.co.uk, .com, www, sans www etc.) redirect to the www.ukjuicers.com domain. The only issue I found after some extensive digging was that the HTTP and HTTPS versions of the site were both available and both specified the canonical version as themselves: the HTTP site used canonicals with HTTP and the HTTPS site used canonicals with HTTPS. So, a conflict there, with the canonical exacerbating the problem it is there to solve. The HTTPS site is not indexed, though, and we have set it up in Webmaster Tools, and the web developer has now set redirects to ensure all versions, even HTTPS, 301 redirect to the http://www.ukjuicers.com page, so these canonical issues have been ironed out. But... it's still not indexing the homepage. The practical implications of this are quite scary: the site used to be somewhere between 1st and 4th for keywords like 'juicers', 'juicer' etc. Now they are at the bottom of page 1 or the top of page 2 with an internal page. They were jostling with the big boys (Amazon, Argos, John Lewis etc.) but now they are right at the bottom of the second page. It's a strange one; I have seen all manner of technical problems over the years but this one seems to defy sensible explanation. The next step is to do a full technical SEO audit of the site, but I am always of the opinion that with many eyes all bugs are shallow, so if anyone has any input or experience with odd indexation problems like this I would love to hear it. Cheers
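For anyone wanting to reproduce the checks above quickly, here is a rough Python sketch that fetches http://www.ukjuicers.com/ and reports the basics in one go (status code and final URL, meta robots, canonical tag, and whether robots.txt allows the homepage). It only covers the usual suspects already listed, nothing Google-internal:

```python
# Rough sketch: one-shot indexability checks for the homepage discussed above.
import re
import urllib.request
import urllib.robotparser

HOME = "http://www.ukjuicers.com/"

# 1. Status code and the final URL after any redirects.
resp = urllib.request.urlopen(HOME)
print("status:", resp.status, "| final URL:", resp.geturl())
html = resp.read().decode("utf-8", errors="ignore")

# 2. Meta robots and canonical tags in the served HTML.
robots_meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.I)
canonical = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]*>', html, re.I)
print("meta robots:", robots_meta.group(0) if robots_meta else "none")
print("canonical:", canonical.group(0) if canonical else "none")

# 3. Does robots.txt allow Googlebot to crawl the homepage?
rp = urllib.robotparser.RobotFileParser()
rp.set_url(HOME + "robots.txt")
rp.read()
print("homepage crawlable per robots.txt:", rp.can_fetch("Googlebot", HOME))
```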
Technical SEO | Marcus_Miller
-
Why is seomoz.org still in the Google index?
I searched in Google for the number of URLs still indexed on the seomoz.org domain since it changed to moz.com, and I am surprised that after all this time more than 15,000 URLs are still indexed: https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=site%3Aseomoz.org%20inurl%3Aseomoz.org If I click on any of the results it is redirected (301) to the new domain, so that is working, but Google still keeps these URLs in the index. What could be the reason? Won't this cause duplicate content issues on moz.com?
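A quick Python sketch of the redirect spot-check described above, using http.client so redirects are not followed and the raw status and Location header are visible (the paths are placeholders; use paths from the site:seomoz.org results):

```python
# Sketch: confirm old seomoz.org URLs return a 301 pointing at moz.com.
# http.client never follows redirects, so the raw response is visible.
import http.client

PATHS = ["/blog", "/learn-seo"]  # placeholder paths from the site: query

for path in PATHS:
    conn = http.client.HTTPSConnection("seomoz.org", timeout=10)
    conn.request("HEAD", path)
    resp = conn.getresponse()
    print(path, "->", resp.status, resp.getheader("Location"))
    conn.close()
```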
Technical SEO | Yosef
-
Pages Indexed Not Changing
I have several sites that I do SEO for that are having a common problem. I have submitted XML sitemaps to Google for each site, and as new pages are added to a site, they are added to its XML sitemap. To make sure new pages are being indexed, I check the number of pages that have been indexed vs. the number of pages submitted by the XML sitemap every week. For weeks now, the number of pages submitted has increased, but the number of pages actually indexed has not changed. I have done searches on Google for the new pages and they are always added to the index, but the number of indexed pages is still not changing. My initial thought was that as new pages are added to the index, old ones are being dropped, but I can't find evidence of that, or understand why that would be the case. Any ideas on why this is happening? Or am I worrying about something that I shouldn't even be concerned with, since new pages are being indexed?
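One way to keep an eye on this from outside Search Console is a small Python sketch like the one below: it counts the URLs submitted in the XML sitemap and lists the most recently changed entries (by lastmod) so you can spot-check exactly those URLs in Google. The sitemap location is a placeholder and it assumes a plain urlset rather than a sitemap index:

```python
# Sketch: count sitemap URLs and list the newest entries for manual spot-checks.
# Assumes a plain <urlset> sitemap at the placeholder URL below.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP_URL) as resp:
    root = ET.fromstring(resp.read())

entries = []
for url_el in root.findall("sm:url", NS):
    loc = url_el.findtext("sm:loc", default="", namespaces=NS).strip()
    lastmod = url_el.findtext("sm:lastmod", default="", namespaces=NS)
    entries.append((lastmod, loc))

print("URLs submitted in the sitemap:", len(entries))
print("Most recently modified entries:")
for lastmod, loc in sorted(entries, reverse=True)[:10]:
    print(" ", lastmod or "no lastmod", loc)
```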
Technical SEO | ang
-
Getting a Vanity (Clean) URL indexed
Hello, I have a vanity (clean looking) URL that 302 redirects to the ugly version. So in other words http://www.site.com/url 302 >>> http://www.site.com/directory/directory/url.aspx What I'm trying to do is get the clean version to show up in search. However, for some reason Google only indexes the ugly version. cache:http://www.site.com/directory/directory/url.aspx is showing the ugly URL as cached and cache:http://www.site.com/url is showing not cached at all. Is there some way to force Google to index the clean version? Fetch as Google for the clean URL only returns a redirect status and canonicalizing the ugly to the clean would seem to send a strange message because of the redirect back to the ugly. Any help would be appreciated. Thank you,
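A tiny Python sketch (using the requests library) of what a crawler sees when it fetches the clean URL with the setup described above; a 302 signals a temporary move, which is consistent with the ugly target being the version that stays indexed. The URLs are the placeholders from the question:

```python
# Sketch: show the raw redirect the vanity URL returns, without following it.
# A 302 signals a temporary move, so search engines tend to keep indexing the
# redirect target rather than the clean URL. Placeholder URLs from the question.
import requests

clean_url = "http://www.site.com/url"

r = requests.get(clean_url, allow_redirects=False, timeout=10)
print(clean_url, "->", r.status_code, r.headers.get("Location"))
# Expected with the current setup:
# http://www.site.com/url -> 302 http://www.site.com/directory/directory/url.aspx
```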
Technical SEO | Digi1234
-
Sitemap: do entries get cleared when it's a 404?
Hi, regarding sitemaps: do entries get cleared when a URL returns a 404? We have a Drupal site and a sitemap with 60K links, and since we have deleted hundreds of links over these 4 years, I want to know whether they are automatically cleared from the sitemap or whether we need to build the sitemap again. Thanks
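Here is a rough Python sketch that walks the existing sitemap and flags entries that now return a 404, so you can see how much stale material is still listed before deciding whether to rebuild it. The sitemap location is a placeholder, it assumes a plain urlset, and for a 60K-URL sitemap you would want to throttle or parallelise the requests:

```python
# Sketch: flag sitemap entries that now return 404.
# Placeholder sitemap URL; for 60K entries, add throttling/concurrency.
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP_URL) as resp:
    root = ET.fromstring(resp.read())

dead = []
for loc in root.findall(".//sm:loc", NS):
    url = loc.text.strip()
    try:
        urllib.request.urlopen(urllib.request.Request(url, method="HEAD"), timeout=10)
    except urllib.error.HTTPError as e:
        if e.code == 404:
            dead.append(url)

print(f"{len(dead)} sitemap entries currently return 404:")
for url in dead:
    print(" ", url)
```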
Technical SEO | mtthompsons
-
Problem with indexing
Hello, we changed our CMS recently and everything seems to work well, but for some reason Google and other crawlers can't see or index any pages other than the main page. There is no restriction in robots.txt, nor any other visible issue. Please help if you can. Website: http://www.design-glassware.com/
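One quick thing worth checking with a new CMS is whether a plain HTML fetch of the homepage exposes any internal links at all; if navigation is rendered only with JavaScript, crawlers may simply not be discovering the other pages. A minimal Python sketch against the URL from the question:

```python
# Sketch: list the internal links visible in the raw homepage HTML.
# If this comes back empty or missing whole sections, the CMS is probably not
# exposing crawlable links (e.g. JavaScript-only navigation).
import re
import urllib.request
from urllib.parse import urljoin, urlparse

HOME = "http://www.design-glassware.com/"

with urllib.request.urlopen(HOME) as resp:
    html = resp.read().decode("utf-8", errors="ignore")

internal = set()
for href in re.findall(r'href=["\']([^"\']+)["\']', html):
    absolute = urljoin(HOME, href)
    if urlparse(absolute).netloc == urlparse(HOME).netloc:
        internal.add(absolute)

print(f"{len(internal)} internal links found in the raw homepage HTML:")
for url in sorted(internal):
    print(" ", url)
```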
Technical SEO | divan
-
Why is Googlebot indexing one page, not the other?
Why does Googlebot index one page but not another under the same conditions, in an HTML sitemap for example? We have 6 new pages with unique content. Googlebot immediately indexed only 2 of the pages, and then after some time the remaining 4. On what parameters does the crawler decide whether or not to scan a given page?
Technical SEO | ATCnik