Is use of javascript to simplify information architecture considered cloaking?
-
We are considering using JavaScript to format URLs so as to simplify Googlebot's navigation through our site, whilst presenting a larger number of links to the user to ensure content is accessible and easy to navigate from all parts of the site. In other words, the user will see all internal links, but the search engine will see only those links that form our information hierarchy.
We would therefore be showing the search engine different content from the user only insofar as the search engine would see a more hierarchical information architecture, by virtue of the fact that fewer links would be visible to it, ensuring that our content is well structured and discoverable.
Would this be considered cloaking by Google, and would we be penalised?
-
Pagination is just links. Google can follow the links.
How you set up and offer your pages is important, especially for areas with a lot of pages.
If you have 40 pages of content, then I would recommend a structure that offers page links something like "1, 2, 3 ... 20 ... 40". If you don't offer a middle selection, that content will probably never be seen.
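A rough sketch of that "1, 2, 3 ... 20 ... 40" pattern, in case it helps: the idea is to link the first few pages, a midpoint, and the last page, so deep pages stay only a click or two from the first page. The function name and cut-offs here are illustrative, not a prescription.

```javascript
// Sketch of the pagination pattern described above: leading pages,
// a middle jump point, and the final page. Values are illustrative.
function paginationLabels(totalPages, leading = 3) {
  const labels = [];
  // Always show the first few pages.
  for (let i = 1; i <= Math.min(leading, totalPages); i++) labels.push(i);
  if (totalPages > leading) {
    // Offer a jump into the middle of the set when there is one.
    const middle = Math.round(totalPages / 2);
    if (middle > leading && middle < totalPages) labels.push('...', middle);
    // Always offer the last page.
    labels.push('...', totalPages);
  }
  return labels;
}

console.log(paginationLabels(40)); // → [ 1, 2, 3, '...', 20, '...', 40 ]
```

Each numeric label would then be rendered as a plain, crawlable link to that page.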
-
Does Googlebot follow pagination of search results? All our product pages are on the third tier, and their discovery would rely on Google following pagination if we cannot use our original approach to information architecture (i.e. using JavaScript to channel Googlebot to discover our tier-3 pages).
Thanks for your help!
-
Search engines will determine how deep to crawl a site based on its importance. You can use the Domain Authority and Page Authority metrics to gauge this factor.
In general, you want your content to be a maximum of 3 clicks from your landing page. If you have buried your content deeper, consider either flattening out your architecture or adding links to the buried content. It is also very helpful to build external links to the deeper content, which will help search engines discover those pages.
-
Ryan is right... you shouldn't do this. If you want to help the crawlers find their way through your site, you could submit a sitemap instead.
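For what it's worth, an XML sitemap (per the sitemaps.org protocol) lists your URLs directly for crawlers, so tier-3 pages can be discovered regardless of how many on-page links point at them. Here's a minimal sketch of generating one; the URLs are placeholders, and a real sitemap would usually also carry `<lastmod>` and be generated by your CMS or a plugin.

```javascript
// Minimal sketch: build an XML sitemap string from a list of URLs,
// following the sitemaps.org urlset/url/loc structure.
function buildSitemap(urls) {
  const entries = urls
    .map((u) => `  <url>\n    <loc>${u}</loc>\n  </url>`)
    .join('\n');
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    entries +
    '\n</urlset>'
  );
}

// Placeholder URLs for illustration only.
console.log(
  buildSitemap(['https://example.com/', 'https://example.com/tier3/product-1'])
);
```

You'd then submit the resulting file in Google Webmaster Tools.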
-
Hi Ryan
We use a navigation bar in the header, which means there are a large number of on-page links and no clear way to determine our information architecture from our internal link structure; i.e. many pages at different levels of our information architecture can be accessed from every page on the site.
Is this an issue? Or will the URL structure be sufficient for the search engines to categorise our content? How can we help the search engine discover content at level 3 in our hierarchy if we insist on using a navigation bar in the header which we believe gives a good user experience?
Thanks!!
-
I have to agree with Ryan. Yes, it's cloaking... and if you get caught, you could, and most likely would, be penalized.
-
The actions you are describing constitute cloaking and would be penalized.
If that process were allowed, it would be severely abused. Sites would remove links to less desirable pages, such as their privacy policy. Sites might also add links.
Search engines insist upon seeing the same content that a user would see.