Bing Indexation and handling of X-Robots-Tag or AngularJS
-
Hi Moz Community,
I have been tearing my hair out trying to figure out why Bing won't index a test site we're running.
We're in the midst of upgrading one of our sites from archaic technology and infrastructure to a fully responsive version.
This new site is fully AngularJS driven. There are currently over 2 million pages, and as we develop the new site in the backend, we would like to test out the tech with Google and Bing. We're looking at a pre-render option to create static HTML snapshots of the pages we care about most, which will be available in the sitemap.xml.gz.
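To illustrate the kind of pre-render step we're evaluating, here is a rough sketch; the headless browser (Puppeteer) and the URL are stand-ins for whatever pre-render service we end up using, not our actual stack.

```typescript
// Rough sketch: render an AngularJS route in a headless browser and save the
// resulting DOM as a static HTML snapshot that can be served to crawlers.
// Puppeteer and the example URL are illustrative stand-ins only.
import puppeteer from "puppeteer";
import { writeFile } from "fs/promises";

async function snapshot(url: string, outFile: string): Promise<void> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Wait for the network to go idle so Angular has finished rendering.
    await page.goto(url, { waitUntil: "networkidle0" });
    const html = await page.content(); // fully rendered markup
    await writeFile(outFile, html);
  } finally {
    await browser.close();
  }
}

snapshot("http://example.com/listing/some-business", "snapshots/some-business.html")
  .catch(console.error);
```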
We established three completely static HTML control pages: one with no robots meta tag, one with a robots NOINDEX meta tag in the head section, and a third served with a dynamic X-Robots-Tag HTTP header carrying the NOINDEX directive. We expected the one without the meta tag to get indexed, at the very least, along with the homepage of the test site.
In addition to those three control pages, we had three more: an internal search results page with the dynamic NOINDEX header, a listing page with no such header, and the homepage, also with no such header.
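For clarity, the two NOINDEX variants differ only in where the directive lives: in the markup, or in the HTTP response. Roughly (an Express-style sketch for illustration, not our actual backend):

```typescript
// Sketch of the two NOINDEX variants used on the control pages.
// This is an illustrative Express handler, not our real server code.
import express from "express";

const app = express();

// Variant 1: NOINDEX via a robots meta tag in the <head> of the page itself.
app.get("/control/meta-noindex", (_req, res) => {
  res.send(`<!doctype html>
<html><head>
  <meta name="robots" content="noindex">
  <title>Meta tag control page</title>
</head><body>Blocked by a robots meta tag.</body></html>`);
});

// Variant 2: NOINDEX via a dynamic X-Robots-Tag response header,
// with no robots meta tag in the markup at all.
app.get("/control/header-noindex", (_req, res) => {
  res.set("X-Robots-Tag", "noindex");
  res.send("<!doctype html><html><head><title>Header control page</title></head>" +
    "<body>Blocked by an X-Robots-Tag header.</body></html>");
});

app.listen(3000);
```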
With Google, the correct indexation occurred, with only three pages being indexed: the homepage, the listing page and the control page without the meta tag.
However, with Bing, there's nothing. No page indexed at all. Not even the flat static HTML page without any robots directive.
I have a valid sitemap.xml file and a robots.txt file open to all engines across all pages, yet nothing.
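For reference, the robots.txt is a plain allow-all that also advertises the sitemap, along these lines (served from an Express route purely for illustration; the hostname is a placeholder):

```typescript
// Sketch: an allow-all robots.txt that also points crawlers at the sitemap.
// The hostname is a placeholder for the test domain.
import express from "express";

const app = express();

app.get("/robots.txt", (_req, res) => {
  res.type("text/plain").send(
    [
      "User-agent: *",
      "Disallow:", // an empty Disallow means nothing is blocked
      "",
      "Sitemap: http://example.com/sitemap.xml.gz",
    ].join("\n")
  );
});

app.listen(3000);
```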
I used the Fetch as Bingbot tool, the SEO Analyzer tool and the Preview Page tool within Bing Webmaster Tools, and they all show a preview of the requested pages, including the ones with the dynamic header asking Bing not to index them.
I'm stumped. I don't know what to do next to understand whether Bing can accurately process dynamic headers or AngularJS content.
Upon checking Bing Webmaster Tools, there has definitely been crawl activity, since it marked the XML sitemap as successful and showed 4 crawled pages.
Still no result when running a site: command though. Google responded perfectly and understood exactly which pages to index and crawl.
Has anyone else used dynamic headers or AngularJS and run similar tests who might be able to chime in?
Thanks in advance for your assistance.
-
Thank you for the update, Kavit.
-
Hi Everett and Fellow Mozzers,
I have been away overseas, so I wasn't able to post an update.
Eventually, I managed to get hold of someone on the tech team at Bing, who told me that the reason they didn't index the pages was simply popularity.
It isn't enough to have unique content, design and structure on your site; it is also vital to have traffic, links and mentions as external signals.
We also got word that dynamic sites and pre-rendered content are acceptable to Bing, so we're resting easier at night these days.
Development on the site continues as per schedule and we will be launching the proper site this year on a highly authoritative domain which should yield very different results to the test we put together.
Hopefully, this will help someone else who is on a similar pathway.
Everett, I would like to thank you again for taking the time to read, reply and help us with our analysis.
Thanks!
-
Hi Everett,
Thank you for the analysis and deeper insights.
I did make the changes to the test pages bar the design template.
We added the unique titles, meta descriptions and meta keywords.
We added completely unique content to all three pages with no other instances of this content appearing on the web at all.
The pages are now interlinked and also linked from the top of the homepage, so none of them are orphan pages.
Sitemaps have been updated and resubmitted.
The latest version has been out a week so far, but no response from Bing as yet.
Thanks,
Kavit.
-
Hello Kavit,
I would suggest putting unique Title tags, meta descriptions and content on those pages. They are very thin as it is, and all of the content is boilerplate.
There are 57,100,000 results on Bing for: "Search for an Australian Business, Government Department or Person", which is the content on the home page you shared.
There are 60,600 results on Bing for: "There was a table set out under a tree in front of the house, and the March Hare and the Hatter were having tea at it", which is the content on this page: http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/no-metatag.html.
And so on. I can see why Bing wouldn't want to add yet another thin, duplicate, orphan page to their index. My advice would be to build out those test pages with a design template and to put original content, title tags and meta descriptions on all of them. Then repeat your test.
-
Hi Everett,
Thank you for taking the time out to read and respond.
The URL we have set up for testing is: wp-seospike-weblbl.naws-sensis.com.au
We have three control pages (all flat HTML pages), plus the homepage, that we set up and put online for Bing to crawl:
http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/no-metatag.html - no robots meta tag; allowed to be crawled and indexed.
http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/metatag.html - page with a NOINDEX robots meta tag; not to be indexed.
http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/metatag-header.html - served with an X-Robots-Tag NOINDEX HTTP header.
http://wp-seospike-weblbl.naws-sensis.com.au - homepage with no robots exclusion
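To double-check what a crawler actually receives from each of these URLs, a quick script along these lines confirms the status codes and robots directives (a rough sketch, assuming Node 18+ for the global fetch):

```typescript
// Sketch: fetch each control URL and report the HTTP status, any
// X-Robots-Tag response header, and whether a robots meta tag is present
// in the returned HTML. Assumes Node 18+ (global fetch).
const urls = [
  "http://wp-seospike-weblbl.naws-sensis.com.au/",
  "http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/no-metatag.html",
  "http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/metatag.html",
  "http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/metatag-header.html",
];

async function checkRobotsDirectives(): Promise<void> {
  for (const url of urls) {
    const res = await fetch(url);
    const body = await res.text();
    const headerDirective = res.headers.get("x-robots-tag") ?? "(none)";
    const hasMetaRobots = /<meta[^>]+name=["']robots["']/i.test(body);
    console.log(
      `${url}\n  status: ${res.status}` +
      `\n  X-Robots-Tag header: ${headerDirective}` +
      `\n  robots meta tag in HTML: ${hasMetaRobots}`
    );
  }
}

checkRobotsDirectives().catch(console.error);
```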
Ideally, I expected the homepage and the no-metatag page to be indexed at least.
I am familiar with the builtvisible documentation that they've put out.
My main pain point is that even the flat HTML pages are getting ignored, so I can't even test the deeper AngularJS developed pages since my control group is not delivering results as it should.
A site: command on the above domain on Bing shows no results.
Thanks again!
-
Is there any chance of getting a URL for the domain in question?
Have you read this yet?
https://builtvisible.com/javascript-framework-seo/
What are the URLs like that you're asking Bing to index? Which is closest?
Hashbang: http://www.IWishJSFramworkWebsitesWouldGoAway/#!
Escaped fragment: http://www.IWishJSFramworkWebsitesWouldGoAway/?_escaped_fragment_=
Base URL, using Angular's $location service to construct URLs without the #! via the HTML5 History API: http://www.IWishJSFramworkWebsitesWouldGoAway/
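If it's the escaped-fragment route, the crawler-facing piece usually boils down to something like this (a sketch only; the snapshot directory and naming scheme are hypothetical):

```typescript
// Sketch: serve a pre-rendered HTML snapshot when a crawler requests the
// _escaped_fragment_ form of a hashbang URL (the old AJAX crawling scheme).
// The snapshot directory and filename mapping here are hypothetical.
import express from "express";
import path from "path";

const app = express();

app.use((req, res, next) => {
  const fragment = req.query._escaped_fragment_;
  if (typeof fragment === "string") {
    // e.g. /?_escaped_fragment_=/listing/some-business
    //  ->  snapshots/listing__some-business.html
    const name = fragment.replace(/^\//, "").replace(/\//g, "__") || "home";
    return res.sendFile(path.join(__dirname, "snapshots", `${name}.html`));
  }
  next(); // regular visitors get the AngularJS app as usual
});

app.listen(3000);
```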
I know this doesn't answer your question, but hopefully it will get the discussion started.