Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Bing Indexation and Handling of the X-Robots-Tag or AngularJS
-
Hi MozCommunity,
I have been tearing my hair out trying to figure out why Bing won't index a test site we're running.
We're in the midst of upgrading one of our sites from archaic technology and infrastructure to a fully responsive version.
This new site is fully AngularJS-driven. There are currently over 2 million pages, and as we develop the new site in the backend, we would like to test the tech with Google and Bing. We're looking at a pre-render option to create static HTML snapshots of the pages we care about most, which will be available in the sitemap.xml.gz.
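For anyone weighing the same approach, the routing decision a pre-render layer makes can be sketched as a pure function. This is a minimal illustration under my own assumptions, not any particular vendor's implementation; the crawler token list is illustrative, and the `_escaped_fragment_` check follows the (now retired) AJAX crawling scheme.

```python
# Hypothetical sketch of a pre-render layer's routing decision: serve a
# static HTML snapshot to known crawlers, and the AngularJS app to
# everyone else. The crawler token list is illustrative, not exhaustive.
CRAWLER_TOKENS = ("googlebot", "bingbot", "msnbot", "slurp")

def should_serve_snapshot(user_agent: str, query_string: str = "") -> bool:
    """Return True when the request should get a pre-rendered snapshot."""
    if "_escaped_fragment_" in query_string:
        return True  # crawler explicitly requesting the snapshot form
    ua = (user_agent or "").lower()
    return any(token in ua for token in CRAWLER_TOKENS)

print(should_serve_snapshot("Mozilla/5.0 (compatible; bingbot/2.0)"))  # True
print(should_serve_snapshot("Mozilla/5.0 (Windows NT 10.0)"))          # False
```

The real decision usually lives in the web server or a reverse proxy, keyed on the User-Agent header, but the logic is this simple at its core.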
We established three completely static HTML control pages: one with no robots meta tag, one with a robots NOINDEX meta tag in the head section, and one with the NOINDEX directive delivered dynamically via the X-Robots-Tag HTTP header. We expected the one without the meta tag to get indexed, at least alongside the homepage of the test site.
In addition to those three control pages, we had three others: an internal search results page with the dynamic NOINDEX header, a listing page with no such header, and the homepage with no such header.
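When running a test like this, it helps to confirm which channel each control page's noindex actually travels in, since the meta tag sits in the markup while X-Robots-Tag is an HTTP response header that never appears on the page. A minimal stdlib sketch (the function name and return labels are my own, and the header lookup is simplified; real HTTP header names are case-insensitive):

```python
from html.parser import HTMLParser

class RobotsMetaScanner(HTMLParser):
    """Collect noindex directives from <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = {k.lower(): (v or "") for k, v in attrs}
        if tag == "meta" and a.get("name", "").lower() == "robots" \
                and "noindex" in a.get("content", "").lower():
            self.noindex = True

def noindex_sources(headers: dict, html: str) -> set:
    """Return which channel(s) carry a noindex directive for a fetched page.

    Simplified: assumes the header dict uses the exact key "X-Robots-Tag".
    """
    sources = set()
    # The X-Robots-Tag lives in the HTTP response headers, not the markup.
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        sources.add("http-header")
    scanner = RobotsMetaScanner()
    scanner.feed(html)
    if scanner.noindex:
        sources.add("meta-tag")
    return sources
```

Feeding it each control page's response headers and body should yield exactly one of `{"http-header"}`, `{"meta-tag"}`, or an empty set, matching the three-page design above.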
With Google, the correct indexation occurred, with only three pages being indexed: the homepage, the listing page, and the control page without the meta tag.
With Bing, however, there's nothing. No page indexed at all. Not even the flat static HTML page without any robots directive.
I have a valid sitemap.xml file and a robots.txt open to all engines across all pages, yet nothing.
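For reference, a robots.txt that is open to all engines across all pages, with the compressed sitemap declared, is as small as this (the hostname is a placeholder):

```text
User-agent: *
Disallow:

Sitemap: http://www.example.com/sitemap.xml.gz
```

An empty `Disallow:` line allows everything; it is not the same as `Disallow: /`, which blocks the whole site.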
I used the Fetch as Bingbot tool, the SEO Analyzer tool, and the Preview Page tool within Bing Webmaster Tools, and they all show a preview of the requested pages, including the ones with the dynamic header asking Bing not to index them.
I'm stumped. I don't know what to do next to work out whether Bing can accurately process dynamic headers or AngularJS content.
Upon checking Bing Webmaster Tools, there has definitely been crawl activity, since it marked the XML sitemap as successful and put a 4 next to the number of crawled pages.
Still no results when running a site: command, though. Google responded perfectly and understood exactly which pages to crawl and index.
Has anyone else used dynamic headers or AngularJS who might be able to chime in, perhaps after running similar tests?
Thanks in advance for your assistance....
-
Thank you for the update Kavit.
-
Hi Everett and Fellow Mozzers,
I have been away overseas, so I wasn't able to post an update.
Eventually, I managed to get hold of someone on Bing's tech team, who told me that the reason they didn't index the pages was simply popularity.
It isn't enough to have unique content, design, and structure on your site; it is also vital to have traffic, links, and mentions as external signals.
We also got word that dynamic sites and pre-rendered content are acceptable to Bing, so we're resting easier at night these days.
Development on the site continues as per schedule and we will be launching the proper site this year on a highly authoritative domain which should yield very different results to the test we put together.
Hopefully, this will help someone else who is on a similar pathway.
Everett, I would like to thank you again for taking the time to read, reply and help us with our analysis.
Thanks!
-
Hi Everett,
Thank you for the analysis and deeper insights.
I did make the changes to the test pages bar the design template.
We added the unique titles, meta descriptions and meta keywords.
We added completely unique content to all three pages with no other instances of this content appearing on the web at all.
The pages are now interlinked and also linked from the top of the homepage, so none of them are orphan pages.
Sitemaps have been updated and resubmitted.
The latest version has been live for a week so far, but no response from Bing as yet.
Thanks,
Kavit.
-
Hello Kavit,
I would suggest putting unique Title tags, meta descriptions and content on those pages. They are very thin as it is, and all of the content is boilerplate.
There are 57,100,000 results on Bing for: "Search for an Australian Business, Government Department or Person" which is the content on the home page you shared.
There are 60,600 results on Bing for: "There was a table set out under a tree in front of the house, and the March Hare and the Hatter were having tea at it" which is the content on this page: http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/no-metatag.html
And so on. I can see why Bing wouldn't want to add yet another thin, duplicate, orphan page to their index. My advice would be to build out those test pages with a design template and to put original content, title tags and meta descriptions on all of them. Then repeat your test.
-
Hi Everett,
Thank you for taking the time out to read and respond.
The URL we have setup for testing is: wp-seospike-weblbl.naws-sensis.com.au
We have three control pages (all flat HTML) that we set up and put online for Bing to crawl:
http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/no-metatag.html - no robots meta tag; allowed to be crawled and indexed.
http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/metatag.html - robots NOINDEX meta tag in the head; should not be indexed.
http://wp-seospike-weblbl.naws-sensis.com.au/bing-seo-control/metatag-header.html - NOINDEX sent via the X-Robots-Tag HTTP header.
http://wp-seospike-weblbl.naws-sensis.com.au - homepage with no robots exclusion.
Ideally, I expected the homepage and the no-metatag page to be indexed at least.
I am familiar with the documentation that Builtvisible has put out.
My main pain point is that even the flat HTML pages are being ignored, so I can't test the deeper AngularJS-developed pages, since my control group is not delivering the results it should.
A site: command on the above domain on Bing shows no results.
Thanks again!
-
Is there any chance of getting a URL for the domain in question?
Have you read this yet?
https://builtvisible.com/javascript-framework-seo/
What are the URLs like that you're asking Bing to index? Which of these is closest?
Hashbang:
http://www.IWishJSFramworkWebsitesWouldGoAway/#!
Escaped fragment:
http://www.IWishJSFramworkWebsitesWouldGoAway/?_escaped_fragment_=
Base URL, using Angular's $location service to construct URLs without the #! via the HTML5 History API:
http://www.IWishJSFramworkWebsitesWouldGoAway/
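For what it's worth, the mapping between the hashbang form and the escaped-fragment form that crawlers requested under the old AJAX crawling scheme can be sketched in a few lines (a simplified illustration; real implementations also had to URL-encode the fragment):

```python
def to_escaped_fragment(url: str) -> str:
    """Rewrite a #! (hashbang) URL into the _escaped_fragment_ form that
    crawlers requested under the retired AJAX crawling scheme."""
    base, sep, fragment = url.partition("#!")
    if not sep:
        return url  # no hashbang: nothing to rewrite
    joiner = "&" if "?" in base else "?"
    return f"{base}{joiner}_escaped_fragment_={fragment}"

print(to_escaped_fragment("http://example.com/#!/listings/123"))
# http://example.com/?_escaped_fragment_=/listings/123
```

The server was expected to answer the rewritten URL with a static HTML snapshot, which is exactly the role a pre-render layer plays.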
I know this doesn't answer your question, but hopefully it will get the discussion started.