Why did Google index a weird version of my blog post?
-
I wrote a page: https://domain.com/how-to-do-xyz/
But when doing an inurl: search, I see that it is indexed by Google as https://secureservercdn.net/58584.883848.9834983/myftpupload/how-to-do-xyz/ (not the actual URL).
And when I view that page, it is a weirdly formatted version of the post with many design elements missing.
This is a WordPress site. Why would this be?
Thanks,
Ryan
-
Thanks, Paul. It is just this page that has the issue. It is also a brand-new site that isn't ranking yet, so hopefully it resolves itself eventually.
Thanks,
Ryan
-
That's the staging URL for GoDaddy-hosted sites, Ryan. It likely means the site got indexed by the search engines while it was being developed under the hosting account's temporary URL.
It's also possible there's a domain resolution issue on the hosting server. Are there any other pages showing this problem?
The display is broken because the page's CSS and JavaScript are being requested from the wrong URL, so they can't resolve and format the page content.
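If staging indexation is the cause, the usual remedy is to make sure WordPress builds every URL it generates from the live domain, no matter which hostname served the request. A minimal sketch, assuming a standard WordPress install and using the placeholder https://domain.com from your question:

```php
<?php
// wp-config.php (excerpt) — force WordPress to generate every URL
// (rel=canonical tags, stylesheet/script links, permalinks) from the
// live domain, even when a request arrives via the staging/CDN host.
// "https://domain.com" is the placeholder domain from the question.
define( 'WP_HOME',    'https://domain.com' );
define( 'WP_SITEURL', 'https://domain.com' );
```

With the canonical tag pointing at the live URL, Google should consolidate the staged copy onto the real page over its next few crawls; a 301 redirect from the temporary hostname to the live domain (set up in the hosting control panel) will speed that along.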
Paul
Related Questions
-
Bing Indexation and handling of X-Robots-Tag or AngularJS
Hi Moz Community, I have been tearing my hair out trying to figure out why Bing won't index a test site we're running. We're in the midst of upgrading one of our sites from archaic technology and infrastructure to a fully responsive version. The new site is fully AngularJS-driven. There are currently over 2 million pages, and as we develop the new site in the backend, we would like to test the tech with Google and Bing. We're looking at a pre-render option to create static HTML snapshots of the pages we care about most, which will be available in the sitemap.xml.gz.
We established three completely static HTML control pages: one with no robots meta tag, one with a robots NOINDEX meta tag in the head section, and one with a dynamic X-Robots-Tag header carrying the same NOINDEX directive. We expected the one without the meta tag to at least get indexed, along with the homepage of the test site. In addition to those three control pages, we had an internal search results page with the dynamic NOINDEX header, a listing page with no such header, and the homepage with no such header.
With Google, the correct indexation occurred: only three pages were indexed — the homepage, the listing page, and the control page without the meta tag. With Bing, however, there's nothing. No page indexed at all — not even the flat static HTML page without any robots directive. I have a valid sitemap.xml file and a robots.txt directive open to all engines across all pages, yet nothing. I used the Fetch as Bingbot tool, the SEO Analyzer tool, and the Page Preview tool within Bing Webmaster Tools, and they all show a preview of the requested pages, including the ones with the dynamic header asking Bing not to index them.
I'm stumped. I don't know what to do next to understand whether Bing can accurately process dynamic headers or AngularJS content. Upon checking BWT, there has definitely been crawl activity, since it marked the XML sitemap as successful and put a 4 next to the number of crawled pages. Still no result when running a site: command, though. Google responded perfectly and understood exactly which pages to index and crawl. Has anyone else used dynamic headers or AngularJS and been able to run similar tests? Thanks in advance for your assistance.
Web Design | | AU-SEO
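For reference, the two NOINDEX mechanisms this question describes look like this in practice — a generic PHP sketch of a hypothetical test page, not the poster's actual setup (a real test would use one variant per page, and a control page would omit both):

```php
<?php
// Variant 1: dynamic X-Robots-Tag header — must be sent before any output.
// Crawlers that honour the header will drop this URL from their index.
header( 'X-Robots-Tag: noindex' );
?>
<!DOCTYPE html>
<html>
<head>
  <!-- Variant 2: the equivalent robots meta tag, placed in the <head>. -->
  <meta name="robots" content="noindex">
  <title>Indexation test page</title>
</head>
<body>Test content</body>
</html>
```

-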
Facebook is now only allowing owners of FB pages (not admins) to create keys for WP blog post syndication. Is there a way around this?
I hired a contractor to configure a WP plugin to syndicate FB, G+, Twitter, and standard WP posts. He is using NextScripts: Social Networks Auto-Poster. He came back to me saying that FB is now only allowing direct owners (not admins) of FB pages to create keys. This means I have to give my client's personal FB access to a third-party contractor. I'm not comfortable asking my client to do this. Does anybody know of a way around this? Is there a way to create a FB key with just admin access? Thanks
Web Design | | RosemaryB
-
Recovering organic traffic and Google rankings post-site-crash
Hi everyone, we had a client's WordPress website go down about two weeks ago, and since then organic traffic has basically plummeted. We haven't identified exactly what caused the crash, but it happened twice in one week. We had spent a lot of time optimizing the site for organic SEO: improving load times, user experience, website content, and CTR. Then one morning we got a notification from our uptime monitoring service that the site was down, and upon further inspection we believe it may have been compromised. The child theme the website was using had all of its files deleted and/or blanked. We reverted the website to a previous backup, which fixed the problem. Then, a few days later, the same exact thing happened, only this time the child theme files were missing after the backup was restored.
We've since re-installed and reconfigured the child theme, changed all passwords (WordPress, FTP, hosting, etc.), and we're looking into changing hosting providers in the very near future. The site uses the Yoast WordPress SEO plugin, which has recently been reported as having some security flaws. Maybe that was the cause of the problem. Regardless, the primary focus right now is to recover the organic traffic and Google rankings that we worked so hard to improve over the past few months before this disaster occurred. The client is in a very competitive niche and market, so I'm pretty frustrated that this has happened after we were making such great progress.
Since the website went down, organic search traffic has decreased by 50%. The site and all internal pages are loading properly again (and have been since the second crash), but Google Webmaster Tools is still reporting a number of pages as "not found", with crawl dates as recent as this past weekend. We've marked all errors as "fixed" and re-submitted the sitemaps in Google Webmaster Tools. The website passes the mobile-friendly tests, received A and B grades in GTmetrix (for whatever that's worth), and still has the same Google Maps rankings as before. The organic traffic and organic rankings on Google, however, have seen a pretty dramatic decrease. Does anyone have any recommendations for recovering a website's authority and organic traffic after it has experienced downtime like this?
Web Design | | georgetsn
-
/index.php/ What is its purpose and does it hurt SEO?
Hello Moz Forum, I am still in the process of cleaning up the lack of attention to detail and betrayal left by our soon-to-be ex-SEO company. You can see a previous question I asked regarding betrayal SEO. I am analyzing every page on our website, and I am noticing this /index.php/ in most of our URLs. We want to leave our ExpressionEngine CMS and convert to WordPress. I have been reading about index.php, but most of it is over my head for now.
What does concern me are the layman's findings I am seeing through analytics. Our main domain has two URLs: one that ends in .com and the other that ends in .com/index.php/. The one that ends in .com has a higher page rank than the latter. And there are other internal pages with the same two variations.
Can someone please explain to me what /index.php/ is? What are its benefits? What are the cons? What will happen to my site once we move to WordPress? As always, your comments and suggestions are greatly appreciated.
Web Design | | CamiloSC
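For what it's worth, the usual fix for duplicate /index.php/ URLs is a 301 redirect to the clean version, so only one copy can be indexed. A minimal sketch — a hypothetical snippet with example.com standing in for the real domain; ExpressionEngine and WordPress each also have built-in ways to do this:

```php
<?php
// Hypothetical snippet: collapse duplicate URLs by 301-redirecting any
// request containing "/index.php/" to its clean equivalent.
$uri = $_SERVER['REQUEST_URI'];
if ( strpos( $uri, '/index.php/' ) !== false ) {
    $clean = str_replace( '/index.php/', '/', $uri );
    header( 'Location: https://example.com' . $clean, true, 301 );
    exit;
}
```

-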
Any way of showing missed sales in Google Analytics?
Sit down, this might get a little complicated... I was approached by a design company to do some SEO work for a client of theirs. Usually this stuff is white label, but I have direct contact with the client, as the design agency felt it was easier for me to do this. The website is performing really well, and looking at the sales funnel, I'm getting people wanting to buy. BUT, and here's the problem, everything falls apart because of the way the checkout works. It's appalling. The customer has to register to buy a product; there's no guest checkout or anything. The checkout button is right below the fold, and you'd miss it completely if you didn't actually search for it. Basically, it's losing the client money. Last month alone there were ~300 people entering the conversion funnel, and NONE of them completed it.
I've been talking with the design company, and they're basically saying that it's too much work for them to change, it's a signed-off project, blah blah. UI reports have been conducted and sent to them, but still nothing. I have the client asking (a great client, obviously wondering why there is a lack of return on his investment) why he isn't making money. He's asking me because I'm the guy that's meant to be getting him the cash back. I keep telling the design agency about the problems and that the site is never going to make money as is. The potential is massive. But that's my problem.
Is there ANY way in GA to calculate the missed sales? I know that I can view the total amount made when a customer successfully checks out, but I need figures to show that I'm leading the horse to water and the checkout system is preventing it from drinking.
tl;dr I need to show the client/design agency missed sales due to a poorly built checkout system. Cheers!
Web Design | | jasonwdexter
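GA won't report a "missed sales" number directly, but a back-of-envelope estimate built from the funnel data usually makes the case. A minimal sketch where every input is a placeholder assumption, to be replaced with the real figures from GA's funnel visualization and e-commerce reports:

```php
<?php
// Rough estimate of revenue lost to checkout abandonment.
// All three inputs are placeholder assumptions — swap in the real
// numbers from GA's funnel and e-commerce reports.
$funnel_entrants = 300;   // visitors who entered the checkout funnel last month
$expected_cvr    = 0.02;  // conversion rate a working checkout might plausibly achieve
$average_order   = 80.00; // average order value

$missed_revenue = $funnel_entrants * $expected_cvr * $average_order;
printf( "Estimated missed revenue: %.2f per month\n", $missed_revenue );
```

-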
Does Google take email server IP blacklists into account?
This is just a hypothetical, but would Google use information from email server blacklists to determine the quality of a website? The reason I ask is that we're planning to code an e-mail queuing system into our next CMS, and we would put SPF and DKIM in place. We wouldn't be sending any bulk e-mails (we use Constant Contact for this), but we might be sending personalised follow-up e-mails, unpaid-order e-mails, and that sort of thing.
There's no reason to think we'll be blacklisted, but from experience I know that these e-mail blacklist directories quite often give false positives when an e-mail server is incorrectly configured. So the risk is that we might get blacklisted by mistake when we start using this new feature. Would Google take this into account as part of the algorithm? And if so, would the damage be permanent? (i.e., does getting removed from the blacklist mean Google will stop thinking we're a low-quality / spammy site?)
Web Design | | OptiBacUK
-
Next Google Index..?
Hi guys, does anybody have a rough idea when the next Google index update is due, and is there any way to tell approximately when these are due to happen? How would I know? Thanks in advance, Craig Fenton IT
Web Design | | craigyboy