New Web Page Not Indexed
-
Quick question with probably a straightforward answer...
We created a new page on our site four days ago. It's actually a mini-site page, though I don't think that makes a difference...
To date, the page is not indexed, and when I use 'Fetch as Google' in WT I get a 'Not Found' fetch status...
I have also used 'Submit URL' in WT, which seemed to work OK...
We have even resorted to 'pinging' using Pingler and Ping-O-Matic, though we have done this cautiously!
I know social media is probably the answer but we have been trying to hold back on that tactic as the page relates to a product that hasn't quite launched yet and we do not want to cause any issues with the vendor!
That said, I think we might have to look at sharing the page socially unless anyone has any other ideas?
Many thanks
Andy
-
Yep, I guess they probably wish they could change it! Have a look at their corporate website and you'll see that they just drop the ampersand from their domain name.
-
Thanks Doug, we had a suspicion it might be the &, which is a bit of a problem considering who the vendor is!
-
Screaming Frog cannot crawl the page (404 Not Found), so I guess I need to look into this a little deeper...
Thanks for your responses!
-
Thanks Andy, I got the PM. Looking at the URL, I would suggest that your problems are likely to be the result of the & character being used in the URL.
From: http://www.faqs.org/rfcs/rfc1738.html
"...only alphanumerics, the special characters "$-_.+!*'(),", and reserved characters used for their reserved purposes may be used unencoded within a URL."
The & is one such reserved character and has a special meaning in URLs. You'll see it being used to separate URL parameters.
Hope this helps!
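To see the effect Doug describes, here's a minimal Python sketch using the standard library; "brand & co" is a made-up stand-in for the actual name, since the vendor hasn't been named here:

```python
from urllib.parse import quote, parse_qs

# '&' is a reserved character: left unencoded it is read as a
# parameter separator, so part of the value silently disappears.
print(parse_qs("q=brand&co"))   # {'q': ['brand']} -- 'co' is lost

# Percent-encoding '&' as %26 keeps the value intact.
print(quote("brand & co"))      # brand%20%26%20co

# With the encoded form, the full value survives parsing.
print(parse_qs("q=" + quote("brand&co")))  # {'q': ['brand&co']}
```

The same applies to a path segment: an unencoded `&` may survive in some servers' routing, but many stacks (and tools) choke on it, which fits the 404 symptoms described above.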
-
Have you looked into your robots.txt, or made sure the page isn't noindexed? Fetch as Google should work, so if it can't access the page it may be a deeper issue. Check that the page isn't behind anything that may prevent robots from crawling it. You can also try Screaming Frog to see whether a crawler can in fact crawl the page.
Once you know that bots can crawl the page, you can work out why from there.
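If it helps, the robots.txt part of that check can be scripted with Python's standard library. The robots.txt content and URLs below are invented for illustration; point it at the site's real file in practice:

```python
from urllib import robotparser

# A hypothetical robots.txt -- substitute the site's real one.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Would a crawler be allowed to fetch these paths?
print(rp.can_fetch("Googlebot", "http://example.com/new-page"))    # True
print(rp.can_fetch("Googlebot", "http://example.com/private/p1"))  # False
```

Note that robots.txt only governs crawling; a `noindex` robots meta tag on the page itself has to be checked separately by looking at the page's HTML.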
-
Go here : https://www.google.com/webmasters/tools/submit-url?pli=1
Submit your URL and it will get indexed within a few seconds
-
Thanks Doug, sending PM...
-
Andy, it's tough to say without looking at the url/page in question. Can you tell us what the page is? If you're not happy to do so in public, then feel free to send me a private message with the URL and I'll take a look.
Have you tried something like http://web-sniffer.net/ to check the page and the response code that's being returned?
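If web-sniffer isn't handy, the response-code check is easy to script. Here's a minimal Python sketch; it spins up a throwaway local server so the example is self-contained, and the `/live` and `/missing` paths are made up for the demo (point `fetch_status` at the real host and path in practice):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def fetch_status(host: str, port: int, path: str) -> int:
    """Return the HTTP status code for a HEAD request."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("HEAD", path)
        return conn.getresponse().status
    finally:
        conn.close()

# Throwaway local server: /live answers 200, everything else 404.
class Handler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200 if self.path == "/live" else 404)
        self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

print(fetch_status("127.0.0.1", port, "/live"))     # 200
print(fetch_status("127.0.0.1", port, "/missing"))  # 404
```

A 404 from a direct HEAD/GET like this (rather than only from Fetch as Google) would confirm the problem is the server's response, not Google's crawler.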
Related Questions
-
Insane traffic loss and indexed pages after June Core Update, what can I do to bring it back?
Hello everybody! After the June Core Update was released, we saw an insane drop in traffic/revenue and indexed pages in GSC (image attached below). The biggest problem: the pages that dropped out of the index were shown as "Blocked by robots.txt", and when we run the "fetch as Google" tool it says "Crawl Anomaly", even though our robots.txt is completely clean (without any disallow or noindex rules). So I strongly believe this error pattern is caused by the June Core Update. I've come up with some solutions, but none of them seems to work:
1. Add hreflang on the domain. We have other sites in other countries, and ours seems to be the only one without this tag. The June update was primarily made to limit results to two per domain in the SERPs (or more if Google thinks it's relevant). Maybe other sites have "taken our spot" in the SERPs; our domain is considerably newer than the other countries' domains.
2. Manually index all the important pages that were lost. The idea was to renew the content on each page (title, meta description, paragraphs and so on) and use the manual GSC index tool. But none of that seems to work either; all it says is "Crawl Anomaly".
3. Create a new domain. If nothing works, this should. We would look for a new domain name and treat it as a whole new site. (Frankly, there should be some other way out; this is for an EXTREME case, if nobody can help us.)
I'm open to ideas, and as the days go by, our organic revenue and traffic doesn't seem to be coming back. I'm desperate for a solution.
Intermediate & Advanced SEO | muriloacct
-
Site move: Redirecting and indexing dynamic pages
I have an interesting problem I would like to pick someone else's brain about. Our business has over 80 different products, each with a dedicated page (specs, gallery, copy etc.) on the main website. The main site itself is used for presentation purposes only and doesn't offer a direct path to purchase. A few years ago, to serve a specific customer segment, we created a site where customers can perform a quick purchase via one of our major strategic partners. Now we are looking to migrate this old legacy service, site and all its pages under the new umbrella (main domain/CMS).
Problem #1: Redirects/relevancy/SEO equity. Ideally, we could simply perform 1:1 301 redirects from the old legacy product pages to the relevant new-site product pages. The problem is that the call to action (buy), some images and, in some cases, parts of the copy must be changed to some degree to accommodate this segment. The second problem is our dev and creative team: there are not enough resources to dedicate to creating the new pages so we can perform 1:1 301 redirects. So the potential decision is to redirect a visitor to a dynamic page URL, where the parent product page is used to apply personalization rules and a new page with dynamic content (buy button, different gallery etc.) is displayed to the user (see attached diagram). If we redirect directly to the parent URL and then apply personalization rules, the URL will stay the same, and this is what we are trying to avoid (we must indicate in the URL that the user is on a purchase path; otherwise this redirect, and the page where the user lands, can be seen as deceptive). The dynamic pages will have static URLs and a unique page title tag and meta description.
Problem #2: Indexation/canonicalization. The dynamic page is canonicalized to the parent page and does have nearly identical content/look and feel, but both serve a different purpose and we want both indexed in search.
Hope my explanation is clear and someone can chip in. Any input is greatly appreciated!
Intermediate & Advanced SEO | bgvsiteadmin
-
Noindex, nofollow instead of rel canonical on product pages
Hi all, we handle our product pages with rel canonical now. We have one URL that is indexed, http://www.prams.net/cam-combi-family; the other colours have different URLs, like http://www.prams.net/cam-combi-family-3-in-1-pram-reversible-seat-car-seat-grey-d, which canonicalize to the indexed page. Google still crawls all those pages. For crawl budget reasons, we want to use "noindex, nofollow" instead on these pages (the pages for the other colours). Google would then crawl fewer pages more often? Does this make sense? Are there any downsides to doing it? Thanks in advance, Dieter
Intermediate & Advanced SEO | Storesco
-
22 Pages, 7 Indexed
So I submitted my sitemap to Google twice this week. The first time everything was just peachy, but when I went back to do it again, Google only indexed 7 out of 22. The website is www.theinboundspot.com. My Moz campaign shows no issues and Google Webmaster shows none. Should I just resubmit it?
Intermediate & Advanced SEO | theinboundspot
-
Date of page first indexed, or age of a page?
Hi, does anyone know any ways or tools to find out when a page was first indexed/cached by Google? I remember a while back, around 2009, I had a Firefox plugin which could check this and gave you an exact date. Maybe this has changed since; I don't remember the plugin. Or any recommendations on finding the age of a page (not the domain)? This is for competitor research, not my own website. Cheers, Paul
Intermediate & Advanced SEO | MBASydney
-
Links from non-indexed pages
Whilst looking for link opportunities, I have noticed that the website has a few profiles from suppliers or accredited organisations. However, a search form is required to access these pages, and when I type cache:"webpage.com" the page shows up as non-indexed. These are good websites, not spammy directory sites, but is it worth trying to get Google to index the pages? If so, what is the best method to use?
Intermediate & Advanced SEO | maxweb
-
A challenge! What off-page strategies would you employ first when ranking a brand new small business website?
I'd love to hear what you guys do. I think the game has changed now in the era of Penguin. Gone are the days where you can build a few quick links and suddenly your new site is ranking fairly well. Obviously, the first steps are to get the on page SEO right - titles, good keyword use, etc. Eventually, the goal may be to build some content that will attract natural links. But, we all know that is going to take time. So, here's the scenario: You've got a new client with a website in a fairly non-competitive niche, say, a piano mover in Seattle. The site is indexed, with no external backlinks. You're happy with the current on page SEO. The site is currently ranking on page 3 for "Seattle Piano Mover". What would you do next?
Intermediate & Advanced SEO | MarieHaynes
-
Best way to stop pages being indexed and keeping PageRank
If for example on a discussion forum, what would be the best way to stop pages such as the posting page (where a user posts a topic or message) from being indexed AND not diluting PageRank too? If we added them to the Disallow on robots.txt, would pagerank still flow through the links to those blocked pages or would it stay concentrated on the linking page? Your ideas and suggestions will be greatly appreciated.
Intermediate & Advanced SEO | Peter2640