404 Errors in Search Console
-
Hi all,
We have a number of URLs with a 404 HTTP status listed in Search Console, and even after being fixed, the count does not decrease. Here is what happened:
- We launched a website with URLs like www.meusite.com/url-abc.
- We submitted a sitemap with these URLs.
- Google indexed them.
Then, for some reason, the URLs were changed four days later by a developer on my team. So:
- I had the "old", already-indexed URLs redirected to the new ones (from /url-abc to /url-xyz), each to its corresponding page.
- I submitted a sitemap with the new URLs.
- We fixed the internal links.
- And then marked the errors as fixed in Search Console.
But it does not work!
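For reference, this is roughly how we spot-check the redirects directly. A minimal sketch, assuming permanent (301) redirects; the URL pair is illustrative:

import requests

# Illustrative old-to-new pair; the real mapping covers every changed URL.
redirect_map = {
    "http://www.meusite.com/url-abc": "http://www.meusite.com/url-xyz",
}

for old_url, new_url in redirect_map.items():
    response = requests.get(old_url, timeout=10)
    # response.history holds the redirect hops; response.url is the final address.
    first_hop = response.history[0].status_code if response.history else response.status_code
    ok = first_hop == 301 and response.url == new_url
    print(old_url, "->", response.url, "OK" if ok else "CHECK")

Each old URL should show a 301 hop that lands on its mapped new URL.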
Has anyone had a similar experience?
Thanks for any advice!
-
Thanks for the answer, Bernadette Coleman.
Yes, it could be a Google issue. But the bigger problem is that the Google AdWords team is trying to run ads with some of these URLs, and some of the URLs are being disapproved by Google AdWords.
Note: the URLs being disapproved are not in the Search Console 404 sample and do not return a 404 status or redirects. They are the final URLs, returning a 200 HTTP status.
- Google Support's answer (disapproved ads):
It seems that the issue is with the URLs. When our system crawled them, they resulted in 404 violations. Hence, I request you to check Google Webmaster Tools to identify the issue. Also, I request you to speak with your development team. Once the issue has been fixed, I kindly ask you to re-submit the ads in the account, which will cause our system to re-review them.

About the 404 errors in Search Console: I had already taken the 404 URL sample from the console and crawled it with Screaming Frog:
- 246 URLs: status 404
- 303 URLs: status 200
- 459 URLs: status 301

70% of the URLs were corrected and marked as fixed in the console, but Google keeps putting them back on the list.
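For reference, the same tally can be reproduced without Screaming Frog. A minimal sketch in Python, assuming a plain text file of the sampled URLs (the file name is hypothetical); redirects are not followed, so a 301 is counted as a 301 rather than as its destination's status:

from collections import Counter

import requests

# Hypothetical input: the 404 sample exported from Search Console, one URL per line.
with open("search-console-404-sample.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

counts = Counter()
for url in urls:
    try:
        # HEAD keeps the check light; switch to requests.get if a server rejects HEAD.
        response = requests.head(url, allow_redirects=False, timeout=10)
        counts[response.status_code] += 1
    except requests.RequestException:
        counts["error"] += 1

for status, total in sorted(counts.items(), key=str):
    print(f"{total} URLs with status {status}")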
-
This could be a Google issue. It takes some time for Google to "forget" about URLs they know about, so they may continue to crawl old URLs.
If you have redirected these URLs and they are not showing a 404 error, then you shouldn't have anything to worry about. I would still mark them as fixed in Google Search Console and then see if they come back again. I would also test those URLs randomly using the Googlebot user agent.
One thing you can do, however, is crawl those URLs yourself using Screaming Frog or another similar spider tool. Set the user agent to Googlebot so that you're seeing what Google might potentially see. When you crawl, you should see the redirects. If not, you will need to look into why a 404 error is being returned.
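If you only want to spot-check a handful of URLs rather than run a full crawl, a short script can do the same job. A minimal sketch: the user-agent string below is the standard Googlebot desktop string, but the URL list is a placeholder:

import requests

# Standard Googlebot desktop user-agent string.
GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; "
    "+http://www.google.com/bot.html)"
)

# Placeholder list -- substitute a few of the redirected URLs.
urls = ["http://www.meusite.com/url-abc"]

for url in urls:
    response = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=10)
    # response.history holds each redirect hop, in order.
    for hop in response.history:
        print(hop.status_code, hop.url)
    print(response.status_code, response.url)

If a URL prints a 301 hop followed by a 200 at the new address, the redirect is in place, and lingering 404 reports are most likely just Google re-crawling its memory of the old URLs.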
Related Questions
-
Mapping ALL search data for a broad topic
Hi All,

As our company becomes a bigger and bigger entity, I'm trying to figure out how I can create more autonomy. One of the key areas that needs fixing is briefing the writers on articles based on keywords. We're not just trying to go after the low-hanging fruit or the big-money keywords, but to comprehensively cover every topic and provide genuinely good-quality, up-to-date info (surprisingly rare in a competitive niche), eventually covering pretty much every topic there is. We generally work on a three-tier system at the folder level: topics, then sub-topics.

The challenge is getting an agency to: a) be able to pull all of the data without being knowledgeable in our specific industry (we're specialists and thus target people who need specialist expertise as well as more mainstream stuff that run-of-the-mill people wouldn't know about); and b) know where it all fits topically, as we organise the content on a hierarchical basis and generally cover multiple smaller topics within articles.

Am I asking for the impossible here? It's the one area of the business I feel most nervous about creating autonomy with. Can we be as extensive and comprehensive as a wiki-type website without somebody within the business who knows it providing the keyword research? I did a search for all data using the main two seed keywords for this subject on Ahrefs, and it came up with 168,000 lines of spreadsheet data, which went way beyond the maximum I was allowed to export. Interested in feedback, and if any agencies are up for the challenge, do let me know! I've been using Moz Pro for a long time but have never posted, and I apologise if I'm explaining this badly.

Requirements:
- Keywords to cover all (broad niche) related queries in the UK; no relevant UK (broad niche) keywords will be missed.
- Organised in a way that can be interpreted as article briefs and folder-structure instructions.

Questions:
- How would you ensure you cover every single keyword?
- Assuming no specialist X knowledge, how will you be able to map content and know which search queries belong in which topics, and in what order? Also, where there is keyword leakage from other regions, how will you know which are UK terms and which aren't?
- With minimal X knowledge, how will you know whether you've missed an opportunity (what you don't know you don't know)?
- What specific resources will you require from us in order for this to work?
- What format will the data be provided to us in, and how will you present the finished work so that it can be turned into article briefs?
Intermediate & Advanced SEO | d.bird0
-
Google Search Operators Acting Strange
Hi Mozers,

I'm using search operators to get a count of how many pages have been indexed for each section of the site. I was able to download the first 1,000 pages from Google Search Console, but there are more than 1,000 pages indexed, so I'm using operators for a count (even if I can't get the complete list of indexed URLs). [Although, if there is a better way, PLEASE let me know!]

Anyway, in terms of search operators: from my understanding, the more general the URL, the more results should come up. However, when I put in the domain site:www.XXX, it gives me FEWER results than when I put in site:www.XXX/. When I add the trailing slash to the end of the domain, it gives me MORE results. And when I put in site:www.AAA/BBB/CC, it gives me MORE results than when I put in site:www.AAA/BBB. What's with this?

Yael
Intermediate & Advanced SEO | yaelslater1
-
200 for Site Visitors, 404 for Google (but possibly 200?)
A second question we have about another site we're working with... Currently, if a visitor accesses a page that has no content in a section, it shows a message saying that there is no information currently available; the page returns a 200 for the user but a 404 for Google. They are asking us whether it would be better to serve 200s to Google as well, and what impact that might have, considering there would be many different pages displaying the same 'no information here' message.
Intermediate & Advanced SEO | Prospector-Plastics0
-
Internal Search / Faceted Navigation
Hi there, I'm working on an e-learning site with the following content pages: main page, category pages, course pages, author pages, and tag pages. We will also have an internal search for users to search by keyword for courses, authors, and categories. Is it still recommended to "noindex, follow" internal search results and disallow them in robots.txt? Or, for a site like this, is it better to use faceted navigation? It seems that faceted navigation is mostly for e-commerce sites. What is the latest thinking on SEO best practices for internal search result pages?
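For reference, the two mechanisms mentioned above look like this (a sketch; the /search path is hypothetical). Note that they generally shouldn't be combined on the same URLs: a page blocked by robots.txt is never fetched, so its noindex tag is never seen.

# robots.txt -- stop crawlers from fetching internal search results (hypothetical path)
User-agent: *
Disallow: /search

<!-- or, on each search results page: allow crawling but prevent indexing -->
<meta name="robots" content="noindex, follow">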
Intermediate & Advanced SEO | mindflash0
-
Block search engines from URLs created by internal search engine?
Hey guys, I've got a question for you all that I've been pondering for a few days now. I'm currently doing an SEO technical audit for a large-scale directory. One major issue they are having is that their internal search system (Directory Search) creates a new URL every time a user enters a search query, which creates huge amounts of duplication on the website. I'm wondering if it would be best to block search engines from crawling these URLs entirely with robots.txt. What do you guys think? Bearing in mind there are probably thousands of these pages already in the Google index. Thanks, Kim
Intermediate & Advanced SEO | Voonie0
-
Image Search algo changes
Just seen the image algorithm changes that have apparently been put into place in the last three months. My spam-tester image site didn't seem to feel anything. Did anyone see image search change (for better or worse), and on what dates? Cheers, Stephen
Intermediate & Advanced SEO | firstconversion0
-
Search Engine Blocked by robots.txt for Dynamic URLs
Today, I was checking crawl diagnostics for my website and found a "search engine blocked by robots.txt" warning. I have added the following syntax to the robots.txt file for all dynamic URLs:

Disallow: /*?osCsid
Disallow: /*?q=
Disallow: /*?dir=
Disallow: /*?p=
Disallow: /*?limit=
Disallow: /*review-form

The dynamic URLs look like this: http://www.vistastores.com/bar-stools?dir=desc&order=position, http://www.vistastores.com/bathroom-lighting?p=2, and many more. So why does it show me a warning for this? Does it really matter, or is there another solution for these kinds of dynamic URLs?
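For what it's worth, a quick way to check which URLs those rules actually catch is to translate the Googlebot-style wildcard patterns into regular expressions; Python's built-in urllib.robotparser does not understand the * wildcard, so a manual sketch like this is needed (the patterns and URLs are taken from the question above):

import re
from urllib.parse import urlsplit

patterns = ["/*?osCsid", "/*?q=", "/*?dir=", "/*?p=", "/*?limit=", "/*review-form"]
urls = [
    "http://www.vistastores.com/bar-stools?dir=desc&order=position",
    "http://www.vistastores.com/bathroom-lighting?p=2",
]

def to_regex(pattern):
    # Googlebot treats * as "any characters" and $ as end-of-URL; all else is literal.
    body = ".*".join(re.escape(part) for part in pattern.rstrip("$").split("*"))
    return re.compile(body + ("$" if pattern.endswith("$") else ""))

for url in urls:
    parts = urlsplit(url)
    path = parts.path + ("?" + parts.query if parts.query else "")
    blocked = [p for p in patterns if to_regex(p).match(path)]
    print(path, "->", ("blocked by " + ", ".join(blocked)) if blocked else "allowed")

Both sample URLs match one of the Disallow patterns, so the warning is simply reporting that the rules are doing what they were written to do.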
Intermediate & Advanced SEO | CommercePundit0
-
How do I link a profile picture to my search results?
How do you get an icon that links to a Google profile to display in Google search results? Like this: https://skitch.com/edwardrobertshaw/f12f8/central.ly-google-search
Intermediate & Advanced SEO | ed1234560