On our site, some incorrect links were entered by mistake and Google crawled them. We have since fixed those links, but they still show up under Not Found errors. Should we just mark them as fixed, or what is the best way to deal with them?
-
A parameter was not sent, so the links were generated as:
null/city and null/country instead of cityname/city
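The missing-parameter bug described above can also be prevented at the source. A minimal sketch (the function name and URL pattern are hypothetical, for illustration only) that refuses to emit a link when the parameter is absent:

```python
# Hypothetical sketch: guard against missing parameters before building
# internal links, so a missing city name can never produce "null/city".
def build_city_url(city_name):
    """Return a city URL, or None if the parameter is missing."""
    if not city_name or city_name == "null":
        return None  # skip rendering the link entirely
    return f"/{city_name}/city"

# Links with a missing parameter are dropped instead of rendered as null/city.
links = [build_city_url(c) for c in ["amsterdam", None, "paris"]]
valid_links = [l for l in links if l is not None]
```

The key design choice is to skip the link entirely rather than render a fallback value, so crawlers never see the broken URL in the first place.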
-
If you look at those links in Google Search Console (Crawl Errors), you'll see a date next to each one. If some show up with an older date (older than yesterday, for example), you can mark them as fixed. If the errors are still occurring, they'll simply show up again.
-
It's not a big issue. Just mark them as fixed; if they return a 404 and no links point to them, they will disappear from the reporting (every site has 404 errors). Even if you do nothing, it's not a problem.
Dirk
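To decide which errors are safe to mark as fixed, the "older date" check above can be scripted against a crawl-errors export. A minimal sketch, assuming a CSV export with `URL` and `Last crawled` columns (the column names and dates are assumptions for illustration):

```python
# Hypothetical sketch: filter a crawl-errors CSV export down to URLs whose
# last crawl predates the fix date -- those are safe to mark as fixed.
import csv
from datetime import date
from io import StringIO

def safe_to_mark_fixed(csv_text, fixed_on):
    """Yield URLs that Google last crawled before the links were fixed."""
    for row in csv.DictReader(StringIO(csv_text)):
        last_crawled = date.fromisoformat(row["Last crawled"])
        if last_crawled < fixed_on:
            yield row["URL"]

export = "URL,Last crawled\n/null/city,2015-03-01\n/null/country,2015-04-20\n"
stale = list(safe_to_mark_fixed(export, date(2015, 4, 1)))
# stale contains only /null/city, crawled before the fix date
```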
Related Questions
-
Massive drop off in Google crawl stats
Hi, could I get a second opinion on the following, please? On a client site we seem to have had a massive drop-off in Google crawling in the past few weeks; this is linked with a drop in search impressions and a slight reduction in penalty. There are no warning messages in WMT to say the site is in trouble, and it shouldn't be, but I cannot get to the bottom of what is going on. In February the kilobytes downloaded per day was between 2,200 and about 3,800, all good there. However, in the past couple of weeks it has peaked at 62, and most days it's not even over 3! Something odd has taken place. For the same period, the pages crawled per day have gone from 50-100 down to under 3. At the same time the site speed hasn't changed; it is slow and has always been slow (I have advised the client to change this, but you know how it is...). Unfortunately I am unable to give out the site URL, so I understand that may limit any advice people can offer. I've attached some screenshots from WMT below (stats.png). Many thanks for any assistance.
Technical SEO | daedriccarl
Help Crawl friendliness for large site
After watching Rand's video, I am trying to think of the best way to make my large site more crawl-friendly. Background: I have a large site with over 100k product SKUs, so when you get to a particular page of products there are tons of different refinements and options that help you sort the products. Most of these are noindex,follow, but I was wondering if I should be nofollowing the internal links as well, in order to keep bots out of those pages and send them to the pages that I want them to go to. Is this a good way to handle it? Also, does anyone have good recommendations of links to posts that deal with the crawl friendliness of a large site? Thanks!
Technical SEO | Gordian
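The noindex,follow approach in the question above is usually driven by URL parameters. A minimal sketch of how a site might decide the meta-robots directive per URL (the parameter names like "sort" and "refine" are assumptions for illustration, not from the original post):

```python
# Hypothetical sketch: decide crawl directives for faceted product URLs.
from urllib.parse import urlparse, parse_qs

# Query parameters that produce sort/refinement variants of a listing page.
FACET_PARAMS = {"sort", "refine", "price", "color"}

def robots_meta(url):
    """Return the meta-robots value to render for a product-listing URL."""
    params = set(parse_qs(urlparse(url).query))
    if params & FACET_PARAMS:
        return "noindex,follow"  # keep link equity flowing, drop from index
    return "index,follow"

robots_meta("/shoes?sort=price")  # faceted variant -> "noindex,follow"
robots_meta("/shoes")             # canonical page  -> "index,follow"
```

Whether to additionally nofollow the internal links (as the poster asks) is a separate decision from the directive the target page serves; this sketch only covers the latter.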
During my last crawl, suddenly no errors or warnings were found except one: a 403 error on my homepage.
There were no changes made and all my old errors disappeared; I think something went wrong. Is it possible to start another crawl earlier than scheduled?
Technical SEO | KnowHowww
Google having trouble accessing my site
Hi, Google is having problems accessing my site. Each day it brings up Access Denied errors, and when I checked what this means I found the following:
Access Denied errors: In general, Google discovers content by following links from one page to another. To crawl a page, Googlebot must be able to access it. If you're seeing unexpected Access Denied errors, it may be for the following reasons:
- Googlebot couldn't access a URL on your site because your site requires users to log in to view all or some of your content. (Tip: you can get around this by removing this requirement for the user-agent Googlebot.)
- Your robots.txt file is blocking Google from accessing your whole site or individual URLs or directories. Test that your robots.txt is working as expected. The Test robots.txt tool lets you see exactly how Googlebot will interpret the contents of your robots.txt file. (The Google user-agent is Googlebot; there is a help page on how to verify that a user-agent really is Googlebot.) The Fetch as Google tool helps you understand exactly how your site appears to Googlebot, which can be very useful when troubleshooting problems with your site's content or discoverability in search results.
- Your server requires users to authenticate using a proxy, or your hosting provider may be blocking Google from accessing your site.
I have contacted my hosting company, who said there is not a problem but told me to read the following page: http://www.tmdhosting.com/kb/technical-questions/other/robots-txt-file-to-improve-the-way-search-bots-crawl/. I have read it, and as far as I can see my file is set up right; it is listed below. They said if I still have problems then I need to contact Google. Can anyone please give me advice on what to do? The errors are response code 403.
User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /components/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /media/
Disallow: /modules/
Disallow: /plugins/
Disallow: /templates/
Disallow: /tmp/
Disallow: /xmlrpc/
Technical SEO | ClaireH-184886
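One way to sanity-check a robots.txt like the one above is to ask Python's standard library how a crawler would interpret it. A minimal sketch (the example URLs are assumptions, shown against a shortened copy of the rules):

```python
# Minimal sketch: check how a crawler interprets robots.txt rules,
# using only the standard library.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /administrator/
Disallow: /cache/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Disallowed directory is blocked; everything else remains crawlable.
parser.can_fetch("Googlebot", "http://example.com/administrator/index.php")  # -> False
parser.can_fetch("Googlebot", "http://example.com/about/")                   # -> True
```

Since the rules above only block back-end directories, they would not explain a 403 on ordinary pages; a 403 is returned by the server itself, which points back at the host.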
Cross links between sites
Hi, we have several ecommerce sites and we cross-linked 3 of them by mistake. We realized the sites were linked through WMT. We shut down 2 of the sites about 2 months ago, but WMT still shows the links coming from those 2 sites. How do we make sure that Google sees the sites are shut down? Is there a better way of resolving this issue? We are no longer using those sites, so we do not need them to be active. What's the best way to show Google that the links are no longer there? The crawler shows that it was able to crawl a site 45 days after it was shut down. Thanks, Nick
Technical SEO | orion68
Why is my site jumping around in Google search?
Hi, I've been trying to get my page up in Google results and I was wondering about the constant fluctuation. For example, one day the page is no. 26, the next day it's no. 65, then it jumps back to around 30, and a few days later it's back to 50. What's the logic behind that? Thanks, Cezar
Technical SEO | sparts
Noindex,follow - linked pages not showing
We have a blog on our site where the homepage and category pages have "noindex,follow" but the articles have "index,follow". Recently we have noticed that the article pages are no longer showing in the Google SERPs (but they are in Bing!); we checked this using the "site:" search operator. We have double-checked our robots.txt file too, just in case something silly had slipped in, but that's as it should be... Has anyone else noticed similar behaviour, or could you suggest things I could check? Thanks!
Technical SEO | Nobody1556905035114
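Besides the "site:" operator, it is worth confirming what meta-robots directive the article pages actually serve, since a template change can silently flip it. A minimal sketch (the HTML snippets are illustrative; a real check would fetch each article URL first):

```python
# Minimal sketch: extract the meta-robots directive from a page's HTML,
# to double-check that articles really serve "index,follow".
import re

def meta_robots(html):
    """Return the content of a <meta name="robots"> tag, or None if absent."""
    m = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    return m.group(1) if m else None

meta_robots('<meta name="robots" content="noindex,follow">')  # -> "noindex,follow"
meta_robots('<p>no directive here</p>')                       # -> None
```

Note that a page with no meta-robots tag at all defaults to indexable, so `None` here is not itself a problem.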
Google not visiting my site
Hi, my site www.in2town.co.uk, which is a lifestyle magazine, has undergone a major refit. I am still working on it, but it should be ready by the end of this week or sooner. One problem I have is that Google is not visiting the site. I took a huge gamble redoing the site; even though I was getting a few thousand visitors a day before the refit, I wanted to make the site better because I was getting Google Webmaster errors. But now it seems Google is not visiting the site. For example, I am using sh404sef and I have put friendly URLs on the site, and the home page has its name and meta tag, but Google is not giving the site a name. Also, Google has not visited the site since October 13th. Can anyone advise how to encourage Google to visit the site, please?
Technical SEO | ClaireH-184886