Why is a link considered active when it's no longer on the page?
How come links sometimes show up in OSE or Yahoo Site Explorer, but when you go to the page, they're not there anymore? Why is a link indexed or considered active when it's no longer on the page?
I think Open Site Explorer uses the Linkscape link index, which is updated monthly. So it's very possible that the link was taken down, or moved to a different page, since the last time Linkscape was updated (common with blogs, where a link falls off the homepage as newer posts push it down).
Another tool I use to check links is SEOHeap, which is free and lets you quickly check whether a link is still active by clicking a button next to the link data.
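If you'd rather verify a batch of reported links yourself, a short script can fetch each linking page and confirm whether your URL still appears in an anchor tag. Here's a minimal sketch, assuming the `requests` and `beautifulsoup4` libraries are installed; the URLs are placeholders:

```python
import requests
from bs4 import BeautifulSoup

def link_is_live(linking_page, target_url):
    """Fetch the linking page and check whether any anchor still points at target_url."""
    try:
        resp = requests.get(linking_page, timeout=10,
                            headers={"User-Agent": "link-checker/0.1"})
        resp.raise_for_status()
    except requests.RequestException:
        return False  # page unreachable or removed: treat the link as dead
    soup = BeautifulSoup(resp.text, "html.parser")
    # The link "fell off" if no anchor on the page points at the target anymore.
    return any(target_url in (a.get("href") or "") for a in soup.find_all("a"))

# Placeholder URLs: the linking page reported by the index, and your own page.
print(link_is_live("https://example-blog.com/", "https://www.example.com/"))
```

Run it against the links a monthly index reports and you'll quickly see which ones were live at the last crawl but have since been removed.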
Related Questions
Is it necessary to have unique H1s for pages in a pagination series (e.g. a blog)?
A content issue that we're experiencing is duplicate H1s within pages in a pagination series (e.g. a blog). Does each separate page within the pagination need a unique H1 tag, or, since each page has unique content (different blog snippets on each page), is it safe to disregard this? Any insight would be appreciated. Thanks!
Algorithm Updates | BopDesign0
Do orphan pages take away link juice?
Hi, Just wondering whether orphan pages take away any link juice. We've been creating a lot of them lately, purely as landing pages for links from external sites; they aren't linked from any part of our own website, only from other websites. Also, will they get any link juice if they're linked from one of our own blog posts? Thanks
Algorithm Updates | vtmoz1
Would there be any benefit to creating multiple pages of the same content to target different titles?
Obviously, the duplicated pages would be canonicalized, but would there be a way of anchoring where a user lands on the page based on the search term they entered? For example: if you have a site that sells cars, you could have a page that covers (brand) cars for sale, finance options, the best car for a family, how far the (brand) car will go on a full tank, and so on, then make all the information blocks H2s, but use those same H2s as the titles of the duplicated pages. Then it gets complicated: if someone searches "best car for a family" and clicks the duplicated page's title, how would you anchor that user to the section of the page with this information? Could there be a benefit to doing this, or would it just not work?
Algorithm Updates | Evosite10
Duplicate Content on Product Pages with Canonical Tags
Hi, I'm an SEO intern for a third-party wine delivery company, and I'm trying to fix a duplicate content issue on our product pages. Just to give you a picture of what I'm dealing with, the duplicate product pages being flagged have URLs with different geo-variations and product-key variations. This is what Moz's Site Crawler is seeing as duplicate content for the URL www.example.com/wines/dry-red/: www.example.com/wines/dry-red/_/N-g123456, www.example.com/wines/dry-red/_/N-g456789, www.example.com/wines/California/_/N-0. We have loads of product pages with dozens of duplicates each, and I'm coming to the conclusion that it's the product keys that are confusing Google. So we had the web development team put the canonical tag on the pages, but they were still being flagged by Google. I checked the head of the pages and found that they all had 2 canonical tags. I understand we should only have one canonical tag in the head, so I wanted to know if I could just remove the second canonical tag, and whether that will solve the duplicate content issue we're currently having? Any suggestions? Thanks -Drew
Algorithm Updates | drewstorys0
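A note on the canonical question above: the two-tag problem is easy to audit programmatically, both before and after a fix. A minimal sketch, assuming `requests` and `beautifulsoup4` and using a placeholder URL; a healthy page should report exactly one canonical:

```python
import requests
from bs4 import BeautifulSoup

def canonical_hrefs(url):
    """Return the href of every rel=canonical link tag found on the page."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [tag.get("href", "") for tag in soup.find_all("link", rel="canonical")]

# Placeholder URL: a page flagged as a duplicate. More than one canonical tag
# (or canonicals pointing at different URLs) sends Google conflicting signals.
hrefs = canonical_hrefs("https://www.example.com/wines/dry-red/_/N-g123456")
if len(hrefs) != 1:
    print(f"expected 1 canonical tag, found {len(hrefs)}: {hrefs}")
```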
Which is the best way: to have all FAQ pages in one place, or split across different sections of the website?
Hi all, We have a lot of FAQ sections on our website, split across different places depending on products, technologies, etc. If we want to optimize our content for Google's Featured Snippets, Voice Search, etc., what is the best option: to combine them all in one FAQ section? Or doesn't it matter to Google that this type of content is not in one place? Thank you!
Algorithm Updates | lgrozeva0
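On the FAQ question above: for featured-snippet and voice-search eligibility, marking each FAQ page up with schema.org FAQPage structured data generally matters at least as much as where the FAQs live. A minimal sketch of generating that JSON-LD with Python's standard library; the questions and answers are placeholders:

```python
import json

def faq_jsonld(pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question",
             "name": question,
             "acceptedAnswer": {"@type": "Answer", "text": answer}}
            for question, answer in pairs
        ],
    }, indent=2)

# Placeholder content; embed the output on the page inside a
# <script type="application/ld+json"> tag.
print(faq_jsonld([
    ("Do you ship internationally?", "Yes, we ship to most countries."),
]))
```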
When was the last algorithm update? One of my pages has dropped significantly this week
One of my pages dropped 22 places last week and I'm not sure why - can anybody give me some suggestions as to why this might have happened?
Algorithm Updates | lindsayjhopkins0
Google showing different pages for the same search term in the UK and USA
Hi Guys, I have an interesting question and think Google is being a bit strange. Can anyone tell me why, when I input the term "design agency" in Google.co.uk, it shows one page, but when I type the same search term into Google.com (worldwide search) it shows another page? Any ideas guys? Is this not a bit strange? Any help here would be much appreciated. Thanks Gareth
Algorithm Updates | GAZ090
Stop Google indexing CDN pages
Just when I thought I'd seen it all, Google hits me with another nasty surprise! I have a CDN to deliver images, JS and CSS to visitors around the world. I have no links to static HTML pages on the site, as far as I can tell, but someone else may have - perhaps a scraper site? Google has decided the static pages it was able to access through the CDN have more value than my real pages, and it seems to be slowly replacing my pages in the index with the static pages. Anyone got an idea of how to stop that? Obviously, I have no access to the static area, because it is on the CDN, so there is no way I know of to put a robots file there. It could be that I have to trash the CDN and change it to only allow the image directory, and maybe set up a separate CDN subdomain for the JS and CSS. Have you seen this problem and beaten it? (Of course, the next thing is Roger might look at Google results and start crawling them too, LOL) P.S. The reason I'm not asking this question in the Google forums is that others have asked it many times over the past 5 months, and nobody at Google has bothered to answer, while nobody who did try gave an answer that was remotely useful. So I'm not really hopeful of anyone here having a solution either, but I expect this is my best bet, because you guys are always willing to try.
Algorithm Updates | loopyal0
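On the CDN question above: one commonly suggested approach, when you can't place a robots.txt on the CDN host, is to have the CDN (or the origin it pulls from) send an X-Robots-Tag: noindex response header on the HTML copies; most CDNs can set custom headers or pass them through. A minimal sketch for auditing whether that header is actually being served, assuming `requests` and a placeholder URL:

```python
import requests

def is_noindexed(url):
    """Check whether a URL serves an X-Robots-Tag header containing 'noindex'."""
    resp = requests.head(url, timeout=10, allow_redirects=True)
    # Header lookup in requests is case-insensitive; a missing header means indexable.
    return "noindex" in resp.headers.get("X-Robots-Tag", "").lower()

# Placeholder CDN URL: confirm the static HTML copies are opting out of the index.
print(is_noindexed("https://cdn.example.com/index.html"))
```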