Unsolved: Finding broken URLs
-
Hello,
This feels like it should be straightforward but I'm not having much luck.
I have a PDF URL that is no longer in use and takes the user to a 404 page.
I want to find all instances where this PDF URL is used on my site so I can update the copy and remove the link.
Please note - a redirect is not the right solution for this issue.
Hope you can help.
Thanks, Mary
-
Hi Mary,
To find all instances of the outdated PDF URL on your site, you can try these steps:
Site Search: Use Google to search for the URL across your site by typing site:yourdomain.com "the-pdf-url". This will show any indexed pages where the URL is mentioned.
Internal Search Tools: If your CMS has a search function, use it to search for the URL directly within your site's content.
Code Search: If you have access to your site's code repository, search the code files for the outdated URL to locate every instance where it's used (a scripted version of this idea is sketched below).
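If you're comfortable running a script, here is a minimal sketch of the code-search idea (assuming Python with the requests and beautifulsoup4 packages installed; START_URL and DEAD_URL are placeholders for your real site and PDF URL). It crawls your own pages and prints every page that links to the dead PDF:

```python
"""Minimal sketch: crawl your own site and print every page that links to
the dead PDF. Assumes the requests and beautifulsoup4 packages;
START_URL and DEAD_URL are placeholders for your real values."""
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://www.example.com/"        # your site's homepage (placeholder)
DEAD_URL = "https://www.example.com/old.pdf"  # the 404ing PDF (placeholder)

seen, queue = set(), [START_URL]
while queue:
    page = queue.pop()
    if page in seen:
        continue
    seen.add(page)
    try:
        resp = requests.get(page, timeout=10)
    except requests.RequestException:
        continue
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        target = urljoin(page, a["href"]).split("#")[0]
        if target == DEAD_URL:
            print(f"{page} links to the dead PDF")
        # stay on the same host so the crawl doesn't wander off-site
        if urlparse(target).netloc == urlparse(START_URL).netloc:
            queue.append(target)
```

Note this only finds actual <a href> links; a URL pasted as plain text in page copy would still need the CMS search in step 2.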
Hope this helps! Let me know if you need further assistance.
-
Related Questions
-
301 Redirects from example.com to store.example.com and then removing store.example.com subdomain
Hi, I'm trying to wrap my head around the best approach for migrating our website. We're migrating from our example.com (Joomla) site to our existing store.example.com (Shopify) site, with the plan to finish the redirects/migration, then remove the subdomain from Shopify and use example.com moving forward. I've never done this, and I'm asking here to see if any harm will come from redirecting example.com URLs to store.example.com URLs and then changing the store.example.com URLs to example.com. Right now my plan would run like this: redirect example.com URLs to store.example.com; remove the subdomain on store.example.com; use example.com moving forward. Are there going to be any issues here, or possible harm to the URLs?
Technical SEO | Minarets
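As a sanity check on a two-stage move like this, here is a minimal sketch (assuming Python with the requests package; the URL list is a placeholder) that follows each old URL's redirect chain and flags anything that isn't a single clean 301 hop, since multi-hop chains are the usual side effect of redirecting twice:

```python
"""Minimal sketch: follow each old URL's redirect chain and flag chains
or non-301 hops. Assumes the requests package; URLs are placeholders."""
import requests

OLD_URLS = [
    "https://example.com/some-page",     # hypothetical examples
    "https://example.com/another-page",
]

for url in OLD_URLS:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    hops = [f"{r.status_code} {r.url}" for r in resp.history]
    hops.append(f"{resp.status_code} {resp.url}")
    chain = " -> ".join(hops)
    if len(resp.history) > 1:
        print(f"CHAIN ({len(resp.history)} hops): {chain}")
    elif resp.history and resp.history[0].status_code != 301:
        print(f"NOT A 301: {chain}")
    else:
        print(f"OK: {chain}")
```
-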
What is the difference between "document" and "object" moved redirect errors?
What is the difference between "document" and "object" moved redirect errors? I'm used to seeing "object moved" as a redirect-chain issue that needs to be fixed, but this week my report contained a "document moved" redirect-chain issue, and it's on our homepage. It looks like it might be an HTTP versus HTTPS issue.
Reporting & Analytics | Kate_Nadeau
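For what it's worth, those labels usually echo the small HTML placeholder body a server sends along with a 3xx response; IIS, for example, commonly titles its redirect pages "Object Moved" or "Document Moved" depending on the status code. A minimal sketch for checking the HTTP-versus-HTTPS theory, assuming Python with the requests package, walks the homepage's redirect chain and prints each hop:

```python
"""Minimal sketch: print each hop in the homepage's redirect chain,
including the body snippet where "object moved" / "document moved"
labels typically originate. Assumes the requests package."""
import requests

# placeholder URL; use your actual homepage
resp = requests.get("http://www.example.com/", allow_redirects=True, timeout=10)
for hop in resp.history + [resp]:
    snippet = hop.text[:80].replace("\n", " ")
    print(f"{hop.status_code} {hop.url}: {snippet}")
```
-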
How can I make a list of all URLs indexed by Google?
I have a large site with over 6000 pages indexed but only 600 actual pages, and I need to clean up with 301 redirects. I haven't had this need since Google stopped displaying the URLs in the results.
SEO Tactics | aplusnetsolutions
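One approach that avoids scraping the site: operator (which isn't reliable past a few hundred results anyway) is to export the indexed-pages list from Search Console and diff it against your sitemap. A minimal sketch, assuming Python and that the export is a CSV with a URL column (the file names and column header here are assumptions):

```python
"""Minimal sketch: diff an exported list of indexed URLs against your
sitemap to find extra URLs that are 301 candidates. File names and the
CSV column name are assumptions; adjust to your actual export."""
import csv
import xml.etree.ElementTree as ET

with open("indexed_pages.csv", newline="") as f:
    indexed = {row["URL"] for row in csv.DictReader(f)}

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
real = {loc.text.strip()
        for loc in ET.parse("sitemap.xml").findall(".//sm:loc", ns)}

for url in sorted(indexed - real):
    print(url)  # indexed but not a real page: candidate for a 301
```
-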
How to sunset language subdomains that we don't want to support anymore?
We have a primary domain, www.postermywall.com. We have used subdomains for offering the same site in different languages, like es.postermywall.com, fr.postermywall.com, etc. There are certain language subdomains that have low traffic and are expensive to get translated. We have decided to sunset 3 subdomains that match those criteria. What is the best way of going about removing those subdomains? Should we just redirect from those subdomains to www.postermywall.com? Would that have any negative impact on our primary domain in Google's eyes, etc.? Is there anything other than a redirect that we should be considering?
Technical SEO | 250mils
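If the retired subdomains' pages have equivalents on www, a path-preserving 301 is generally the cleanest signal; pointing every URL at the homepage risks being treated as a soft 404. Here's a minimal sketch of the idea in Flask (the framework choice is an assumption; use whatever redirect mechanism your stack actually provides):

```python
"""Minimal sketch: 301 any request on a retired language subdomain to the
same path on www. Flask is an assumption; adapt to your actual server."""
from flask import Flask, redirect, request

app = Flask(__name__)
RETIRED = {"es.postermywall.com", "fr.postermywall.com"}  # hypothetical list

@app.before_request
def sunset_subdomains():
    if request.host in RETIRED:
        target = "https://www.postermywall.com" + request.full_path.rstrip("?")
        return redirect(target, code=301)
```
-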
On-Page Grader: URL is inaccessible
Hi everybody. I'm trying to use On-Page Grader for https://www.upscaledinnerclub.com and get "Sorry, but that URL is inaccessible." The robots.txt is empty, and another thread on Moz mentioned a DNS check; that's all good. So I can't figure out why this is happening. I'm also trying the same for another website, https://www.regexseo.com, with the same story. The common thing is that they are both on Google App Engine, and at first I thought that was the problem. But then I checked this one: https://www.logitinc.com/ and it's working, even though that website is on GAE as well. None of these websites have a robots.txt or any differences in setup or settings. Any thoughts?
Moz Bar | DmitriiK
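One thing worth ruling out is user-agent filtering: some hosts and firewall rules serve browsers normally but block unfamiliar crawlers, which shows up as exactly this kind of "URL is inaccessible" error. A minimal sketch, assuming Python with the requests package (the user-agent strings are illustrative, not Moz's exact ones):

```python
"""Minimal sketch: fetch the URL with a browser-like and a crawler-like
User-Agent and compare results. Assumes the requests package."""
import requests

URL = "https://www.upscaledinnerclub.com"
for ua in ("Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "rogerbot/1.0"):
    try:
        resp = requests.get(URL, headers={"User-Agent": ua},
                            allow_redirects=True, timeout=10)
        print(f"{ua}: {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{ua}: request failed ({exc})")
```
-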
How to find and reduce more than 100 internal links on Prestashop
Hi, I'm working on a site, endrena.com, which has more than 466 links, but I don't know what they all are. I only know the links from the menu and subcategories, but those don't add up to 466. The Moz On-Page Grader gives the category an A grade and doesn't give any advice like "too many links", but if I use the RankRanger on-page tool, it finds more than 466 links! Can you help me find all of these links?
Moz Bar | seopalermo
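To see where those 466 links actually come from, one option is to dump and count every <a href> on the page yourself. A minimal sketch, assuming Python with the requests and beautifulsoup4 packages:

```python
"""Minimal sketch: list and count every <a href> on a page, so you can
see which links repeat (theme footers, layered navigation, and similar
template blocks are common culprits). Assumes requests and beautifulsoup4."""
from collections import Counter

import requests
from bs4 import BeautifulSoup

resp = requests.get("https://endrena.com", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")
links = [a["href"] for a in soup.find_all("a", href=True)]

print(f"{len(links)} links total, {len(set(links))} unique")
for href, count in Counter(links).most_common(20):
    print(f"{count:4d}  {href}")
```
-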
Ajax #! URL support?
Hi Moz, My site currently follows the convention outlined here: https://support.google.com/webmasters/answer/174992?hl=en

Basically, since pages are generated via Ajax, we are set up to direct bots that replace the #! in a URL with ?escaped_fragment to cached versions of the Ajax-generated content. For example, if the bot sees this URL: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 it will instead access this page: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 In that case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine.

However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, you look to see whether it is a #! and then spider the URL with it replaced by ?escaped_fragment. Our server does the rest. If this is something Moz plans on supporting in the future, I would love to know. If there is other information you need, that would be great. Also, pushState is not practical for everyone due to limited browser support, etc. Thanks, Dustin

Updates: I am editing my question because it won't let me respond to my own question. It says I need to sign up for Moz Analytics. I was signed up for Moz Analytics?! Now I am not? I responded to my invitation weeks ago? Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 And when it is ready to spider the page for content, it spiders this URL instead: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 The server does the rest; it is simply telling Roger to recognize the #! format and replace it with ?escaped_fragment. I obviously do not know how Roger is coded, but it is a simple string replacement. Thanks.
Moz Bar | oneactlife
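For reference, the rewrite the question describes really is a simple string replacement. A minimal sketch in plain Python; note that Google's (now-deprecated) AJAX-crawling scheme spells the parameter _escaped_fragment_ with underscores, while the thread writes it without, so match whatever your server actually handles:

```python
"""Minimal sketch of the #! -> ?_escaped_fragment_= rewrite a crawler
would perform under Google's AJAX-crawling scheme. The full spec also
percent-encodes special characters in the fragment; omitted here."""

def to_escaped_fragment(url: str) -> str:
    """Map http://host/#!/path to http://host/?_escaped_fragment_=/path."""
    if "#!" not in url:
        return url
    base, fragment = url.split("#!", 1)
    return f"{base}?_escaped_fragment_={fragment}"

print(to_escaped_fragment(
    "http://www.discoverymap.com/#!/California/Map-of-Carmel/73"))
# -> http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
```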