Pull meta descriptions from a website that isn't live anymore
-
Hi all, we moved a website over to WordPress 2 months ago. It was using .cfm before, so all of the URLs have changed. We implemented 301 redirects for each page, but we weren't able to copy over any of the meta descriptions.
We have an export file which has all of the old web pages. Is there a tool that would allow us to upload the old pages and extract the meta descriptions so that we can get them onto the new website? We use the Yoast SEO plugin, which has a bulk meta descriptions editor, so I'm assuming the easiest/most effective way would be to find a tool that generates some sort of .csv or Excel file that we can just copy and paste from? Any feedback/suggestions would be awesome, thanks!
-
You can use Screaming Frog to pull the meta descriptions from the Wayback Machine, if your site is archived there. If you want to do this, let me know and I'll help you with the settings.
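If you'd rather script it, here's a rough Python sketch of the same idea using the Wayback Machine's public CDX API (standard library only; `example.com` is a placeholder for your old domain) to list the snapshot URLs you'd then fetch or crawl:

```python
import json
import urllib.request


def build_cdx_url(site):
    """Build a CDX API query for all archived, successfully-served HTML pages of a site."""
    return (
        "http://web.archive.org/cdx/search/cdx"
        f"?url={site}/*&output=json"
        "&filter=statuscode:200&filter=mimetype:text/html"
        "&collapse=urlkey"  # one row per unique URL
    )


def snapshot_urls(site):
    """Yield (original_url, wayback_snapshot_url) pairs for a site."""
    rows = json.loads(urllib.request.urlopen(build_cdx_url(site)).read())
    for row in rows[1:]:  # first row of the JSON response is the field-name header
        timestamp, original = row[1], row[2]
        yield original, f"http://web.archive.org/web/{timestamp}/{original}"
```

You could loop over `snapshot_urls("example.com")`, download each snapshot, and pull the description tag out of the HTML, or just feed the snapshot URLs to Screaming Frog in list mode.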
-
I would go one better and crawl from a local web server, just to be sure. In all reality, though, a password-protected directory is probably more accessible in this instance.
-
Note that Ray-pp suggests you use a private directory... make sure to keep it out of the SERPs.
-
Thanks Ray, we've used the Screaming Frog Spider for some time now, and I've flirted with the idea of re-uploading the web files. This may be our best option, thanks.
-
Hi George,
If you can upload the old pages to a private directory, you can then use the Screaming Frog SEO tool to crawl all of the pages and retrieve the meta descriptions. That would allow you to easily export much of the on-page SEO, including your meta information.
The Screaming Frog SEO Spider is a must-have tool for SEOs - check it out if you haven't already!
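Since you already have the export file, you don't strictly need to re-upload anything: a small script can parse the old pages locally. Here's a rough Python sketch (standard library only; the directory and output file names are placeholders) that walks a folder of exported .cfm/.html files and writes a CSV you could paste into Yoast's bulk editor:

```python
import csv
import os
from html.parser import HTMLParser


class MetaDescriptionParser(HTMLParser):
    """Captures the content of the first <meta name="description"> tag."""

    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta" and self.description is None:
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "description":
                self.description = attrs.get("content", "")


def extract_description(html):
    """Return the meta description from an HTML string, or "" if none exists."""
    parser = MetaDescriptionParser()
    parser.feed(html)
    return parser.description or ""


def export_to_csv(export_dir, out_path="meta_descriptions.csv"):
    """Walk the export directory and write one CSV row per old page."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["File", "Meta Description"])
        for root, _dirs, files in os.walk(export_dir):
            for name in files:
                if name.lower().endswith((".cfm", ".htm", ".html")):
                    path = os.path.join(root, name)
                    with open(path, encoding="utf-8", errors="replace") as fh:
                        writer.writerow([path, extract_description(fh.read())])
```

Running `export_to_csv("old-site-export")` would give you a two-column CSV (file path, meta description) to match up against your new URLs.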