How can I get the most out of uploading a print magazine to my client's website?
-
Hi Mozers,
My client is just about to launch a print magazine for her watch business.
There is so much valuable content in the magazine, and we want to feature it on the website both for SEO purposes and for readers who prefer articles online to a physical magazine.
My question is: what is the best method of displaying the magazine to get the most from search rankings while also capitalising on its beautiful imagery?
The best option I can think of is to upload the magazine as a flipbook and create a separate page on the website for each article so that search engine crawlers can index the content.
I do understand that this could be problematic if users spend their time reading the flipbook rather than the article pages.
Do you guys have any suggestions about how to get the most out of this opportunity for my client?
Thank you in advance.
Meaghan
-
Thank you, Tim.
Your response certainly makes a lot of sense, and I have noted it in my strategy for this client.
Much appreciated!
Meaghan
-
Hello Meaghan.
I am not wholly sure how a flipbook is crawled, or whether it would be of benefit (further comments may shed more light on this), but I do know that PDFs can be crawled for content. As such, you could upload the PDF as an accessible document, provided the copy has remained as text and has not become image-based.
Secondly, have you considered adding each of the magazine articles to a blog of some kind on the site? That way you will most likely benefit from the addition of new, quality written content for its users. You do not necessarily need to add all of the offline magazine, as that could be detrimental to the effectiveness of the publication, but you could certainly hand-pick some of the best articles to feature. Alternatively, you could opt for an online subscription model and put all the articles online as well as in print, but make only a few teaser articles accessible to the public.
Having your content online will also make it more approachable and shareable to the masses via social platforms; this could help your subscriber and visitor base grow, leading to new opportunities.
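If you do publish each article on its own page, something along these lines in each page's head would help crawlers make sense of the content. This is only a rough sketch; the URL, headline, image path, and date below are hypothetical placeholders, not your client's actual details:

<head>
  <title>Choosing a Dive Watch | Example Magazine</title>
  <!-- Self-referencing canonical: marks this page as the definitive copy of the article -->
  <link rel="canonical" href="https://www.example.com/magazine/choosing-a-dive-watch/">
  <!-- schema.org Article markup so search engines understand the headline and imagery -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Choosing a Dive Watch",
    "image": "https://www.example.com/images/dive-watch-hero.jpg",
    "datePublished": "2017-06-01"
  }
  </script>
</head>

Pairing markup like this with the hand-picked articles could also help the magazine's imagery surface in image search and rich results, which speaks to the visual side of your question.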
Hope that helps.
Related Questions
-
404s being re-indexed
Hi All, We are experiencing issues with pages that have been 404'd still being indexed. Originally, these were /wp-content/ index pages that were included in Google's index. Once I realized this, I added a directive to our htaccess to 404 all of these pages, as there were hundreds. I tried to let Google crawl and remove these pages naturally, but after a few months I used the URL removal tool to remove them manually. However, Google seems to be continually re-indexing these pages, even after they have been manually requested for removal in Search Console. Do you have suggestions? They all respond with 404s. Thanks
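As an illustration only, the sort of htaccess directive described might look like the hedged sketch below, using Apache's mod_alias; the path pattern is hypothetical. Swapping the 404 for a 410 Gone is a technique some use in this situation, since a 410 signals permanent removal more strongly to Google:

# Hypothetical sketch: answer all old /wp-content/ index URLs with 410 Gone
RedirectMatch gone ^/wp-content/.*$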
Technical SEO | Tom3_151
-
How does Googlebot see two URLs with the same rel canonical?
Hi, I have a website where all the original URLs have a rel canonical back to themselves. This is a kind of fail-safe: if a parameter occurs, the URL with the parameter will have a canonical back to the original URL.
For example, this URL: https://www.example.com/something/page/1/ has itself as its canonical, since it's an original URL.
This URL: https://www.example.com/something/page/1/?parameter has this canonical: https://www.example.com/something/page/1/ because, as I said, parameters have a rel canonical back to their original URLs.
So https://www.example.com/something/page/1/?parameter and https://www.example.com/something/page/1/ both have the same canonical: https://www.example.com/something/page/1/
I'm telling you all this because when Rogerbot tried to crawl my website, it reported duplicates. This happened because it read the canonical of the original URL and the canonical of the URL with the parameter, and saw that both were pointing to the same canonical. So I would like to know if Googlebot treats canonicals the same way, because if it does then I'm full of duplicates 😄 thanks.
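For reference, the pattern described boils down to serving the same tag on both URLs; a minimal sketch:

<!-- Served on /something/page/1/ and on /something/page/1/?parameter alike -->
<link rel="canonical" href="https://www.example.com/something/page/1/">

Google treats self-referencing canonicals as valid, and it generally consolidates the parameterised variant into the canonical URL rather than flagging a duplicate, so a third-party crawler reporting duplicates here does not necessarily mean Googlebot sees them that way.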
Technical SEO | dos06590
-
Problems with WooCommerce Product Attribute Filter URLs
I am running a WordPress/WooCommerce site for a client, and Moz is picking up some issues with URLs generated from WooCommerce product attribute filters. For example: ..co.uk/womens-prescription-glasses/?filter_gender=mens&filter_style=full-rim&filter_shape=oval How do I get Google to ignore these filters? I am running Yoast Premium, but I am not sure whether it can solve this issue. Product categories are canonicalised to the root category URL. Any suggestions very gratefully appreciated. Thanks, Bob
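One hedged option, assuming the filter combinations should be kept out of the crawl entirely, is a robots.txt rule on the filter parameters; the pattern below is only a sketch, based on the parameter names in the example URL:

# Sketch: stop crawlers from fetching attribute-filter combinations
User-agent: *
Disallow: /*?*filter_

Note that robots.txt stops crawling but not indexing of URLs Google already knows about; leaving the pages crawlable with a meta robots noindex tag is the alternative some prefer.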
Technical SEO | SushiUK
-
Our client's site was owned by a former employee who took over the site. What should be done? Is there a way to preserve all the SEO work?
A client had a member of the team leave on bad terms. This wasn't something that was conveyed to us at all, but recently it came up when the distraught former employee took control of the domain and locked everyone out. At first this was assumed to be a hack, but eventually it was revealed that one of the company's founders, who left the team unhappily, owned the domain all along and is now holding it hostage. Here's the breakdown:
- Every page aside from the homepage is now gone and serving a 404 response code
- The site is out of our control
- The former employee is asking for a $1 million ransom to sell the domain back
- The homepage is a "countdown clock" that isn't actively counting down, but claims that something exciting is happening in 3 days and lists a contact email.
The question is how we can save the client's traffic through all this turmoil: whether to buy a similar domain, start from square one, and hope we can later redirect the old site's pages after getting it back. Or maybe we have a legal claim here that we do not see, even though the individual is now the owner of the site. Perhaps there's a way to redirect the now-defunct pages to a new site somehow? Any ideas are greatly appreciated.
Technical SEO | FPD_NYC0
-
Links that are an internal site search?
Hi, hope you're all well. I sell red, blue, and green widgets; within each colour I have many subtypes, the subtypes change all the time, and a subtype has many variations in itself. I'd like to set up links that direct customers to popular searches of subtypes, say: widgets.com/red/blue-spots....search string... Will Google crawl these search links and see that there is good content behind them? How does Google handle links that are also a site search? Can it be bad, and should I "nofollow" them? Hope someone can give me some direction on these; many thanks in advance!
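If the worry is those search URLs competing in the index, one hedged alternative to nofollowing the links is a meta robots tag on the search-results template itself, for example:

<!-- On internal search-result pages only: keep them out of the index,
     but let crawlers follow the links through to the listings -->
<meta name="robots" content="noindex, follow">

That way crawlers can still discover the subtype pages through the links without the search results themselves being indexed.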
Technical SEO | Thea880
-
What is Google's Penguin effect on SEO?
I want to know about Google's Penguin update. Specifically, how does it work to combat spam links and similar tactics? How can I protect against this problem? Kind regards, John
Technical SEO | JohnDooley0
-
What can I do if Google Webmaster Tools doesn't recognize the robots.txt file?
I'm working on a recently hacked site for a client, and in trying to identify how exactly the hack is running I need to use the Fetch as Googlebot feature in GWT. I'd love to use this, but it thinks the robots.txt is blocking its access, when the only thing in the robots.txt file is a link to the sitemap. Under the Blocked URLs section of GWT it shows that the robots.txt was last downloaded yesterday, but that's incorrect information. Is there a way to force Google to look again?
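For reference, a robots.txt that blocks nothing and only declares a sitemap would look like the sketch below (the sitemap URL is a placeholder); if the live file really reads like this, it cannot be what is blocking Googlebot:

# Allow all crawling; only declare the sitemap location
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml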
Technical SEO | DotCar0
-
My client's website is 100% Flash... Will my SEO efforts be wasted?
My client has a photography website with an à la carte design. It's in Flash, and in 3 years only 3 pages of, say, 50 have been indexed. Are my efforts being wasted? Should I redesign the site with an SEO-friendly technology? Or, if I build out a huge link-building campaign, does it matter whether I am in Flash or not? Thanks
Technical SEO | AnthonyGrillo0