Competitor 'scraped' entire site - pretty much - what to do?
-
I just discovered a competitor in the insurance lead generation space has completely copied my client's site's architecture, page names, titles, even the form, tweaking a word or two here or there to prevent 100% 'scraping'.
We put a lot of time into the site, only to have everything 'stolen'. What can we do about this? My client is very upset. I looked into filing a 'scraper' report through Google, but the slight modifications to the content mean it technically doesn't qualify as a 'scraped' site.
Please advise as to what course of action we can take, if any.
Thanks,
Greg -
5 Steps:
- Take screenshots of ALL webpages
- Get a report on exactly how many pages were scraped and gather evidence (Googling the site titles is usually very effective)
- Take screenshots of the metadata: right-click, choose View Source, and capture the title and meta tags
- Once everything is recorded, send the website owner a Cease and Desist letter demanding that they take the copied pages offline and manually remove them from search indexes
- If they don't comply at that point, any IP lawyer can help if you have all the documentation. Some will take the work pro bono because there's huge money to be won, especially if you've already done all the documentation work for them.
Do NOT issue Cease and Desist letters without the screenshots. Usually what these guys will do is change the appearance and alter the content in the meta tags, and at that point they will claim it was not plagiarized while still hurting you. Without the documentation, your claim will not stand up in court.
However, if you documented the scraping, the only option the website owner will have is to take the plagiarized content offline completely. Any edits they make at that point are still considered scraping/plagiarism because you documented the offense.
We've been able to prosecute 13 companies already. One company we publicly called out on Twitter during a popular chat, leading to the company's downfall within 4 weeks.
FIGHT FOR YOUR CONTENT!
-
Hi again Greg,
Just one more option that is available to you if you happen to have a WordPress blog on the site (or have the option of rebuilding the entire site using WordPress).
You could install the Bad Behavior plugin for WordPress. The plugin works with Project Honey Pot, which tracks millions of bad IP addresses; the plugin gathers information and feeds it back to the honeypot. Bad Behavior also works against link spam, email and content harvesters, and other malicious traffic.
Sha
-
Thanks for all the details Rami.
-
Hi Ryan,
As long as others are benefiting and not bothered, I am happy to answer your questions.
When setting up Distil you are able to allocate a specific record (subdomain) or the entire zone (domain) to be delivered through our cloud. This allows you to segregate what traffic you would like us to serve and what content you would like to handle through other delivery mechanisms. Distil honors all no-cache and cache-control directives, allowing you to easily customize what content we cache even if we are serving your entire site. Additionally, we do not cache any dynamic file types, ensuring that fresh content always functions properly. Location-based content will continue to function correctly because our service passes the end user's IP through the host headers.
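To illustrate the cache-control point, here is a minimal PHP sketch of how a page can opt out of caching entirely; the headers are standard HTTP, and the page itself is just a made-up example:

```php
<?php
// Sketch: mark a dynamic, per-user page as uncacheable so any caching
// layer (Distil, a CDN, a proxy) always passes the request to the origin.
// Static pages simply omit these headers and can be cached normally.
header('Cache-Control: private, no-cache, no-store, must-revalidate');
header('Pragma: no-cache');   // for older HTTP/1.0 caches
header('Expires: 0');

// ...render the user-specific content as usual (example output only)...
echo 'Quotes for visitor at ' . htmlspecialchars($_SERVER['REMOTE_ADDR']);
```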
Clients are able to reduce their infrastructure after migrating onto our platform; however, it is important to note that you cannot downgrade to a $5 shared hosting plan and expect the same results. Distil is able to reduce your server load by 50%-70%, but the remaining 30%-50% will still be handled by your backend, so you need to ensure any hosting you use can still handle that.
Our specialty is dealing with bots and all of our security measures surrounding that protection are automated. Any security concerns outside of that scope will be handled reactively with each individual client.
Our service is constantly adapting to ensure that we provide a holistic solution, and we go far beyond the suggestions mentioned above. Distil is set up to adapt intelligently on its own as it uncovers new bots, and we are also always adding new algorithms to catch bots. I do not want to say we are bot-proof, but we will catch well over 95% of bots and will quickly adapt to catch and stop any new derivatives.
Similar to most other cloud or CDN type services Google Analytics will not be impacted at all.
Amazon offers cloud computing, whereas Distil offers a managed security solution in the cloud. We utilize several cloud providers, including Amazon, for our infrastructure, but what makes Distil unique is the software running on that infrastructure. Amazon simply provides the computing power; we provide the intelligence to catch and stop malicious bots from scraping your website and to ensure your content is protected.
Rami Essaid
www.distil.it -
Greg,
There is only one thing that will help you move forward with your client: rewrite your texts and upgrade or tweak your site for better UX. That way the scraped site will look like a cheap copy. I have done that in the past. I know it's not fair, but that's how you can put this behind you.
PS: Rapid link building to forums and blogs will get one banned.
-
Thank you for the additional details Rami. If you are willing to share further information, I do have a few follow up questions.
-
Do you serve 100% of the content to users? Or do users still visit the site? I am interested to understand how dynamic content would be affected. Will location-based content, where information changes based on a user's IP, still function properly, or are there likely to be issues? Will "fresh" content, such as a new blog article which is receiving many comments or a forum discussion, still function properly?
-
Since you are caching the target site, how much does the target site's own speed optimization still matter? If a client's site is on a shared server vs. a dedicated server, would speed still be a concern?
-
You mentioned dealing with security concerns. Are your actions taken proactively? Or does a client need to recognize there is an issue and contact your company?
-
Specific to the original question asked in this Q&A, can some bots get past your system? Or do you believe it to be bot-proof? I am specifically referring to bad bots, not those of major search engines.
-
How would Google Analytics and other tools which monitor site traffic be impacted by your service? I am trying to determine if your service is a "normal" cloud service or if there are differences.
-
What differences are there between the services you offer and the regular Amazon cloud service?
Thanks again for your time.
-
-
Hi Ryan,
Thanks for catching my typo and your interest. I am happy to answer your questions publicly and will definitely add your questions to the FAQ section we are currently working on.
The company is at distil.it and yes, we are an American company located in San Francisco despite the Italian TLD.
We do not host your files permanently on our servers; instead, our service is layered on top of a standard host. We do, however, cache your content on our edge nodes, exactly like a CDN, to accelerate your site. This feature is already included in the pricing model.
With the enterprise plan we will work with clients to respond to specific threats that an organization may face. This could mean blocking certain countries from accessing your site, blocking certain IP ranges, or dealing with DoS attacks.
Although we can respond to most security concerns, there are still some security threats outside our scope.
Our page optimization and acceleration techniques are recognized by PageSpeed and YSlow, and the results are measurable. In one case study we improved our customer's page load time by 55%. There are still other optimization tricks that we do not handle, such as combining images into CSS sprites or setting browser caching.
We try to accommodate our customers the best we can. Basic redirects like the one you mention would not be hard, and we would happily do this for regular customers within reason.
Pricing for the service is based on bandwidth used and there is no extra cost for storage. For your specific scenario, though, we may not be a complete solution since our service is not currently optimized for video delivery.
Please feel free to ask any additional questions, we are happy to answer and help!
Rami
-
Hi Rami.
Sharing information about a relevant and useful service isn't advertising, it's educational and informative. You could have used a random name and mentioned the service, but you shared the information in a transparent, quality manner and I for one appreciate it.
I believe your signature is missing a character and you meant to use www.distil.it.
After reading about your product, I have some follow up questions. I can send the questions to you privately, but I think others would benefit from the responses, so I will ask here if that is ok. I would humbly suggest adding this information to your site where appropriate, or possibly in a FAQ section. If the information is already on your site and I missed it, I apologize.
-
It sounds like your solution offers cloud hosting. Is that correct? If so, is your hosting complete? In other words, do I maintain my regular web host or is your service in addition to my regular host?
-
It sounds like your Cloud Acceleration service is a CDN. Is that correct? Is this service an extra cost on top of the costs listed on your pricing page?
-
The Enterprise solution offers "Custom Security Algorithms". Can you share more details about what is involved?
-
Would it be fair to say your service handles 100% of security settings?
-
You mentioned caching, compression and minification. Would it be fair to say your service handles 100% of optimization settings? Along these lines, is your solution offered in such a manner to where your results are recognized by PageSpeed and YSlow? I always value results over any tool, but some clients latch onto certain tools and it would offer additional value if the tools recognized the results.
-
While your site ccTLD is .it, the contact number listed on your home page appears to be in the San Francisco area. Are you a US-based company?
-
You mention "the best support in the industry". For your regular (i.e. non-premium
) users, if a non-technical client requested basic changes such as to direct URLs which did not end in a slash to the equivalent URL which did end in a slash throughout their site, do you make these changes for them? How far are you able to assist customers? (I know it's a dangerous question to answer on some levels for you, but inquiring minds would like to know). -
I did not notice any pricing related to space on disk. I have a client who provides many self-hosted videos and the site is 30 GB. Are there any pricing or other issues related to the physical size of a site?
Your solution intrigues me because it addresses a wide array of hosting issues ranging from site speed to security to content scraping. I am anxious to learn more.
-
-
Thanks, Rami.
Your solution and offer are fascinating. And no worries about the shameless plug pitfall.
The issue for me is clients who may not quite fit into the category of being victims of the scraping/complete sleazebag racket.
Rather, they are industry leaders who are often victimized by leading content farms (and you know who I mean!). Some poor schmuck gets 15 bucks for spending 15 minutes lifting our content and paraphrasing it, without attribution or links.
Ironically, said content farms claim to have turned over a new leaf, hired reputable journalists as so-called "editors-in-chief" and now want to "partner" with our leading SMEs.
As they used to say in 19th-century Russian novels, "What is to be done?"
-
hmmm...I like to pick my battles.
Scumbags are scumbags and will always find a way to win in the short term.
I like to live by two things my grandma taught me a long time ago...
"What goes around comes around" and "revenge is a dish best served cold"
As to there being an easy way out - you're an SEO, Ryan! You know the deal.
Sha
-
Hi All,
To follow up on Ryan's last post ("offer an anti-bot copyright protection program"), that is exactly what we have created at Distil. We are the first turnkey cloud solution that safeguards your revenue and reputation by protecting your web content from bots, data mining, and other malicious traffic.
I do not mean to shamelessly advertise but it seems relevant to mention our service. If anyone is interested in testing the solution please feel free to message me and I will be happy to extend a no obligation 30 day trial.
Rami
Founder, CEO
www.distil.it -
Well darn, so there is no easy way out! I think this is a fantastic opportunity for you. You can create Sha Enterprises and offer an anti-bot copyright protection program which would protect sites.
-
Hi Ryan,
In this case Greg already knows the site has been scraped and duplicated. Blocking the scraper and serving the image via the bot-response php script is simply a "gift" to the duplicate site if they return to update their stolen content as they often do.
It is entirely possible to put the solution in place for well-known scrapers such as PageGrabber etc., but there are thousands of them; the people using them can easily change the name when they have been outed, and anyone can write their own.
I understand that everyone wants a "list", but even if you Google "user agent blacklist" and find one, there will be problems. Adding thousands of rules to your .htaccess will eventually cause processing issues, the list will constantly be out of date, etc.
As I explained at the outset, the key is to be aware of what is happening on your server and respond where necessary. Unfortunately, this is not a "set and forget" issue. In my experience though, bots will likely be visible in your logs long before they have scraped your entire site.
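To give a rough idea of what that awareness can look like in practice, here is a deliberately simplified PHP sketch that counts requests per IP in an Apache access log and flags the heaviest hitters; the log path and threshold are placeholders, and a real review would also look at User Agents, bandwidth and timing:

```php
<?php
// Sketch: flag IPs with unusually high request counts in an Apache
// access log (combined format). Path and threshold are examples only.
$logFile   = '/var/log/apache2/access.log';
$threshold = 1000;   // requests that warrant a closer look

$counts = array();
$handle = fopen($logFile, 'r');
if ($handle === false) {
    die("Cannot open $logFile\n");
}
while (($line = fgets($handle)) !== false) {
    $ip = strtok($line, ' ');   // first field is the client IP
    if ($ip !== false && $ip !== '') {
        $counts[$ip] = isset($counts[$ip]) ? $counts[$ip] + 1 : 1;
    }
}
fclose($handle);

arsort($counts);   // busiest IPs first
foreach ($counts as $ip => $hits) {
    if ($hits < $threshold) {
        break;
    }
    echo "$ip\t$hits requests - check its User Agent and reverse DNS\n";
}
```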
Sha
-
Love it!
-
I love the idea if we can figure out a way to get it to work. It would require someone stealing your code, you discovering the theft, putting the steps in place and then the bad site coming back for more.
-
I guess the use of bot-response.php and bot-response.gif is the gentle internet version of a public shaming campaign.
Sometimes it's a matter of picking your battles, but engineering enough of a win to make your client feel better without launching into an all-out war that could end up costing way more than you're willing to pay. :)
Sha
-
I agree you have to be very careful.
I am only suggesting this approach might be considered in certain circumstances.
Public shaming is an intermediate step somewhere between sending a friendly note, a C&D letter, and suing, provided:
- the other company's identity is known
- the other company cares about its reputation
I am not a lawyer. Nor do I play one on the Internet.
The other company might claim "tortious interference" in its business. (That was the claim against CBS in the tobacco case.) But it's a stretch. A truthful story in a mainstream media outlet poses little risk, IMHO. Any competent attorney could make the case that the purpose of the story was to inform the public. As for libel, the other side would have to prove "actual malice" or "reckless disregard for the truth", an almost impossible standard to meet: proving you were lying and knew you were lying.
But who wants to go to court? One company I worked for had copyright infringement issues. Enthusiastic fans were using the name and logo without consent. A friendly email was usually all it took for them to either cease and desist or become official affiliates.
But these were basically good people who infringed out of ignorance.
It's different if you're dealing with dirtbags.
-
I love the idea, but there are two concerns I have about this approach. In order for this to work, the company has to be known. Usually known companies don't participate in content scraping.
Also, if you do launch a successful public shaming campaign, you could possibly open yourself up to legal damages. I know you are thinking "What? They stole from me!" You are taking action with the express purpose of harming another business. You need to be extremely careful.
There have been multiple court cases where a robber successfully sued a home or business owner after being injured during a robbery. Of course we can agree that sounds insane, but it has really happened, and this situation is much more transparent. The other company can claim you stole the content from them, and that you then smeared the company. I can personally attest that civil court cases are not set up so the good guy always wins or so that principles are upheld. Each side makes a legal case, the costs can quickly run into tens of thousands of dollars, and the side with the most money will often win. Be very careful before taking this approach.
-
Thanks. Very helpful.
-
It is a formal legal notification sent to the company involved. I research the site information, contact information, and domain registration information to determine the proper party involved. I send the C&D via registered mail with proof of delivery. After the document has been delivered, I also send it to the site's "Contact Us" address. I take every step reasonably possible to ensure the document is received by the right party within the company, and I can document the date/time of receipt.
The letter provides the following:
-
identifies the company which owns the copyrighted or trademarked material
-
offers a means to contact the copyright and trademark owner
-
states that the copyright / trademark owner has become aware of the infringement
-
provides proof of ownership such as the copyright number, trademark number, etc.
-
identifies the location of the infringing content
-
states that my client has suffered harm as a result of the infringement. "Harm" can range from direct damages, such as decreased sales or decreased website traffic, to potential damage such as confusion in the marketplace.
Once the above points are established, the Cease and Desist demand is made. I also provide a follow up date by which the corrective action needs to be completed. Finally, the specific next steps are covered with the following statement:
"This contact represents our goodwill effort to resolve this matter quickly and decisively. If further action is required, please be advised that 15 U.S.C. 1117(a) sets out the remedies available to the prevailing party in trademark infringement cases. They are: (1) defendant’s profits, (2) any damages sustained by the plaintiff, (3) the costs of the action, and (4) in exceptional cases, reasonable attorney’s fees."
There are a couple additional legal stipulations added as required by US law. The C&D is then signed, dated and delivered.
This letter works in a high percentage of cases. When it fails, a slightly modified version is sent to the web host. If that fails, then the next recourse is to request that Google remove the site or content from its index.
If all else fails, you can sue the offending company. If you do go to court, the fact that you went through the above process and did everything possible to avoid court action will clearly benefit your case. I have never gone to that last step and I am not an attorney, but perhaps Sarah can comment further?
-
-
What does the C&D letter say? What is the threat? All the subsequent steps? Or do you just keep it vague and menacing (e.g. "any and all remedies, including legal remedies")?
-
Excellent answers.
On top of everything else, how about some out-of-the-box thinking: public shaming.
It's a risky strategy, so it needs careful consideration.
But it's pretty clear your client is the victim of dirty pool.
We're talking truth and justice and virtue here, folks. Forces of darkness vs. forces of light.
If I were still a TV news director, and someone on my staff suggested this as a story idea, I'd jump all over it.
And the company that copied the site would not emerge looking good.
-
Hi Ryan,
The major problem is that any experienced programmer can easily write their own script to scrape a site. So there could be thousands of "bad bots" out there that have not been seen before.
There are a few recurring themes that appear amongst suspicious User Agents that are easy to spot - generally anything that has a name including words like grabber, siphon, leech, downloader, extractor, stripper, or sucker, or any name with a bad connotation like reaper, vampire, widow, etc. Some of these guys just can't help themselves!
The most important thing though is to properly identify the ones that are giving you a problem by checking server logs and tracing where they originate from using Roundtrip DNS and WhoIs Lookups.
Matt Cutts wrote a post a long time ago on how to verify Googlebot, and of course the method applies to other search engines as well. The double-check is to then use WhoIs to verify that the IP address you have falls within the IP range assigned to Google (or whichever search engine you are checking).
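For anyone who wants to script that round-trip check, here is a hedged PHP sketch of the idea; the sample IP is just an example from a log, and you would still follow up with the WhoIs confirmation described above:

```php
<?php
// Sketch of a round-trip DNS check on a visitor claiming to be Googlebot:
// 1) reverse-lookup the IP, 2) forward-lookup that hostname,
// 3) the forward result must point back to the same IP.
function looks_like_googlebot($ip)
{
    $host = gethostbyaddr($ip);   // e.g. crawl-66-249-66-1.googlebot.com
    if ($host === false || $host === $ip) {
        return false;             // no reverse record at all
    }
    // Hostname must belong to google.com or googlebot.com
    if (!preg_match('/\.google(bot)?\.com$/i', $host)) {
        return false;
    }
    // Forward-confirm: the name must resolve back to the original IP
    return gethostbyname($host) === $ip;
}

// Example usage with an address pulled from your logs (example value):
$ip = '66.249.66.1';
echo looks_like_googlebot($ip) ? "Probably genuine Googlebot\n"
                               : "Impostor - treat as a bad bot\n";
```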
If you are experienced at reading server logs it becomes fairly easy to spot spikes in hits, bandwidth etc which will alert you to bots. Depending which server stats package you are using, some or all of the bots may already be highlighted for you. Some packages do a much better job than others. Some provide only a limited list.
If you have access to a programmer who is easy to get along with, the best way to get your head around this is to sit down with them for an hour and walk through the process.
Hope that helps,
Sha
PS - I'm starting to think you sleep less than I do!
-
Wow! Amazing information on the bots, Sha. I never knew about this approach. My thinking was just that bad bots would ignore the robots.txt file and there was not much else a site owner could do.
I have to think there are a high number of "bad" bots out there using various names which often change. It also seems likely the IP addresses of these bad bots change frequently. By any chance do you, or anyone else, know of some form of "bad bots" list which is updated?
It seems like too much work for any normal site owner to compile and maintain a list of this nature.
I know...this is a stretch but hey, it doesn't hurt to ask, right?!
-
Hi Greg,
Awesome information there from Ryan!
Implementing the authorship markup is important in that it basically "outs" anyone who has already stolen your content by telling Google that they are not the original author. With authorship markup properly implemented, it really doesn't matter how many duplicates there are out there; Google will always see those sites as imposters, since no one else has the ability to verify their authorship with a link back from your Google profile.
It is possible to block scrapers from your server (blacklist) using IP address or User Agent if you are able to identify them. Identification is not very difficult if you have access to server logs, as there will be a number of clues in the log data. These include excessive hits, bandwidth used, requests for JavaScript and CSS files, and high numbers of 401 (Unauthorized) and 403 (Forbidden) HTTP error codes.
Some scrapers are also easily identifiable by User Agent (name). Once the IP address or user agent is known, instructions can be given to the server to block it and if you wish, to serve content which will identify the site as having been scraped.
If you are not able to specifically identify the bot(s) responsible, it is also possible to use alternatives like whitelisting bots that you know are OK. This needs to be handled carefully, as omissions from the whitelist could mean that you have actually banned bots that you want to crawl the site.
If using a LAMP setup (Apache server), blocking instructions are added to the .htaccess file, with a PHP script handling the response if you want one. For a Windows server, you use a database or text file with FileSystemObject to redirect them to a dead-end page. Ours is a LAMP shop, so I am much more familiar with the .htaccess method.
If using .htaccess, you have the choice of returning a 403 FORBIDDEN HTTP error, or using the bot-response.php script to serve an image which identifies the site as scraped.
If using bot-response.php, the gif image should be made large enough to break the layout of the scraped site if they serve the content somewhere else. Usually a very large gif that reads something like "Content on this page has been scraped from yoursite.com. If you are the webmaster please stop trying to steal our content" will do the job.
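For illustration only, a stripped-down bot-response.php might look something like the sketch below; the User Agent fragments and file names are examples, and the .htaccess rules that send bad bots to this script are a separate step:

```php
<?php
// bot-response.php (sketch): serve a large "this content was scraped"
// image to anything identified as a bad bot, instead of the real page.
// The user-agent fragments and image file name below are examples only.
$badAgents = array('grabber', 'siphon', 'leech', 'extractor', 'httrack');
$userAgent = isset($_SERVER['HTTP_USER_AGENT']) ? strtolower($_SERVER['HTTP_USER_AGENT']) : '';

$isBadBot = false;
foreach ($badAgents as $fragment) {
    if ($userAgent !== '' && strpos($userAgent, $fragment) !== false) {
        $isBadBot = true;
        break;
    }
}

if ($isBadBot) {
    // Option 1: simply refuse the request.
    // http_response_code(403); exit;

    // Option 2: serve the oversized warning gif so it breaks their layout.
    header('Content-Type: image/gif');
    header('Cache-Control: no-cache, no-store, must-revalidate');
    readfile(__DIR__ . '/bot-response.gif');   // "Content scraped from yoursite.com..."
    exit;
}

// Legitimate visitors who land here by mistake get sent home.
header('Location: /');
exit;
```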
There is one VERY BIG note of caution if you are thinking of blocking bots from your server. You really need to be an experienced tech to do this. It is NOT something that should be attempted if you don't understand exactly what you are doing and what precautions need to be taken beforehand. There are two major things to consider:
- You can accidentally block the bots that you want to crawl your site. (Major search engines use many different crawlers to do different jobs; they do not always appear as Googlebot, Slurp, etc.)
- It is possible for people to create fake bots that appear to be legitimate. If you don't identify these you will not solve the scraping problem.
The authenticity of bots can be verified using Roundtrip DNS Lookups and WhoIs Lookups to check the originating domain and IP address range.
It is possible to add a disallow statement for "bad bots" to your robots.txt file, but scrapers will generally ignore robots.txt by default, so this method is not recommended.
Phew! Think that's everything covered.
Hope it helps,
Sha
-
Does the canonical tag work after the fact?
The canonical tag only works if the scraping site is dumb enough or lazy enough not to correct it. Fortunately, this applies in many circumstances.
Also, the scraping might have been a one-time thing, but often they will continue to scrape your site for updates and new content. It depends. If they return for new content, then yes, it would apply.
My suggestion would be to copyright your home page immediately. Additionally, add a new page to your site and copyright it. Then you have two pages on your site which are copyrighted, which offers you a lot more protection than you presently have.
One item I forgot to mention: Google Authorship. Use it.
http://googlewebmastercentral.blogspot.com/2011/06/authorship-markup-and-web-search.html
http://www.google.com/support/webmasters/bin/answer.py?answer=1408986
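If it helps, here is a minimal sketch of what that markup can look like in the head of a PHP-templated page; the profile ID and the way the canonical URL is built are placeholders for your own setup:

```php
<?php
// Sketch only: $canonicalUrl and the Google profile ID are placeholders.
$canonicalUrl = 'https://www.yoursite.com' . strtok($_SERVER['REQUEST_URI'], '?');
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonicalUrl); ?>" />
<link rel="author" href="https://plus.google.com/YOUR-PROFILE-ID" />
```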
-
Thanks - I am going to get started on these. Does the canonical tag work after the fact?
Thanks,
Greg -
Hi Greg.
Having a site scraped is unfortunately common. It is a frustrating experience which takes time and effort to address. Below are some suggestions:
-
Going forward, you can copyright at least some pages within your site. Even if you do not wish to copyright every page, by having some pages copyrighted you will have very clear legal rights if your entire site is scraped.
-
Add the canonical tag to each page, along with various clues throughout the site to indicate it really belongs to you. Generally speaking, these operations are a bit lazy, which is why they steal from others rather than create their own content. If they do not recognize the canonical tag, then you might receive all the SEO credit for the second site, and either way Google will understand to index your site as the primary source of the content.
-
You might rename a random image to mysite.com.jpg, as one suggestion. There are numerous other means by which you can drop indicators that the content is really yours. The reason this step is helpful is that the site which stole your content clearly falls into the no-ethics category. They know what they are doing, have likely used this practice before, and will do so again. As part of the process, they often will deny everything and may even claim you stole the site from them. These clues can assist in proving you are the true owner.
-
You should contact the offending site via registered mail with a "Cease and Desist" notification. Be certain to provide a deadline. I use 10 days as a timeline.
-
If the C&D does not work, contact their web host with a DMCA notice. If the host is reputable, they will honor the DMCA and take down the site. The problem is the host is required to contact the site and share your claim with the site owner. The site owner can respond with a statement saying the content is theirs, and then there is nothing further the host can do UNLESS you have a registered copyright or you have a helpful host who is willing to consider your evidence (i.e. the clues you left) and help you (their non-customer) over their paying customer. Some hosts are good this way.
-
You can always take legal action and sue the website and host in court. Again, the copyright is very important in court as it provides you with a significant advantage. Some sites will actually defend themselves in court with the intention of delaying the trial as long as possible and driving up your expenses to literally tens of thousands of dollars so that you give up.
The above process will work in a lot of cases, but not all. When it doesn't work, you have to take other approaches. Sometimes the site is owned, operated and hosted in a foreign country. Sometimes the country does not have enforceable copyright laws. In these cases, and in the others above, you can file the complaint with Google and they have the ability to remove the offending site from their index.
-