Spam report duplicate images
-
Should I file a spam report if a competitor's site has copied my clinical case images and published them as their own clinical cases?
That site also has no privacy policy, and no medical doctor is credited on those images.
-
If it were me, I would file a spam report and send a DMCA notice to both the host and the contact listed on the site. I would press the issue as hard as I could: basically, contact anyone who will listen and can do something.
-
Have you contacted them to ask them to remove your images - informing them that they don't have permission to use them?
Related Questions
-
Duplicate Content Product Descriptions - Technical List Supplier Gave Us
Hello, our supplier gives us a small paragraph and a list of technical features for our product descriptions. My concern is duplicate content. Here's my current plan:

1. Write as much unique content (rewriting the paragraph and adding to it) as there are words in the technical description list, so the page is half unique content, half duplicate content.
2. Reword the technical descriptions (though this is not always possible).
3. Use a custom H1, title tag, and meta description.

My question is: will the list of technical specifications create a duplicate content issue? In other words, how much unique content does a page need so that a list that is identical across the internet does not hurt us? Or do we need to rewrite every technical list? Thanks.
White Hat / Black Hat SEO | BobGW0 -
Mobile SERP Thumbnail Image Control
Is there any way we can control the image that is selected next to our listings in the mobile SERPs? What Google selects as the mobile SERP thumbnail for a few of our results is not conducive to a high CTR.
White Hat / Black Hat SEO | gray_jedi1 -
Is Syndicated (Duplicate) Content considered Fresh Content?
Hi all, I've been asking quite a few questions lately and sincerely appreciate your feedback. My co-workers and I have been discussing content as an avenue outside of SEO. There are a lot of syndicated content programs/plugins out there (in a lot of cases duplicate) - would this be considered fresh content on an individual domain? An example may clearly show what I'm after: domain1.com is a lawyer in Seattle. domain2.com is a lawyer in New York. Both need content on their websites relating to being a lawyer for Google to understand what each domain is about. Fresh content is also a factor within Google's algorithm (source: http://moz.com/blog/google-fresh-factor), so fresh content is needed on their domains. But if that content is duplicate, does it still hold the same value?

Question: Is fresh content (adding new / updating existing content) still considered "fresh" even if it's duplicated across multiple domains?

Purpose: domain1.com may benefit from a resource for his/her local clientele, as would domain2.com, and both sets of customers would be reading the "duplicate content" for the first time. Therefore, both lawyers would be seen as an authority and improve their websites' rankings. We aren't interested in ranking the individual article, and we are aware of canonical URLs. We aren't implementing this as a strategy - just as a means to really understand content marketing outside of SEO.

Conclusion: IF duplicate content is still considered fresh content on an individual domain, then couldn't duplicate content (that obviously won't rank) still help SEO across a domain? This may sound controversial, and I'd welcome an open-ended discussion with linked sources / case studies. This conversation may tie into another Q&A I posted: http://moz.com/community/q/does-duplicate-content-actually-penalize-a-domain.

TLDR version: Is duplicate content (the same article across multiple domains) considered fresh content on an individual domain? Thanks so much, Cole
White Hat / Black Hat SEO | ColeLusby0 -
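Since the poster mentions being aware of canonical URLs, a cross-domain canonical is the usual way to point Google at the original copy of a syndicated article. This is a minimal, hypothetical sketch - the domain names and path are placeholders, not from the thread:

```html
<!-- Placed in the <head> of the syndicated copy on domain2.com.
     Points Google at the original article on domain1.com so that
     ranking signals consolidate there instead of on the duplicate. -->
<link rel="canonical" href="https://domain1.com/articles/choosing-a-lawyer" />
```

Note that Google treats rel="canonical" as a hint rather than a directive, so it may still choose a different URL as the canonical version.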
Common passwords used for spam accounts?
This is a bit of a longshot. I know that many of the spam forum accounts, blog posts, etc. that have in the past been used for SEO are generated automatically. Does anyone know of any common passwords that are often used when setting up these accounts?

I only ask because, while trying to clean up the backlink profile for my website, I found myself in desperation keying in random passwords, trying to access the spam accounts created on various forums by our former SEO agency. Eventually I got lucky and worked out that the password for a series of forum accounts was, not very imaginatively, 'seo'. Having worked that out, I was able to delete the spam signatures on about 10 forums. But there are many other accounts where I have no idea of the password used.

I guess I'm just wondering if there are standard stock passwords used in the past by many SEOs? Not likely to get an answer to this one, I know, but worth a shot.
White Hat / Black Hat SEO | mgane0 -
Is this duplicate content?
I have an e-commerce website and a separate blog hosted on a different domain. I post an article on the blog weekly, and I copy the first paragraph of the article (sometimes only part of it, when it's too long) to my home page and a sub-catalog page, then append a "...more" anchor text that links to the full article.

1. Is that digest (the first paragraph) on my e-commerce site deemed duplicate content by Google? Any suggestions?
2. If, in the future, I move the blog under the e-commerce website, would that make any difference with regard to this issue?

Thanks for your help!
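As a sketch of the setup this question describes - all URLs, class names, and text here are hypothetical placeholders - the excerpt block on the e-commerce home page might look something like this, with the "...more" link pointing back at the full article on the blog domain:

```html
<!-- Excerpt of the weekly blog post, shown on the e-commerce home page.
     Only the first paragraph is duplicated; the "...more" link sends
     readers (and crawlers) to the full article on the blog domain. -->
<div class="blog-excerpt">
  <h3>This week's article title</h3>
  <p>First paragraph of the article, copied from the blog post…</p>
  <a href="https://blog.example.com/weekly-article">...more</a>
</div>
```

Short excerpts that link out to the full article are a common syndication pattern; whether Google treats them as duplicate content is exactly what the question is asking.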
White Hat / Black Hat SEO | LauraHT0 -
Why would links that were deleted by me 3 months ago still show up in reports?
I inadvertently created a mini link farm some time back by linking all of my parked domains (2,000-plus) to some of my live websites (I was green and didn't think linking between domains with the same owner was an issue). These websites were doing well until Penguin, and although I did not get any 'bad link' notices from Google, I figure I was hit by Penguin.

So about 3 or 4 months ago I painstakingly deleted ALL links from all of those domains that I still own (only 500 or so - the others were allowed to lapse). None of those domains have any outbound links at all, but old links from those domains are still showing up in WMT, in SEOmoz, and in every other link tracking report I have run.

So why would these links still be reported? How long do old links stay in internet archives? This may sound like a strange question, but do links 'remain with a domain for a given period of time regardless'? Are links archived before being 'thrown out' of the web? I know Google keeps archives of data that has expired, been deleted, websites that have closed, etc. for about 3 years or so (?).

In an effort to correct the situation I have spent countless hours manually deleting thousands of links, but they won't go away. Looking for some insight here please. Cheers, Mike
White Hat / Black Hat SEO | shags380 -
How is this achieved - SPAM
Hello everyone. Here's my problem: I just searched for "link inside iframe counts for backlinking?" and at #5 there's a site that caught my attention because of its description snippet: http://www.freelancer.com/job-search/iframe-links-count-backlinks/

This page is totally irrelevant to my query if you take the time to read what's on it, yet it ranks well. It's clever because the page contains all the required elements: one H1 with the keyword in it, a short paragraph under it, similar links (totally irrelevant, though), a selection of people who are supposed to be relevant to my question but are not - all the good stuff.

I looked in the source code and found this:

<link href="http://www.freelancer.com/rss/search.xml?keyword=iframe+links+count+backlinks" rel="alternate" type="application/rss+xml" title="Latest projects">

Please take the time to look at this feed and you'll see something totally wrong here. Could someone please explain how this works? It's total spam, yet they managed to trick the system... Looking forward to hearing your answers. Alex
White Hat / Black Hat SEO | pwpaneuro0 -
Has anyone seen this kind of google cache spam before?
Has anyone seen this kind of 'hack'? When looking at a site recently, I found the Google cache version (from 28 Oct) strewn with mentions of all sorts of dodgy-looking pharma products, but the site itself looked fine. The site is www.istc.org.uk. Looking in the source of the pages you can see the home page contains: Browsing as Googlebot showed me an empty page (though msnbot etc. returned a 'normal', non-pharma page). As a mildly amusing aside - when I tried to tell the ISTC about this, the person answering the phone clearly didn't believe me and couldn't get me off the line fast enough! Needless to say, they haven't fixed it a week after being told.
White Hat / Black Hat SEO | JaspalX0