Weird Cigarette URLs showing up in Google Webmaster Tools
-
Hi there,
I'm noticing a bunch of URLs showing up in my Google Webmaster Tools that are all cigarette-related; they appear as 404s in the crawl error report, which is why they are listed there...
Anyone have any idea what this could be? I recently switched from WordPress to Shopify, and these weird URLs just started appearing in my Webmaster Tools in the last week. Kinda bizarre / a little alarming!
Thanks,
Bianca -
Awesome! Thank you so much for your help. You rock!
-
I would actually just mark them as fixed. 404s are not a big deal nowadays. It's just troublesome sometimes because WMT keeps bugging you about them, especially when you have links pointing to those pages.
-
Actually - one quick question. Should I do anything in webmaster tools - mark them as fixed? Or remove the links manually? Or should I just leave the crawl errors there since they are 404s?
-
This I can do! Thank you for your help.
-
Most likely a breach of your WordPress site: an old, outdated version, vulnerable plugins, or a brute-forced server.
http://web.archive.org/web//http://www.batesmillstore.com/
Filter with "Buy" or "Cigar".
You'll see that it has been there for a while.
Secure your website and server, check your backlinks for cigarette links, and don't worry about those 404s too much.
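For anyone who wants to script the check described above, here's a minimal sketch. It uses the Wayback Machine's publicly documented CDX API to list archived URLs for a domain (the endpoint and parameters are the documented ones; the keywords and sample URLs below are illustrative, taken from this thread), then filters for cigarette-related paths locally:

```python
# Sketch: find cigarette-related URLs among Wayback Machine captures
# of a domain. Building the query and filtering are shown here;
# actually fetching the CDX results requires network access.
from urllib.parse import urlencode

CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"
SPAM_KEYWORDS = ("cigar", "cigarette", "buy")  # terms mentioned in the thread

def cdx_query_url(domain: str) -> str:
    """Build a CDX API URL listing every archived URL under the domain."""
    params = {
        "url": domain,
        "matchType": "domain",   # include subdomains and all paths
        "output": "json",
        "collapse": "urlkey",    # one row per unique URL
        "fl": "original",        # return only the original URL field
    }
    return f"{CDX_ENDPOINT}?{urlencode(params)}"

def spam_urls(archived_urls: list[str]) -> list[str]:
    """Return archived URLs whose path contains any spam keyword."""
    return [u for u in archived_urls
            if any(k in u.lower() for k in SPAM_KEYWORDS)]

# Hypothetical sample of archived URLs for illustration:
sample = [
    "http://www.batesmillstore.com/shop/cable-weave-throw",
    "http://www.batesmillstore.com/buy-cheap-cigarettes-online",
    "http://www.batesmillstore.com/about-us",
]
print(cdx_query_url("batesmillstore.com"))
print(spam_urls(sample))
```

Feeding the real CDX response into `spam_urls` would surface any injected pages the Wayback Machine captured, even ones that no longer resolve on the live site.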
-
Maybe, Ken? Another company had built and hosted the site for us (my first project after coming on board a few months ago was a new website that we'd manage in-house).
So, the odd thing, based on what you were saying, is that the URL the links are shown to be coming from is http://batesmillstore.com/shop/cable-weave-throw (which had been returning a 404 before but is now properly mapped to the right product on the new website).
Thoughts?
-
Hi,
Shot in the dark here, but is it possible your old site was hacked and someone was hosting pages there without your knowledge? You might not have seen them in WMT because they had little traffic then, but now that they're 404s they are front and center.
FYI, my thought process is based on something similar that happened to me. About a year ago I found thousands of links from other sites in my WMT. It turned out that people had copied the layout and graphics of my site, filled the copies with random text, and put them in directories on other sites without the owners knowing. I wound up emailing dozens of site owners and had them removed; they had no idea the pages were there.
Just a thought.
Ken
-
Could you post an example of the URLs you are seeing?
Related Questions
-
After you remove a 301 redirect that Google has processed, will the new URL retain any of the link equity from the old URL?
Let's say you 301 redirect URL A to URL B, and URL A has some backlinks from other sites. Say you left the 301 redirect in place for a year, and Google had already replaced the old URL with the new URL in the SERPs. Would the new URL (B) retain some of the link equity from URL A after the 301 redirect was removed, or does the redirect have to remain in place forever?
Technical SEO | johnwalkersmith
-
Google SERPs Not Showing HTTPS in Front of URL
Hi Everyone,

We implemented the HTTPS change to our four websites about 6 months ago, and I have found something that I feel is strange. The homepage of each website shows www.domain.com, but all the internal pages show https://www.domain.com/page. If you click through, it shows as secure, but because it is happening on all four websites, I feel that something was done incorrectly.

Here is one Google SERP: https://www.google.com/search?client=firefox-b-1&biw=1920&bih=947&ei=gq9GWpizBuuF_Qa_p5e4Bw&q=tanzanite+jewelry+designs&oq=tanzanite+jewelry+designs&gs_l=psy-ab.3..0l2.130446.136028.0.136152.29.17.4.7.9.0.207.2214.7j9j1.17.0....0...1c.1.64.psy-ab..1.28.2350...0i131k1j0i22i30k1.0.BA5-meGmuA0

As you can see, our homepage displays with no HTTPS, but all the internal pages do. It worries me because I have seen our internal pages improving in position, but not our homepage. Any ideas?
Technical SEO | vetofunk
-
Manual Webspam Error. Same Penalty on all sites on Webmaster Tools account.
My URL is: www.ebuzznet.com

Today when I checked Webmaster Tools under the manual spam section, I found a manual action, and the reason given was "Thin content with little or no added value." I then checked the other sites in the same Webmaster Tools account. There are 11 sites, and all of them received the same manual action. I never received any mail, and there is no notification in the site messages section about this manual action.

I just need confirmation: is this an error in Webmaster Tools, or did all of the sites really receive manual spam actions? Most of the articles on the sites are above 500 words and quality content (not spun or copied). Looking for suggestions and answers.
Technical SEO | ndroidgalaxy
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest...

We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.

This is the chain of events:

- Site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
- URL structure and URIs were maintained 100% (which may be a problem, now).
- Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out, and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
- Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing, and it has indexed each root URL once, as it should.

Situation now:

- The site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated.
- I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment).
- I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.

The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows.

Options that occurred to me (other than maybe making our canonical tags bold or locating a Google bug submission form 😄) include:

A) robots.txt-ing ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty).
D) Posting on SEOMoz because I genuinely can't understand this.

Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂

Edited to add: As of this morning, the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Technical SEO | Tinhat
-
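On option C in the question above, listing each indexed URL isn't necessary: a single rewrite rule can 301 every ?ref= URL to its clean equivalent. A minimal sketch, assuming Apache with mod_rewrite enabled and that ref is the only query parameter in play (the trailing `?` discards the whole query string, so adjust if other parameters must be preserved):

```apache
RewriteEngine On
# Match any request whose query string carries a ref= parameter,
# whether it appears first or after another parameter.
RewriteCond %{QUERY_STRING} (^|&)ref= [NC]
# 301 to the same path; the trailing "?" strips the query string
# (on Apache 2.4+ the QSD flag achieves the same thing).
RewriteRule ^(.*)$ /$1? [R=301,L]
```

Combined with the canonical tags already in place, this tells crawlers unambiguously that the parameterised URLs are gone.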
How can I see the SEO progress of a URL? I need to know the progress of a specific landing page of my website. Not a keyword, a URL please. Thanks.
I need to know the SEO evolution of a specific landing page (a URL) of my website. Not a keyword, a URL. Thanks. (I need to know whether it's possible to track the progress of a specific URL in Google's rankings; that is, what SEOmoz does with keywords, but in reverse. I have a specific URL that I want to rank in the top positions of Google, and I want to see how it progresses in response to the changes I apply. Many thanks.)
Technical SEO | online_admiral
-
Meta description showing in source code but not being detected by SEO Moz or other tools?
Hello fellow SEO enthusiasts,

Re: www.appetise.com. Our developers have added a meta description, and I can see it when I right-click on pages to 'view source', as follows:

<meta http-equiv="description" content="Online Takeaway Food with appetise.com. 100's of Local Takeaways Menus Online. Order Take Away Food Online for Delivery. Pay by Card Safely. Including Pizza, Chinese, Indian, Italian, Kebab."/>

BUT, using the on-page SEO assessment tool on SEO Moz (and also other tools which assess title, description and keyword optimisation), we are told that the meta description is not present. Please could someone suggest why? If we can get the meta description picked up, we will reach A Grade for our core pages! And this will make us feel good, and hopefully shine through in our results :-).

Any help greatly appreciated.

Kind Regards,
Richard Best - Appetise.com
Technical SEO | E-resistible
-
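One detail worth checking for anyone hitting the same symptom as the question above: the quoted tag uses the http-equiv attribute, but the meta description that crawlers and SEO tools read is declared with the name attribute. A side-by-side comparison (the content text here is just a placeholder):

```html
<!-- The form search engines and SEO tools parse as the description: -->
<meta name="description" content="Example description text.">

<!-- http-equiv is meant for HTTP-header-like directives (e.g. refresh)
     and is typically ignored by description parsers: -->
<meta http-equiv="description" content="Example description text.">
```

Swapping http-equiv for name is a one-character-class change that usually makes the description visible to tools immediately after a recrawl.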
Sitemap.xml showing up in Google Search
Hello, when I do a Google search, my sitemap.xml shows up for lots of queries. Does anyone have any advice on this? Should I remove the URL in Google Webmaster Tools? Thanks,
Technical SEO | Socialdude
-
Slash at end of URL causing Google crawler problems
Hello, We are having some problems with a few of our pages being crawled by Google and it looks like the slash at the end of the URL is causing the problem. Would appreciate any pointers on this. We have a redirect in place that redirects the "no slash" URL to the "slash" URL for all pages. The obvious solution would be to try turning this off, however, we're unable to figure our where this redirect is coming from. There doesn't appear to be an instruction in our .htaccess file doing this, and we've also tried using "DirectorySlash Off" in the .htaccess file, but that doesn't work either. (if it makes a difference it is a 302 redirect doing this, not a 301) If we can't get the above to work, then the other solution would be to somehow reconfigure the page so that it is recognizable with the slash at the end by Google. However, we're not sure how this would be done. I think the quickest solution would be to turn off the "add slash" redirect. Any ideas on where this command might be hiding, and how to turn it off would be greatly appreciated. Or any tips from people who have had similar crawl problems with google and any workarounds would be great! Thanks!
Technical SEO | | onetwentysix0