Still Going Down In Search
-
After signing up to SEOmoz as a pro user and sorting out everything the search tools flagged on our website (http://www.whosjack.org), we jumped very slightly in search, only to continue going down again.
We are a news-based site; we have no duplicate content, good writers, good organic links, etc. I am currently very close to having to call it a day.
Can anyone suggest anything at all from looking at the site, or recommend a good SEO firm I could talk to who might be able to work out the issue? I am totally at a loss as to what to do now.
Any help or suggestions greatly appreciated.
-
User-agent: Googlebot
Disallow: /*comment-page-1$
-
Thanks Russ. Do you know how I could stop both the comment and article pages from being indexed?
-
I am spidering your site right now to see what issues come up. The first thing I noticed was the large number of subdomains...
It appears you are opening up your site to other bloggers - just be careful with it.
Our spiders found over 25,000 pages in the first few minutes, which seems pretty high for a site your size (although I know you have around 9,000 articles written).
One issue is that an article page and its comment page both tend to get indexed despite having very similar content, e.g. /article/ and /article/comment-page-1/.
There are many great SEO firms listed on the SEOmoz Recommended page.
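Building on the Googlebot rule above, a sketch that covers every paginated comment URL rather than only page 1 (the wildcard pattern assumes WordPress-style /comment-page-N/ URLs):

```
User-agent: Googlebot
Disallow: /*comment-page-*
```

Keep in mind that robots.txt only blocks crawling; a noindex meta tag on the comment pages themselves is the more reliable way to keep them out of the index.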
Related Questions
-
How do internal search results get indexed by Google?
Hi all,

Most of the URLs created by the internal search function of a website or web shop shouldn't be indexed, since they create duplicate content and waste crawl budget. The standard approach is to 'noindex, follow' these pages, or sometimes to use robots.txt to disallow crawling of them.

My first question is how these pages would actually get indexed in the first place if you didn't use one of the options above. Crawlers follow links to index a website's pages. If a random visitor comes to your site and uses the search function, this creates a URL. There are no links leading to this URL, it is not in a sitemap, and it can't be found by navigating the website... so how can search engines index URLs that were generated by an internal search function?

Second question: let's say somebody embeds a link on his website pointing to a URL on your website that was created by an internal search, and assume you used robots.txt to make sure these URLs weren't indexed. This means Google won't even crawl those pages. Is it possible, then, that the link used on the other website will show an empty page after a while, since Google doesn't even crawl this page?

Thanks for your thoughts, guys.
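As an aside, how a Disallow rule matches internal-search URLs can be checked programmatically. A minimal sketch using Python's standard `urllib.robotparser`; the `/search` path and `example.com` URLs are illustrative assumptions, not taken from any real site:

```python
import urllib.robotparser

# Hypothetical robots.txt blocking internal-search URLs
rules = [
    "User-agent: *",
    "Disallow: /search",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# Search-result URLs are blocked from crawling...
print(rp.can_fetch("*", "https://example.com/search?q=shoes"))  # False
# ...while normal pages remain crawlable.
print(rp.can_fetch("*", "https://example.com/product/123"))     # True
```

Note that blocking crawling this way is not the same as blocking indexing: a disallowed URL that accumulates external links can still appear in results without a snippet.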
Intermediate & Advanced SEO | Mat_C0
-
Site dropped in search by 95% after 18-19 Feb
Hi people. I have a client who runs an escort agency website operating in Amsterdam, the Netherlands. Please note that escort agencies are legal, and Holland is a liberal country with permits and all, so I'm just trying to point out that it's a perfectly legal, properly run business.

To the point: we are seeing a 95% drop in traffic (basically back to barely 10 clicks a day) after 18-19 Feb. We inspected the incoming links, and we did see some things going on that were not quite healthy.

First, a banner was bought on a Dutch advertising website. On their end, this sitewide banner got published as a followed link on roughly 132k pages. That was strike 1 (I think). After putting this in disavow on a temporary basis and informing the advertising website to make any sitewide link nofollow in the first place, nothing changed.

We found a second site making the same mistake. Frankly, the banner got exposed on roughly 100k pages, some of which were barely 2 days old. Strike 2. We solved this by putting it in disavow for the time being and politely asking them to make the banner nofollow.

We cleaned out any potentially spammy incoming links using disavow. The data we obtained was a mix of Google Webmaster Tools itself and Moz profiling. However, we're one month further now, and the graph is still a big fat flatline. What is going on?

I've noted that another site, which shares the same brand but is a completely different website / subnet / content and all, has the same thing going on: a huge drop from the 18th of Feb 2019 and no recovery. We cannot see anything 'bad' actually going on in Webmaster Tools, and there is no manual action taken. So we're kind of stuck now on a site that was my project but has now completely fallen into oblivion and is hurting someone's business.

The URL is https://www.qualityescort.nl/ - does anyone have reasonable advice on this issue?
Intermediate & Advanced SEO | Jvanderlinde0
-
Robots.txt Disallowed Pages and Still Indexed
Alright, I am pretty sure I know the answer is "Nothing more I can do here," but I just wanted to double check. It relates to the robots.txt file and that pesky "A description for this result is not available because of this site's robots.txt". Typically people want the URL indexed and the normal meta description displayed, but I don't want the link there at all. I am purposefully trying to use robots.txt to keep that stuff out of there.
Intermediate & Advanced SEO | DRSearchEngOpt
My question is, has anybody tried to get a page taken out of the index and had this happen: URL still there, but with the pesky robots.txt message for the meta description? Were you able to get the URL to no longer show up, or did you just live with it? Thanks folks, you are always great!
-
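For what it's worth, the usual sequence for fully removing such a URL from the index (a general sketch, not tied to any specific site) is to let Google recrawl the page so it can actually see a noindex directive:

```html
<!-- 1. Remove the Disallow rule for this URL from robots.txt so it can be recrawled -->
<!-- 2. Serve a noindex on the page itself: -->
<meta name="robots" content="noindex">
<!-- 3. Once the URL has dropped out of the index, the robots.txt block can be restored -->
```

The counterintuitive part is step 1: while the page is blocked by robots.txt, Google never fetches it, so it never sees the noindex.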
Noindex search pages?
Is it best to noindex search results pages, exclude them using robots.txt, or both?
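A common pattern (a sketch; whether it fits depends on the site) is noindex alone, since a robots.txt block prevents crawlers from ever seeing the noindex tag:

```html
<!-- On internal search-result pages: keep out of the index, still pass link equity -->
<meta name="robots" content="noindex, follow">
<!-- Combining this with a robots.txt Disallow is counterproductive: a blocked page
     is never crawled, so the noindex is never seen. -->
```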
Intermediate & Advanced SEO | YairSpolter0
-
No Results for Google/Bing Keyword Search by Domain Name
My site is bestwebconsult [dot] com. When I do a search for my exact domain name in Google and Bing, it does not appear at all. I have submitted a sitemap to Webmaster Tools. It is a relatively new site, completed within the last month, and built with Joomla. This leads me to believe that something is misconfigured on the website. Please advise, thanks!
Intermediate & Advanced SEO | crave811
-
My site is still out of the rankings
Hello, I have been working on a site for the past 3 months. Here are the problems with this site:

1. It had a forum full of spam, because initially no captcha was included - about 10,000 spam backlinks.
2. The affiliate page was also hit by spam - about 4,000 spam backlinks, which were either non-existent or porn, etc.
3. Too many internal links were indexed; these additional links were generated by tags, ids, filters, etc.

The existing SEO team decided to remove the forum, and after 30 days they blocked it in robots.txt. But within those 30 days the site moved from the 3rd page to nowhere. A few days later, the internal links were also cleaned up by putting the following in robots.txt:

Disallow: /*?
Disallow: /*id
Disallow: /*tag

Links are now cleaning up; all the spam and bad links have been put into a disavow file and sent to Google via the disavow tool. On a daily basis, good-quality links are being built through content, article submission, profile linking, bookmarks, etc.

The site is still nowhere in the top 50 results. Impressions are decreasing, and traffic does not rise much either. How do you see this situation? What do you suggest, and how long do you think it will take to return to the top 10, given that good linking is being done and all preventive measures are being taken? I would appreciate any feedback. Thank you.

Site URL: http://www.creativethemes.net
Keywords: magento themes, magento templates
Intermediate & Advanced SEO | MozAddict0
-
Should product searches (on site searches) be noindex?
We have a large new site that is suffering from a sitewide Panda-like penalty. The site has 200k pages indexed by Google: lots of category and subcategory page content, and about 25% of the product pages have hand-written unique content (vs. the other pages using copied content). So it seems our site is labeled as thin. I'm wondering about using noindex parameters for the internal site search. We have a canonical tag on search results pointing to domain.com/search/ (the client thought that would help), but I'm wondering if we need to just noindex all the product search results. Thoughts?
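One way to noindex internal search results in bulk, without touching page templates, is an X-Robots-Tag response header. A sketch for nginx; the /search/ path and the server choice are assumptions, not taken from the question:

```nginx
# Hypothetical nginx config: mark all internal search-result responses noindex
location /search/ {
    add_header X-Robots-Tag "noindex, follow" always;
}
```

Unlike a robots.txt Disallow, this still lets the pages be crawled, so the noindex can actually take effect.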
Intermediate & Advanced SEO | iAnalyst.com0
-
Do search engines understand special/foreign characters?
We carry a few brands that have special foreign characters, e.g., Kühl, Lolë - but do search engines recognize special Unicode characters? Obviously we would want to spend more energy optimizing keywords that potential customers can type on a keyboard, but is it worthwhile to throw in some encoded keywords and anchor text for people who copy-paste these words into a search? Do search engines typically equate special characters to their closest English equivalent, or are "Kuhl", "Kühl", and an encoded form like "K%C3%BChl" three entirely different terms?
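To illustrate what's involved (a minimal Python sketch; the brand name is from the question, everything else is illustrative), ASCII-folding and URL-encoding produce different strings from the same accented term:

```python
import unicodedata
from urllib.parse import quote

brand = "Kühl"

# Closest English equivalent: decompose (NFD), then drop the combining marks
ascii_fold = unicodedata.normalize("NFD", brand).encode("ascii", "ignore").decode("ascii")
print(ascii_fold)  # Kuhl

# What actually appears in a URL: percent-encoded UTF-8
print(quote(brand))  # K%C3%BChl
```

The strings are distinct byte sequences even though they represent the same word, which is exactly what the question is getting at.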
Intermediate & Advanced SEO | TahoeMountain400