I don't know what happened?!? Help!
-
Up until yesterday I was mainly getting entrances to my site through www.moondoggieinc.com/dog-harness.php, around 60-70 a day.
After yesterday, it's dropped to one or two! I haven't made any major changes to the site. I don't know what happened!
I am starting to feel really discouraged and don't know what to do! I have no clue what my next move should be in trying to get this site working well in organic search. I feel lost now, PLEASE help!
Any suggestions on what to do with http://www.moondoggieinc.com would be a great help!
THANKS!
Kristy O
-
Hi Kristy,
Be sure to check your SEOmoz PMs, I sent you some additional information there.
Keri
-
When I look in Bing Webmaster Tools, I don't see anything matching what they suggested in the parameters section. I don't see any parameters set at all...
It doesn't appear they changed anything, but I might not be looking in the right place...
I attached a screenshot, but I don't think they set any parameters?
Thank you so much!
KO
-
Google and Bing don't talk to each other, so Bing wouldn't know about the parameters you set in Google.
My gut says that maybe Bing chose the wrong things to ignore. I'd look at what they suggested, see if it needs adjusting, and check whether they are ignoring too many things.
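That point can be sketched concretely. The snippet below is a hypothetical illustration (example URLs and parameter names, not the actual Bing feature): an "ignore this parameter" rule is just URL normalization, and ignoring a parameter that actually determines the page content collapses distinct pages into one.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def normalize(url, ignored_params):
    """Drop ignored query parameters, the way an 'Ignore URL Parameters'
    rule collapses URL variants into one canonical form."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in ignored_params]
    return urlunparse(parts._replace(query=urlencode(kept)))

# Ignoring a harmless tracking/session parameter is safe -- both
# variants collapse to the same page, which is the intended behavior:
a = normalize("http://example.com/dog-harness.php?sessionid=123", {"sessionid"})
b = normalize("http://example.com/dog-harness.php?sessionid=456", {"sessionid"})
assert a == b  # -> http://example.com/dog-harness.php

# But ignoring a parameter that selects the content is not safe --
# two different product pages now look identical to the crawler:
c = normalize("http://example.com/products.php?id=7", {"id"})
d = normalize("http://example.com/products.php?id=8", {"id"})
assert c == d  # distinct pages collapse, and some drop out of the index
```

If Bing's suggested list includes a parameter your pages genuinely depend on, that alone could explain a sudden drop in indexed URLs from one engine only.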
-
I just checked Bing Webmaster Tools and found this message from the day I stopped getting hits from Bing and Yahoo.
It was the only message I had in Bing Webmaster Tools:
Query parameters for normalization found on moondoggieinc.com
Site: http://moondoggieinc.com/
Date: 8/26/2012
Priority: Low
Bing has detected new parameters in your URLs. Please visit the Ignore URL Parameters feature in the Configure My Site section of Bing Webmaster Tools to review the suggested parameters. If you are certain that a suggested parameter does not change or determine the page content, you can add it to the list of parameters to ignore by clicking on the suggested parameter. This helps reduce duplicate URLs for the same content and allows Bing to crawl and index more relevant URLs, potentially improving the overall traffic for your site.
I had parameters set in Google for almost 2 months now; are they picking up on that?
-
In that case, I'd verify your site in Bing's Webmaster Center and see if you have any notifications, since it's just Yahoo/Bing (Bing powers Yahoo's index). I am seeing your site, and that page in particular, in Bing, including when I do a search for dog harness.
-
Looking at it, ALL my traffic from Yahoo and Bing went away... I was receiving about 50 hits a day from those engines, and over the past 2 days I have received none.
How can that happen?
-
Hi Kristy,
I'd look at a couple of things. First, have you verified your site in Google Webmaster Tools? If not, do so now. If you have, log into your account and see if you have any messages saying an action was taken with regard to your site.
Second, you haven't made major changes, but even a minor change in the right part of the code could have caused your analytics to stop working.
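One quick way to rule that out is to confirm the tracking snippet still appears in the page source. This is a minimal sketch assuming Google Analytics; the marker strings cover the classic ga.js and Universal Analytics loaders, and the function name is illustrative, not any official API.

```python
# Known Google Analytics loader URLs to look for in the page HTML.
TRACKING_MARKERS = (
    "google-analytics.com/ga.js",         # classic async snippet
    "google-analytics.com/analytics.js",  # Universal Analytics
)

def has_tracking_snippet(html: str) -> bool:
    """Return True if any known analytics loader appears in the page source."""
    return any(marker in html for marker in TRACKING_MARKERS)

page = "<html><head><script src='//www.google-analytics.com/ga.js'></script></head></html>"
print(has_tracking_snippet(page))  # True
```

Run this against the saved HTML of the affected page (view-source or a fetch of the live URL); if the snippet is gone, the "traffic drop" may just be a measurement gap.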
Before we get into full-fledged panic mode, let's do a little diagnosis. Take a deep breath, and go back to your analytics and get a little more information.
- Is it just that one page that suddenly has no entrances?
- Is your site traffic as a whole down?
- For that one page, what was the source of those entrances? Was it a link on another site, and that link has been removed?
- If the source was organic search, has your traffic declined across all search engines, or just one?
If we figure out more exactly what the problem is, we can help you get things fixed back up.
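The triage questions above amount to segmenting entrances by source and date. A minimal sketch with made-up numbers (any analytics export with date, landing page, source, and entrances columns would work the same way):

```python
from collections import defaultdict

# Hypothetical analytics export rows: (date, landing_page, source, entrances)
rows = [
    ("2012-08-25", "/dog-harness.php", "bing / organic",   58),
    ("2012-08-25", "/dog-harness.php", "google / organic", 30),
    ("2012-08-26", "/dog-harness.php", "bing / organic",    1),
    ("2012-08-26", "/dog-harness.php", "google / organic", 29),
]

def entrances_by_source(rows):
    """Sum entrances grouped by (source, date) to localize a drop."""
    totals = defaultdict(int)
    for date, page, source, entrances in rows:
        totals[(source, date)] += entrances
    return dict(totals)

for (source, date), n in sorted(entrances_by_source(rows).items()):
    print(date, source, n)
```

In this made-up data the drop is confined to one engine on one day, which points at an indexing or parameter issue with that engine rather than a site-wide penalty or broken analytics.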