Why is my DA going down despite doing everything right?
-
I have been doing everything I possibly can on my blog: tracking down and deleting broken links, writing for countless other publications to build my link profile, and keeping an easy-to-read site with regularly updated content. Despite this, my DA has gone down by two. It's infuriating and I don't know why it's happening - does anyone have any ideas?
-
A decrease in your Domain Authority (DA) despite your efforts to improve it can be frustrating. Several factors could contribute to this situation:
Algorithm updates: Search engines like Google regularly update their algorithms, which can affect how they evaluate websites and determine DA.
Competitor activity: If your competitors are actively improving their own websites, it could impact your relative position and DA.
Technical issues: Issues with your website's technical performance, such as slow loading times or broken links, can negatively impact your DA.
Content quality: Despite your efforts, if your content quality doesn't meet the standards expected by search engines or your audience, it could affect your DA.
Backlink quality: Low-quality or spammy backlinks pointing to your website can harm your DA. It's essential to focus on acquiring high-quality backlinks from reputable sources.
Changes in search trends: Shifts in user behavior or search trends can impact your website's visibility and, consequently, your DA.
To address this issue, continue focusing on best practices for SEO, regularly monitor your website's performance, and adapt your strategies as needed. Conduct a thorough analysis of your website and its performance metrics to identify any specific areas that may need improvement. Additionally, consider seeking guidance from SEO professionals who can provide personalized advice based on your website's unique situation.
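For the broken-link side of that analysis, here's a minimal command-line sketch (urls.txt is a hypothetical file listing one page URL per line; requires curl):

while read url; do
  # -s silences progress, -o discards the body, -L follows redirects,
  # -w prints just the final HTTP status code
  code=$(curl -s -o /dev/null -L -w "%{http_code}" "$url")
  [ "$code" -ge 400 ] && echo "Broken ($code): $url"
done < urls.txt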
-
This happens when your website's backlinks drop. You need to create new backlinks to boost your Domain Authority. Here's the complete guide for you.
-
First mistake: thinking you're doing everything right.
If your DA is going down, it's because something isn't right. Assuming Google made a mistake rating your page is the wrong way to think about it. The other answer says "SEO is a black box and sometimes weird things happen without explanation". That's bullsh***. Maybe you can't see it, but the explanation is there.
That said, what I can tell you is that maybe you have been using practices considered black hat without knowing it. Pay special attention to external links: used well they can bring you authority, but used badly they can hurt you. Learn when to use dofollow or nofollow and how to use anchor text.
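For reference, the only difference between the two is the rel attribute (example.com is a placeholder):

<a href="https://example.com/">followed link - passes authority</a>
<a href="https://example.com/" rel="nofollow">nofollow link - asks engines not to pass authority</a>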
Check the user experience: your site's loading speed, average bounce rate, average engagement time. All those metrics tell Google whether users are enjoying your stuff, so make sure they're at healthy levels.
Sometimes things outside your control can affect your DA. For example, if you're on shared hosting, maybe one of your "host neighbours" is doing black hat SEO, or running a porn or violent-content site. In that case you all share the same IP, so you can all get penalized. You can check who you're sharing a host with - there are free online tools that tell you in seconds. Another possibility is someone from the competition trying to hurt you on purpose: they can link to you from bad-reputation sites using dofollow links, and that can hurt. Check your incoming links in Search Console - maybe you'll find something suspicious.
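On the shared-host check: a quick way to find the IP you're on is a one-line lookup (yoursite.com is a placeholder); paste the result into any free reverse-IP lookup tool to see your neighbours:

dig +short www.yoursite.com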
Hope it helped
-
Of course, the domain is wandering-everywhere.com. I haven't used any black hat techniques to my knowledge, and don't believe that I have any spammy links. It's so low as it is, and I'm really not sure what more I could be doing.
-
Have any black hat techniques been carried out on your domain? Have you checked for spammy links and used the disavow tool?
Can you share the domain?
DA is a long-term game; I've seen websites go down and then spike back up. SEO is a black box and sometimes weird things happen without explanation.
Related Questions
-
Search Console rejecting XML sitemap files as HTML files, despite them being XML
Hi Moz folks, We have launched an international site that uses subdirectories for regions and have had trouble getting pages outside of USA and Canada indexed. Google Search Console accounts have finally been verified, so we can submit the correct regional sitemap to the relevant Search Console account. However, when submitting non-USA and CA sitemap files (e.g. AU, NZ, UK), we are receiving a submission error that states, "Your Sitemap appears to be an HTML page," despite them being .xml files, e.g. http://www.t2tea.com/en/au/sitemap1_en_AU.xml. Queries on this suggest it's a W3 Total Cache plugin problem, but we aren't using WordPress; the site runs on Demandware. Can anyone guide us on why Google Search Console is rejecting these sitemap files? Page indexation is a real issue. Many thanks in advance!
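One quick diagnostic worth running (a guess at the cause, since the server config isn't visible here): check what Content-Type header the server actually sends for the file, e.g.:

curl -I http://www.t2tea.com/en/au/sitemap1_en_AU.xml

If that returns Content-Type: text/html rather than application/xml or text/xml, the platform is serving the file as HTML and Search Console will reject it regardless of the .xml extension.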
Technical SEO | SearchDeploy
-
How valuable is a link with a DA 82 but a PA of 1?
Our county's website has a news blog, and they want to do an article about an award we won. We're definitely going to do it, and we're happy about the link. However, all the other news articles they have only have a PA of 1. The DA is 82, and the link is completely white hat. It's a government site in our locale; however, with such a terrible PA, I don't think the link is really all that great from an SEO standpoint. Am I right or wrong (or is it some dreadful murky grey area, like everything else in this industry, which I'm thankful to be a part of 🙂)? Thanks so much for any insights! Ruben
Technical SEO | KempRugeLawGroup
-
Disavowing the "right" bad backlinks
Hello, From July to November (this year), I gained 110,000 backlinks. Considering that I'm having trouble ranking well for any keyword in my niche (a niche where I was ranking #1 for several keywords and am now losing ground), I'm starting to believe that negative SEO is affecting me. I have already read several articles about negative SEO, some saying it's a myth, others saying it's alive and kicking... My site is about health and fitness, in Brazilian Portuguese, and there are Polish/Chinese/English warez/Viagra/other-drug pages pointing to my domain, plus massive numbers of comment links on blogs without comment approval. Considering that all these new backlinks are not in my language and are clearly irrelevant, can I disavow them without fear of hurting my SEO even more? Every time you see someone talking about the disavow tool, there's always the same warning: "caution when disavowing a link - you can hurt your site even more by removing a link that, in some way, was helping you". Any help or guidelines on whether I can remove these links safely would be greatly appreciated. Thank you, and sorry for my English (it's not my native language).
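For reference, Google's disavow file is just a plain-text list, one entry per line, with # for comments. A minimal sketch (the domains below are placeholders, not from the question):

# spammy domains found in the backlink audit
domain:spam-example.pl
domain:pills-example.cn
# or individual URLs
http://bad-example.com/comments/page-123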
Technical SEO | broncobr
-
How to fix duplicate content errors with a GoDaddy site
I have a friend who uses a free GoDaddy template for his business website. I ran his site through Moz Crawl Diagnostics, and wow - 395 errors, mostly duplicate content and duplicate page titles. I dug further and found the site was doing this: URL: www.businessname.com/page1.php and the duplicate: businessname.com/page1.php. Essentially, the duplicate is missing the www, and it does this two hundred times. How do I explain to him what is happening?
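The usual fix for www/non-www duplication is a site-wide 301 redirect to the preferred hostname. A minimal sketch, assuming the host allows an .htaccess file with mod_rewrite (not a given on GoDaddy's template hosting; businessname.com stands in for the real domain):

RewriteEngine On
# Redirect any non-www request to the www hostname, preserving the path
RewriteCond %{HTTP_HOST} ^businessname\.com$ [NC]
RewriteRule ^(.*)$ http://www.businessname.com/$1 [R=301,L]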
Technical SEO | cschwartzel
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
This is the chain of events:
1. Site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
2. URL structure and URIs were maintained 100% (which may be a problem, now).
3. Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
4. Ran, not walked, to Google and did some fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI).
5. Checked Bing, and it has indexed each root URL once, as it should.
Situation now: the site no longer uses the ?ref= parameter, although of course some external backlinks that use it still exist. This was intentional and happened when we migrated. I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today is at over 1,000 (another wtf moment). I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page. The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:
A) robots.txt-ing the ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty sitemap penalty).
D) Posting on SEOmoz because I genuinely can't understand this. Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking.
Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting - I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: as of this morning, the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
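On option C: the 301s don't have to be set up per URL - a single rewrite rule can strip the dead parameter site-wide. A rough sketch, assuming Apache with mod_rewrite in front of Drupal and that ref is the only query parameter involved (untested against this setup):

RewriteEngine On
# Only fire when the query string carries the legacy ref parameter
RewriteCond %{QUERY_STRING} (^|&)ref= [NC]
# Redirect to the same path; the trailing ? drops the query string
RewriteRule ^(.*)$ /$1? [R=301,L]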
Technical SEO | Tinhat
-
Target term hits a glass ceiling despite A grade
Greetings from 13 degrees C Wetherby, UK 🙂 I've hit a roadblock in my attempts to get a target term onto page one; below is a URL pointing to a graph illustrating the situation. The target term is on the graph (I'm reluctant to stick it in here in case this page comes up): http://i216.photobucket.com/albums/cc53/zymurgy_bucket/glass-ceiling-office-to-let.jpg
This is what I've done to date for the page http://www.sandersonweatherall.co.uk/office-to-let-leeds/:
1. Ensured the markup follows SEO best practice
2. Internally linked to the page via a scrolling footer
3. Shortened the URL
4. Requested that social media efforts point links to the page
5. Requested additional content
But I wonder... is the reason for hitting a glass ceiling now down to lack of content (i.e. just one page), or is there a deeper issue of an indexing roadblock? Any insights welcome 🙂
Technical SEO | Nightwing
-
Mobile website settings - am I doing it right?
Hi, http://www.schicksal.com has a "normal" and a "mobile" version. We are using a browser-detection routine to redirect the visitor to the "default site" or the "mobile site". The mobile site is here: http://www.schicksal.com/m
The robots.txt contains these lines:
User-agent: *
Allow: /

User-agent: Googlebot
Disallow: /m
Allow: /

User-agent: Googlebot-Mobile
Disallow: /
Allow: /m

Sitemap: http://www.schicksal.com/sitemaps/index
So, the idea is: only allow the Googlebot-Mobile bot to access the mobile site. We also have separate sitemaps for the default and mobile versions (one of the mobile sitemaps is linked here). My problem: Webmaster Tools says that Google received 898 URLs from the mobile sitemap, but none have been indexed. (Google has indexed 550 from the "web sitemap".) I've checked Webmaster Tools - no errors on the sitemap. So, if you search at google.com/m you get results from the default web page, but not the mobile version. This is not that bad, because you will be redirected to the mobile version. So, my question: is this the "normal" behaviour? Or is there something wrong with my config? Would it be better to move the mobile site to a subdomain like m.schicksal.com? Best wishes, Georg.
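A side note on this setup (a sketch, not from the thread): for separate mobile URLs, Google also documents a pair of annotations that make the desktop/mobile relationship explicit, which may help the /m pages get treated correctly. The page paths below are hypothetical:

On the desktop page:
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://www.schicksal.com/m/some-page">

On the corresponding mobile page:
<link rel="canonical" href="http://www.schicksal.com/some-page">

Note this approach assumes Googlebot can crawl /m, which the robots.txt above currently blocks for the main Googlebot.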
Technical SEO | GeorgFranz
-
How best to go about creating an application?
Hi there, I work within the travel sector, and I've had an idea for getting an embeddable application built, which would be of use to my company but also to lots of other companies (our competitors) and general websites in our niche. The idea is that we'd get (and pay for) the application built, and then allow other parties to embed it into their site with a snippet of our code, so we get the link back from them. There are obviously some technical issues here. The app will be built with JavaScript (we can't use PHP on our web server, it's a long story!) and I'd want a way to stop others swiping the code and using it without the link to us. Is this going to be possible? Also, what's going to be the best way to get the link from them? If a competitor used it, they are less likely to do so with our company name plastered all over it, so it would need to be subtle, or an image link, or something. Not sure. Anyone done this sort of thing before? Thanks
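For what it's worth, the common pattern is a two-line embed snippet: a placeholder element plus a script loaded from your own server, which renders the widget and appends the attribution link itself. A minimal sketch (the domain and element id are hypothetical):

<div id="travel-widget"></div>
<script src="https://www.example-travel.com/widget.js" async></script>

Because widget.js is served from your domain, you keep control: you can update it at any time, and it can refuse to render (or fall back to a plain text link) if the attribution link has been stripped from the page or the embedding domain isn't one you recognise.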
Technical SEO | neilpagecruise