Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Unsolved Link Tracking List Error
-
"I have been maintaining 5 directories of backlinks in the 'Link Tracking List' section for several months. However, I am unable to locate any of these links at this time.
Additionally, the link from my MOZ profile is currently broken and redirects to an error page, not to Elche Se Mueve.
Given the premium pricing of MOZ's services, these persistent errors are unacceptable."
-
@Alberto_Diaz said in Link Tracking List Error:
"I have been maintaining 5 directories of backlinks in the 'Link Tracking List' section for several months. However, I am unable to locate any of these links at this time.
Additionally, the link from my MOZ profile is currently broken and redirects to an error page, not to Elche Se Mueve.
Given the premium pricing of MOZ's services, these persistent errors are unacceptable."
If the backlinks in your "Link Tracking List" have disappeared, it could be due to a data refresh or syncing issue on Moz’s end. Try clearing your browser cache or logging in with an incognito window to rule out any display glitches. For the broken link in your Moz profile, check that the URL in your profile settings is entered correctly and that there are no typos or extra spaces. If the issue persists despite the URL being correct, it might be a redirect or server configuration problem on Moz’s end, which their support team would need to fix.
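As a quick way to apply the "check for typos or extra spaces" advice above, here is a minimal, hypothetical Python helper (the function name and the example URL are placeholders, not part of Moz's product):

```python
from urllib.parse import urlparse

def normalize_profile_url(raw: str) -> str:
    """Strip stray whitespace and add a missing scheme, two common causes
    of a profile link that redirects to an error page."""
    url = raw.strip()
    if not urlparse(url).scheme:
        url = "https://" + url
    return url

# A URL pasted with extra spaces and no scheme gets cleaned up:
print(normalize_profile_url("  example.com/my-site "))  # https://example.com/my-site
```

If the cleaned URL still redirects to an error page, the problem is likely on the server side rather than in the profile field itself.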
Related Questions
-
Backlink builder (Professional)
If I wanted to hire a professional to build high-quality backlinks for me, where would I look? I'm asking here because I'd rather go on the recommendation of other business owners than a Google search. Does anyone have a person they can recommend? It's for an insurance agency in Texas. Many thanks
Link Building | laurentjb
-
Unsolved How to add my known backlinks manually to Moz
Hello, I have a cryptocurrency website and I found backlinks listed in my Google Webmasters dashboard, but those backlinks don't show in my Moz dashboard, even after 45 days. So my question is: can I add those backlinks to Moz, just to check my website's real DA score? Thanks
Moz Local | icogems
-
All backlinks not showing on my website
Hey folks, I am fairly new to SEO and I have noticed that tools like Ubersuggest or SEMrush are not showing backlinks I have. My product is lazyapply.com; it's an automated job-search application. Can anyone suggest what I should do? PS: for example, I have a backlink from crunchbase.com, but it is not showing for my website.
Link Building | viveklazy
-
Should I disavow these links?
Hi all, I have a ski website that I am currently performing a toxic backlink audit on. I have noticed that a lot of the links being flagged as toxic/spammy by the tool I am using seem to be the same or similar sites with different URLs. The sites are vaguely related to skiing (relating to helicopter travel options for travelling to ski resorts), but it concerns me that there are so many of them and that they are being flagged as so toxic.
Do you think it is worth disavowing these? Or contacting the owner to ask them to remove the links? I have included an example of some of the links below.
https://www.cannes-helicopters.co.uk/index.php?menuopen=21&showcontent=5
https://nice-helicopter.co.uk/index.php?menuopen=21&showcontent=5
https://monaco-helicopter.co.uk/index.php?menuopen=21&showcontent=5
Slightly different site but same favicon:
https://www.whitetracks-holidays.com/Helicopter_Transfers_Villars_Switzerland.htm
Thanks in advance for any advice / help!
Link Building | SolveWebMedia
-
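For reference, if disavowing is the chosen route, Google accepts a plain-text disavow file with one full URL or one `domain:` entry per line. A short sketch (assuming domain-level disavowal of the example sites listed above) that builds such entries:

```python
from urllib.parse import urlparse

def disavow_domains(urls):
    """Build deduplicated, order-preserving domain-level disavow entries."""
    seen = []
    for u in urls:
        host = (urlparse(u).hostname or "").removeprefix("www.")
        entry = f"domain:{host}"
        if entry not in seen:
            seen.append(entry)
    return seen

urls = [
    "https://www.cannes-helicopters.co.uk/index.php?menuopen=21&showcontent=5",
    "https://nice-helicopter.co.uk/index.php?menuopen=21&showcontent=5",
    "https://monaco-helicopter.co.uk/index.php?menuopen=21&showcontent=5",
]
print("\n".join(disavow_domains(urls)))
```

An entry like `domain:nice-helicopter.co.uk` disavows every link from that host, which suits the "same site, different URLs" pattern described above.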
Tumblr and Link Equity
Hi Moz Community, I've recently decided to start a project where I gather 1,000 great examples of something that's searched for often, and am thinking that posting it to a Tumblr site, like the following website did, could be a great way to pass link equity back to my main site (with a little "site by [my site]" somewhere in the header or footer). While I was super pumped about this idea today, and have now gathered almost 500 of my examples (mentioned above), I am not seeing link equity passed from this site, even on the non-redirected links here: http://gothamlogos.tumblr.com/ Anyone have any experience with projects like this? I've read the Moz Tumblr and SEO article from a few years ago, which makes it seem like this should be an SEO "win"... But using the Moz Pro account tools, I'm not seeing any of these non-redirect (ordinary) links giving any value to anyone on this example site. Thanks so much in advance, Zack
Moz Pro | Zack
-
Htaccess and robots.txt and 902 error
Hi, this is my first question in here; I truly hope someone will be able to help. It's quite a detailed problem and I'd love to be able to fix it through your kind help. It regards htaccess files, robots.txt files, and 902 errors.

In October I created a WordPress website from what was previously a non-WordPress site; it was quite dated. I had built the new site on a subdomain I created on the existing site, so that the live site could remain live whilst I worked on the subdomain. The site I built on the subdomain is now live, but I am concerned about the existence of the old htaccess and robots.txt files, and wonder if I should just delete the old ones to leave just the new ones on the new site. I created new htaccess and robots.txt files on the new site and have left the old htaccess files there. Just to mention that all the old content files are still sat on the server under a folder called 'old files', so I am assuming that these aren't affecting matters. I access the htaccess and robots.txt files by clicking on 'public html' via FTP.

I did a Moz crawl and was astonished to see a 902 network error saying that it wasn't possible to crawl the site, but then I was alerted by Moz later on to say that the report was ready. I see 641 crawl errors (449 medium priority | 192 high priority | zero low priority). Please see attached image. Each of the errors seems to have status code 200; this seems to apply mainly to the images on each of the pages, e.g. domain.com/imagename. The new website is built around the 907 Theme, which has some page sections on the home page, and parallax sections on the home page and throughout the site. To my knowledge the content and the images on the pages are not duplicated, because I have made each page as unique and original as possible. The report says 190 pages have been duplicated, so I have no clue how this can be or how to approach fixing this.

Since October when the new site was launched, approx 50% of incoming traffic has dropped off at the home page and that is still the case, but the site still continues to get new traffic according to Google Analytics statistics. However Bing, Yahoo, and Google show a low level of indexing and exposure, which may be indicative of the search engines having difficulty crawling the site. In Google Webmaster Tools, the screen text reports no crawl errors. W3TC is a WordPress caching plugin which I installed just a few days ago to improve page speed, so I am not querying anything here about W3TC unless someone spots that this might be a problem, but like I said there have been problems re traffic dropping off when visitors arrive on the home page. The Yoast SEO plugin is being used. I have included information about the htaccess and robots.txt files below. The pages on the subdomain are pointing to the live domain, as has been explained to me by the person who did the site migration. I'd like the site to be free from pages and files that shouldn't be there, and I feel that the site needs a clean-up, as well as knowing if the robots.txt and htaccess files that are included in the old site should actually be there or if they should be deleted.

OK, here goes with the information in the files. Site 1) refers to the current website. Site 2) refers to the subdomain. Site 3) refers to the folder that contains all the old files from the old non-WordPress file structure.

**************** 1) htaccess on the current site: *********************

# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
  <IfModule mod_headers.c>
    Header append Vary User-Agent env=!dont-vary
  </IfModule>
  <IfModule mod_filter.c>
    AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
    <IfModule mod_mime.c>
      # DEFLATE by extension
      AddOutputFilter DEFLATE js css htm html xml
    </IfModule>
  </IfModule>
</IfModule>
# END W3TC Browser Cache

# BEGIN W3TC CDN
<FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css)$">
  <IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "*"
  </IfModule>
</FilesMatch>
# END W3TC CDN

# BEGIN W3TC Page Cache core
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteBase /
  RewriteCond %{HTTP:Accept-Encoding} gzip
  RewriteRule .* - [E=W3TC_ENC:_gzip]
  RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
  RewriteRule .* - [E=W3TC_PREVIEW:_preview]
  RewriteCond %{REQUEST_METHOD} !=POST
  RewriteCond %{QUERY_STRING} =""
  RewriteCond %{REQUEST_URI} /$
  RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
  RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" -f
  RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" [L]
</IfModule>
# END W3TC Page Cache core

# BEGIN WordPress
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteBase /
  RewriteRule ^index.php$ - [L]
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteRule . /index.php [L]
</IfModule>
# END WordPress

# ...(I have 7 301 redirects in place for old page URLs to link to new page URLs)...

# Force non-www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC]
RewriteRule ^(.*)$ http://domain.co.uk/$1 [L,R=301]

**************** 1) robots.txt on the current site: *********************

User-agent: *
Disallow:
Sitemap: http://domain.co.uk/sitemap_index.xml

**************** 2) htaccess in the subdomain folder: *********************

# Switch rewrite engine off in case this was installed under HostPay.
RewriteEngine Off
SetEnv DEFAULT_PHP_VERSION 53
DirectoryIndex index.cgi index.php

# BEGIN WordPress
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteBase /WPnewsiteDee/
  RewriteRule ^index.php$ - [L]
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteRule . /subdomain/index.php [L]
</IfModule>
# END WordPress

**************** 2) robots.txt in the subdomain folder: *********************

(this robots.txt file is empty)

**************** 3) htaccess in the Old Site folder: *********************

Deny from all

**************** 3) robots.txt in the Old Site folder: *********************

User-agent: *
Disallow: /

I have tried to be thorough, so please excuse the length of my message here. I really hope one of you great people in the Moz community can help me with a solution. I have SEO knowledge and I love SEO, but I have not come across this before and I really don't know where to start with this one. Best regards to you all, and thank you for reading this.

Attachment: moz-site-crawl-report-image_zpsirfaelgm.jpg
Moz Pro | SEOguy1
-
Automatically Check List of Sites For Links To Specific Domain
Hi all, Can anyone recommend a tool that will allow me to put in a list of about 200 domains that are then checked for a link back to a specific domain? I know I can do various link searches and use the Google site: command on a site-by-site basis, but it would be much quicker if there was a tool that could take the list of domains I am expecting a link on and then find out if that link exists, and if so on what page etc. Hope this makes sense, otherwise I have to spend a day doing it by hand - not fun! Thanks,
Charles
Moz Pro | MrFrisbee
-
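No specific tool endorsement here, but the core check is simple enough to script. A minimal sketch, assuming each domain's homepage HTML has already been fetched (e.g. with urllib or requests); the function below only does the HTML scan, not the fetching:

```python
import re

def links_to(html: str, target_domain: str) -> bool:
    """Return True if any <a href> in the given HTML points at target_domain."""
    hrefs = re.findall(r'<a[^>]+href=["\']([^"\']+)["\']', html, re.IGNORECASE)
    return any(target_domain in h for h in hrefs)

page = '<p>Partners: <a href="https://example.com/page">Example</a></p>'
print(links_to(page, "example.com"))  # True
print(links_to(page, "other.org"))    # False
```

Looping this over the 200 fetched homepages, and recording which page matched, gives the requested report.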
Checking multiple keywords in Rank tracking
Besides the rankings in the campaigns, I want to check 100+ keywords at once in rank tracking. Is this possible, and if not, why? It says I can check 400 keywords a day, but manually entering them is time-consuming, and that's exactly why I use SEOmoz: to save time.
Moz Pro | FindFactory
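One workaround consistent with the stated 400-keywords-per-day limit is to pre-split the keyword list into daily batches rather than entering terms one by one. A hypothetical sketch:

```python
def daily_batches(keywords, per_day=400):
    """Split a keyword list into consecutive batches of at most per_day items,
    one batch per day of checking."""
    return [keywords[i:i + per_day] for i in range(0, len(keywords), per_day)]

kws = [f"keyword {n}" for n in range(1000)]
batches = daily_batches(kws)
print(len(batches), [len(b) for b in batches])  # 3 [400, 400, 200]
```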