Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.

Best posts made by Travis_Bailey
-
RE: What to do with removed pages and 404 error
A 410 is the recommended way to tell search engines the page is gone for good. Everything mentioned above is a facet of how you should deal with this issue. Sorry for the brevity and terrible punctuation - the Moz forum is a pretty iffy thing via mobile, and my eggs are getting cold.
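If the site happens to run on Apache, returning a 410 can be as simple as a one-line .htaccess rule. A minimal sketch, assuming mod_alias is available - the path below is a placeholder, not from the original question:
# .htaccess - return 410 Gone for a removed page
# /some-removed-page/ is hypothetical; substitute the real path
Redirect gone /some-removed-page/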
-
RE: Pages are Indexed but not Cached by Google. Why?
For starters, the robots.txt file is blocking all search engine bots. Secondly, I was just taking a look at the live site and I received a message that stated something like: "This IP has been blocked for today due to activity similar to bots." I had only visited two or three pages and the cached home page.
Suffice it to say, you need to remove the User-agent: * Disallow: / directive from robots.txt and find a better way to handle potentially malicious bots. Otherwise, you're going to have a bad time.
My guess is the robots.txt file was pushed from dev to production and no one edited it. As for the IP blocking script, I'm Paul and that's between y'all. But either fix it or remove it. You also don't want blank/useless robots.txt directives. Only block those files and directories you need to block.
Best of luck.
Here's your current robots.txt entries:
User-agent: googlebot
Disallow:
User-agent: bingbot
Disallow:
User-agent: rogerbot
Disallow:
User-agent: sitelock
Disallow:
User-agent: Yahoo!
Disallow:
User-agent: msnbot
Disallow:
User-agent: Facebook
Disallow:
User-agent: hubspot
Disallow:
User-agent: metatagrobot
Disallow:
User-agent: *
Disallow: /
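A tightened file, just as a sketch, might look something like this - the /admin/ path is purely hypothetical, to show the shape:
# Only disallow the paths you genuinely need to keep legit crawlers out of
User-agent: *
Disallow: /admin/
# Bad bots ignore robots.txt anyway; block those at the server level instead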
-
RE: Difference in using dividers in TITLE TAG
The problem with figuring out the benefit of these minute changes is that thousands of other changes have occurred at the same time, many of them out of our control. So while you were experimenting with separators, another webmaster might have made a change that caused a competing page or pages to drop in the rankings. And that's just one example.
It would make for an interesting outlying case if your site/pages were positioned just so that one weird, seemingly innocuous, tweak made the difference.
If you've seen the first film in the Star Trek reboot, you might remember how Scotty figured out his transporter problem. Well, actually Future Spock just gave it to him - since Future Scotty was going to fix the problem anyway. But upon looking at the calculations, Scotty realized that space itself was also moving. Which was why his previous experiments were a disaster.
What I'm saying, aside from 'I'm a dork', is that nothing exists in a vacuum. It's very hard to determine whether a tiny change had a positive effect; there are so many external factors and moving parts. Though it would help reduce uncertainty if you didn't change anything else.
-
RE: Pages are Indexed but not Cached by Google. Why?
No worries, I'm not frustrated at all.
I usually take my first couple of passes at a site in Chrome Incognito. I had sent a request via Screaming Frog. I didn't spoof the user agent or set it to allow cookies, so that may have been 'suspicious' enough from one IP in a short amount of time. You can easily find the Screaming Frog user agent in your logs.
Every once in a while I'll manage to be incorrect about something I should have known. The robots.txt file isn't necessarily improperly configured; it's just not how I would have handled it. Googlebot, at least, would ignore the directive since there isn't any path specified. A bad bot doesn't necessarily obey robots.txt directives, so I would only disallow all user agents from the few files and directories I don't want crawled by legit bots. I would then block any bad bots at the server level.
But for some reason I had it in my head that robots.txt worked something like a filter, where the scary wildcard and slash trump previous instructions. So, I was wrong about that - and now I finally deserve my ice cream. How I went this long without knowing otherwise is beyond me. At least a couple productive things came out of it... which is why I'm here.
So, while I'm busy totally screwing up, I figured I would ask when the page was first published and submitted to search engines. When did that happen?
Since I'm a glutton for punishment, I also grabbed another IP and proceeded to spoof googlebot. Even though my crawler managed to scrape metadata from 60+ pages before the IP was blocked, it never managed to crawl the CSS or JavaScript. That's a little odd to me.
I also noticed some noindex meta tags, which isn't terrible, but could a noarchive directive have made it into the head of one or more pages? Just thought about that after the fact. Anyway, I think it's time to go back to sleep.
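(For reference, before I doze off: a noarchive directive in the head looks like this - the generic form, not something pulled from the site.)
<meta name="robots" content="noarchive">
If that tag is present, Google will index the page but won't keep a cached copy, which would square with pages being indexed but not cached.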
-
RE: Difference in using dividers in TITLE TAG
I would say more links and new/refreshed content would definitely have more to do with it than the presence or absence of a pipe.
With a pipe, the title will be slightly longer. The example titles you gave seem on the long side, though no longer than most in that industry.
-
RE: Unique page URLs and SEO titles
I've already crawled the site. Quite a few titles appear to be rather long, and most of them use "San Francisco Video Production."
Only include the keyword in the title if that's what the page is about. Having the keyword in the slug is a good idea; again, just don't go overboard. One page should suffice, but that alone won't guarantee the first page (no one and nothing guarantees first-page results).
You have to look at domain age as well. Does one of your competitors have a couple of years on you? Do they have ten years on you? That's a big thing.
Where are your competitors getting links? Are those links good? Will they drive traffic? That's another concern.
I like to put page speed/usability in the first order, but many people say it's second order. Does your site load well and fast? What can you do to reduce page load time?
These are some really basic things you have to consider. If you've answered them properly, your situation should improve.
-
RE: Wikipedia links - any value?
In the olden days, before search engines, our elders judged links based upon the traffic they would send. You have to consider that someone is going to click on that link. Maybe that sets the site up as an authority in one person's mind. Eventually they will run into other people who are like-minded.
Maybe these people go out and publish something, with followed links, from somewhere pretty nice. It may be a long shot, but Wikipedia tends to rank well for informational queries. The links that may follow would help later.
You have content on a site with pretty high visibility. I would ask you, how is this a bad thing?
-
RE: Backlink Query. Unranked pages of High Ranking sites.
I can expand a little bit upon what Silkstream has said.
If there are trade associations relevant to your site, make those a first priority. They tend to be easier if you're in with the crowd. Plus, your competitors are likely to link there. It's a way of getting an indirect link from your competitor. Though you can apply this to other situations. Use your imagination, brochacho.
A lot of webmasters make the mistake of thinking that no one is interested in them. That is untrue. There's an audience for belly button lint.
Find your audience. Learn everything about your audience. Work it.
That approach is far better than worrying about PR. Worrying about the PR of a site gets you into bad territory. That's when you start considering links from easylinkseodirectory4ulol.com. That would be a bad place to be in general, and an even worse place to start.
Use the following search operator:
related:google.com
Replace google.com with your site, a competitor, or a site that didn't respond to your outreach. Plenty of fish in the sea. Just don't suck at outreach. Be real, be authentic - don't mention a link upfront, at all.
That shouldn't be your goal anyway. Your goal should be exposure through good things that you give others.
-
RE: My Website No Longer Appears in Mobile Google Search but Does in Desktop...Why Is This?
I'm laboring under the assumption that you mean the site you listed in your profile. If that is the case, based upon what I've seen, the site isn't as mobile-friendly as it could be. I would recommend looking into responsive design.
What has likely happened is that your content was good enough for mobile queries up until recently, and competitors are now serving content that's 'mobile-friendlier' than yours in some way. That could mean anything from faster page load times to dedicated adaptive or responsive sites crowding the SERPs.
However, there are more pressing concerns that can be remedied now. Both the www and non-www versions of the site are thoroughly indexed, which is a duplicate content problem. If there's been a change to the .htaccess file, it would be a good idea to revisit it - the sooner the better.
Also, my crawler is picking up some 500 errors in the tag and category folders. Finally, it appears there is some weirdness happening with the /log-out page. It would be best to noindex/nofollow that page and/or block it with robots.txt. It appears to be sapping crawl budget.
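A minimal sketch of both fixes, assuming Apache with mod_rewrite and assuming the www/https version is the one you want to keep (pick whichever version is actually canonical):
# .htaccess - 301 the non-www host to www
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L]

# robots.txt - keep crawlers off the log-out page
User-agent: *
Disallow: /log-out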
So I suppose:
- Look into the duplicate content issue.
- Fix crawlability issues.
If you would like, I can share a Google Sheets copy of the crawl - free of charge. Just message me from my profile. Hopefully this has been helpful.