Wrong Page Indexing in SERPs - Suggestions?
-
Hey Moz'ers!
I have a quick question. Our company (Savvy Panda) is working on ranking for the keyword: "Milwaukee SEO".
On our website, we have a page for "Milwaukee SEO" in our services section that's optimized for the keyword, and we've been building links to it. However, when you search for "Milwaukee SEO", a different page is displayed in the SERPs.
The page that's showing up in the SERPs is a category view of our blog listing articles tagged "Milwaukee SEO".
**Is there a way to alert Google that the page showing up in the SERPs is not the most relevant, and request a different URL to be indexed for that spot?**
I saw a webinar a while back that showed something like that using the sitelinks demotion tool in Google Webmaster Tools.
I would hate to demote that URL and then lose any kind of indexing for the keyword.
Ideas, suggestions?
-
I'm not sure how many of your /tag/ pages are ranking, but if you can figure that part out, you can try doing .htaccess 301 redirects for specific URLs. For example:
Redirect 301 /tag/Milwaukee-SEO.html http://savvypanda.com/services/milwaukee-seo.html
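If it turns out several /tag/ URLs are ranking and you want all of them to point at the services page, a pattern-based rule saves you from listing each one. This is just a rough sketch, assuming Apache's mod_alias is available and that those URLs all live under /tag/ (your Joomla SEF settings may change the actual paths):

```apache
# Assumes mod_alias; matches anything under /tag/ and 301s it to the services page
RedirectMatch 301 ^/tag/ http://savvypanda.com/services/milwaukee-seo.html
```

Redirecting every tag page to one landing page is a blunt instrument, though, so only do it if you're sure those listings shouldn't exist on their own.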
If you need further help with .htaccess and Joomla, I'm pretty well-rounded with my skills. We use Joomla for the majority of our clients (followed by WordPress).
-
I'm cool with not having them indexed; I'm just worried that if I demote or block the /tag/ pages from being indexed, we'll lose ranking for those keywords.
Right now the /tag/ URL is ranking fairly well.
-
I personally would not bother indexing the /tag/ pages, since from what I could tell with a quick look, all of that content already exists at its own "permalink" elsewhere on your site.
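If you do decide to keep them out of the index, a meta robots tag on the tag/category template is the usual way to do it without losing the link equity those pages pass along. Just a sketch; where exactly this goes depends on your Joomla template:

```html
<!-- In the <head> of the /tag/ listing template -->
<!-- noindex drops the page from the SERPs; follow lets crawlers keep following its links -->
<meta name="robots" content="noindex, follow">
```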
-
Hey Dan,
You caught on to the big problem we're correcting now. It's the way our tagging system works in our blog... it's causing all kinds of duplicate content errors. We're changing tagging systems to help with this problem. So I plan on doing that first, but do you have any ideas on how to correct the /tag/ URL that's being indexed instead of our "Milwaukee SEO" services page?
-
I see your /tag/ listing is showing up in the SERPs. I also noticed you have duplicate content issues on your website.
See this for an example:
I'd consider fixing the duplicate content issue first; that is definitely a major problem and is probably affecting a lot of other landing pages. Fixing it might also fix the original problem you posted about.
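For the duplicate content itself, the standard fix is a rel=canonical tag on each duplicate URL pointing at the version you want indexed. A sketch only; the article URL below is a hypothetical placeholder:

```html
<!-- In the <head> of a duplicate URL for an article -->
<!-- The href is a placeholder; point it at the article's preferred permalink -->
<link rel="canonical" href="http://savvypanda.com/blog/example-article.html">
```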
-
I believe you are referring to Google's robots.txt handling, which is designed to have Google skip a page while crawling. I don't think you want to do this. However, I checked the backlinks (anchor text to your site) and it seems like you have not built any incoming links using your keyword "Milwaukee SEO". I would recommend building some good links using "Milwaukee SEO" as the anchor text.
Your link code should look something like this: <a href="http://savvypanda.com/services/milwaukee-seo.html">Milwaukee SEO</a>
Post this on a few local sites. Since you are a web design company as well, you can include that link in some of your local sites' footers. :) Good luck.