Metadata configured, but Google only shows URL with sitelinks. Something wrong with my code?
-
Hi guys!
I have a metadata problem with my home page. If I search for the brand's keyword, the SERPs don't show the metadata I configured; instead they show the bare URL with sitelinks. If I use the "site:" command, the home page doesn't appear at all. This happens only on the home page; the rest of the site (roughly 700 pages) appears fine.
I already have a meta title and meta description configured, and both include the keyword mentioned above. The page used to appear correctly before, GSC shows it as indexed, and most audit tools (configured to crawl JS) detect the metadata; Moz's On-Page tool doesn't. Could it be because of the JS configuration, or am I missing something else? Here's the meta description code: What do you think? I'd appreciate your input. Thanks!
-
Google can ignore any given metadata whenever and however it wants. I've seen it numerous times: a manually set title and description gets thrown overboard, and Google simply selects on-page content instead, as in this case.
Don't stare yourself blind on it; make sure your content is relevant for the terms you wanted to be found for in the first place. The rest will pretty much follow.
-
It would help if you shared your website URL.
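In the meantime, one thing worth checking is whether the title and description exist in the raw HTML at all, or are only injected by JavaScript: crawlers that don't render JS (which may be why Moz's On-Page tool misses them) only see the raw source. A minimal sketch in Python using only the standard library (the sample HTML below is made up, since the actual site wasn't shared):

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collects <title> text and <meta name="description"> from raw HTML."""
    def __init__(self):
        super().__init__()
        self.title = None
        self.description = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

def extract_metadata(raw_html):
    """Return (title, description) as seen WITHOUT executing JavaScript."""
    parser = MetaExtractor()
    parser.feed(raw_html)
    return parser.title, parser.description
```

If this returns `None` for the page's source (e.g. fetched with `curl`), the metadata is JS-injected, and any crawler that doesn't render JS will miss it.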
Related Questions
-
Vanity URLs are being indexed in Google
We are currently using vanity URLs to track offline marketing. A vanity URL is structured as www.clientdomain.com/publication, and this URL is then 302-redirected to the actual URL on the website, not a custom landing page. The resulting redirected URL looks like: www.clientdomain.com/xyzpage?utm_source=print&utm_medium=print&utm_campaign=printcampaign. We have started to notice that some of the vanity URLs are being indexed in Google search. To prevent this from happening, should we be using a 301 redirect instead of a 302, and will the Google index ignore the utm parameters in the URL being 301-redirected to? If not, any suggestions on how to handle it? Thanks,
Technical SEO | seogirl221
-
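A 301 does pass signals where a 302 may not, but Google can still index the utm_ variants it discovers; a common belt-and-braces step is to also point rel=canonical on the destination page at the parameter-free URL. A hedged sketch of that canonicalisation step (the example URL is the one from the question):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_tracking_params(url):
    """Return the URL with utm_* query parameters removed, i.e. the
    version you'd want indexed and referenced in rel=canonical."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.lower().startswith("utm_")]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))
```

Non-tracking parameters survive untouched, so the same helper works on URLs that mix content parameters with campaign tags.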
Redirect 302 status code to 301 status code
Dear All, according to Moz's crawl report, our site (www.rijwielcashencarry.n) has a few medium-priority problems: there are 302 temporary redirects which I would like to change to 301s (because of the link juice). What is the proper way to do this? I keep looking for it, but I can't seem to find the right solution. Thanks for your help!
Technical SEO | rijwielcashencarry040
-
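How you switch 302s to 301s depends on where the redirects are issued (Apache, nginx, or the application itself), but the idea is always the same: a lookup of moved paths that answers with a permanent status. An illustrative sketch with hypothetical paths, not a drop-in fix for any particular server:

```python
# Hypothetical table of moved paths. A 301 tells search engines the move
# is permanent (so link equity consolidates); a 302 says it is temporary.
MOVED = {
    "/old-category/": "/new-category/",
    "/old-product/": "/new-product/",
}

def permanent_redirect(path):
    """Return (status_line, location) for a moved path, or None if the
    path needs no redirect and should be served normally."""
    target = MOVED.get(path)
    if target is None:
        return None
    return ("301 Moved Permanently", target)
```

In a real deployment the same mapping would typically live in the web-server config rather than application code.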
How can Google interpret all "hreflang" links in the HTML code?
I've found the solution: the problem was that I had not put the closing tag into the HTML code.
Technical SEO | Red_educativa0
-
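For anyone hitting the same thing: hreflang annotations only work if the head markup is well formed, so generating the link tags programmatically avoids hand-typed tag mistakes like a missing close. A small sketch (the language codes and URLs are invented for illustration):

```python
def hreflang_links(alternates):
    """Render self-closing <link rel="alternate" hreflang=...> tags for
    the document <head>. `alternates` maps language codes to URLs."""
    lines = ['<link rel="alternate" hreflang="{}" href="{}" />'.format(code, url)
             for code, url in sorted(alternates.items())]
    return "\n".join(lines)
```

Because each tag is emitted self-closed and complete, a parser never has to guess where one annotation ends and the next begins.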
What am i doing wrong?
Hi, we are trying to get http://www.stainless-steel-fabricators.net/ onto page 1 in the UK for the keyword 'stainless steel fabricators'. According to reports, most of the top-10 websites hardly have any backlinks or even basic optimisation (title tag, meta description). Our pages have A-grade optimisation and loads of linking domains. Stuck as to what else to do; can anyone help? Bob
Technical SEO | bobsnowzell0
-
I must be doing something very wrong
Can I get some direct advice for a domain I am trying to optimize? The domain is mmfiles.com; it contains software submitted by users: 1,500 different listings/pages with title/description, around 20,000 Google results for the domain, and the site has been live since Dec 2008. The problem: I am getting 20-40 hits from Google per day, which is pathetic. The best days were around June 2010, at ~400 hits/day (if it matters). I am not sure what my problem is, but with so much content and so few hits I must be doing something very wrong. Some possible problems and things I did: Google says I have 8 backlinks; that is not good, but I know it's not all about links. SEOmoz says I have "too many on-page links"; can this be so important? How should I redirect users landing on a URL that moved? E.g. a software title can change, so the old location /12/photo-gallery/ is now /12/xml-photo-gallery/; if a user lands on the old URL, should I 301-redirect to the new one? I can tell the intention by the listing number. I used to 301-redirect; now I just display the same content on any URL string like /12/whatever/. I put rel="nofollow" on some internal pages like the contact page, login, register, etc., hoping to prevent diluting the PageRank. If someone can have a look at the site and mention the most obvious SEO problems, it would be great.
Technical SEO | adrianTNT0
-
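On the redirect question specifically: serving the same content on any URL string creates exactly the duplicate-content situation worried about above, while a 301 keyed off the listing number consolidates everything onto one URL per listing. A sketch of that idea (the slug table is hypothetical; a real site would look it up in its database):

```python
import re

# Hypothetical lookup of each listing's current slug by its numeric id.
CURRENT_SLUG = {12: "xml-photo-gallery"}

def resolve(path):
    """Return ('200', path) when the slug is current, or ('301', canonical)
    so that every listing has exactly one indexable URL."""
    m = re.match(r"^/(\d+)/([^/]+)/$", path)
    if not m:
        return ("404", path)
    listing_id, slug = int(m.group(1)), m.group(2)
    current = CURRENT_SLUG.get(listing_id)
    if current is None:
        return ("404", path)
    canonical = "/{}/{}/".format(listing_id, current)
    if slug != current:
        return ("301", canonical)
    return ("200", path)
```

With this, /12/photo-gallery/ and /12/whatever/ both 301 to /12/xml-photo-gallery/ instead of serving duplicate copies.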
Friendly URLs (SEO URLs)
Hello, I own an eCommerce site with more than 5k products. The product URLs look like: www.site.com/index.php?route=product/product&path=61_87&product_id=266. I'm thinking about making them SEO-friendly: site.com/category/product-brand. Here is my question: will I lose rankings by making that change? It's very important for me to know. Thank you very much!
Technical SEO | matiw0
-
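Rankings generally survive such a migration when every old dynamic URL 301-redirects to its new friendly equivalent, so the first step is producing consistent slugs. A generic sketch of slug generation, not tied to any particular e-commerce platform:

```python
import re
import unicodedata

def slugify(text):
    """Turn a product or category name into a URL slug suitable for a
    /category/product-brand style path."""
    # Drop accents, then collapse every run of non-alphanumerics to a dash.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    text = re.sub(r"[^a-zA-Z0-9]+", "-", text).strip("-").lower()
    return text
```

The crucial part is determinism: the same product name must always yield the same slug, so the 301 map from old `product_id` URLs to new paths stays stable.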
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
This is the chain of events: the site was migrated to the new platform following best practice, as far as I can attest. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified. URL structure and URIs were maintained 100% (which may be the problem, now). Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expanding the report out, the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to Google and do some fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). I checked Bing, and it has indexed each root URL once, as it should.
Situation now: the site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated. I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment). I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe-content penalty. Or maybe call us a spam farm. Who knows. Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug-submission form 😄) include: A) robots.txt-ing the ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct; B) hand-removing the URLs from the index through a page-removal request per indexed URL; C) applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty); D) posting on SEOmoz because I genuinely can't understand this. Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: as of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Technical SEO | Tinhat0
-
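For readers with the same ?ref= problem: whatever GWMT does with the parameter setting, a robust server-side fix is to compute one canonical URL per page, emit it in rel=canonical, and 301 any tracking variant to it. A sketch of the stripping step (the parameter name is taken from the question above; everything else is assumed):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that only vary tracking, never content.
TRACKING_PARAMS = {"ref"}

def canonical_url(url):
    """Map every ?ref= variant of a page onto a single canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

def canonical_link_tag(url):
    """The tag to emit in <head> for any variant of the page."""
    return '<link rel="canonical" href="{}" />'.format(canonical_url(url))
```

If the request URL differs from `canonical_url(request_url)`, the server can also answer with a 301 to the canonical version, so the indexed ?ref= duplicates eventually consolidate.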
How to keep a URL's social equity during a URL structure/name change?
We are in the process of making a significant URL name/structure change to one of our properties, and we want to keep the social equity (likes, shares, +1s, tweets) from the old URL on the new one. We have tried many different options without success. We are running our social "buttons" in an iframe. Thanks
Technical SEO | OlivierChateau0
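One lever worth knowing here (hedged: each network's behaviour should be verified against its own docs): share counts are generally keyed to the page's Open Graph canonical URL, not the address in the browser bar, so keeping og:url stable, alongside a 301 from old to new, is the usual approach. A trivial sketch of emitting that tag:

```python
def og_url_tag(canonical):
    """Emit the Open Graph URL tag; social platforms generally aggregate
    share counts against this canonical URL."""
    return '<meta property="og:url" content="{}" />'.format(canonical)
```

If the new pages briefly keep og:url pointing at the old URLs while counts are migrated, the iframe'd buttons should be configured to count against the same canonical, or each variant accrues its own counter.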