Use the_title(); for the H1 tag instead.
Not sure what that other code does... if(!wp_title("",false)) { echo bloginfo( 'title');} ?>
If no title is set, show the blog title? Which pages would this affect?
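If the intent is "use the page title, falling back to the site name when empty," a cleaner version of that snippet (a rough sketch, assuming it lives in a theme template such as header.php) would be:

<?php
// Sketch: print the current post/page title as the H1;
// if it comes back empty (e.g. some archive or 404 templates), fall back to the site name.
$heading = get_the_title();
if ( empty( $heading ) ) {
    $heading = get_bloginfo( 'name' ); // the blog/site title
}
?>
<h1><?php echo esc_html( $heading ); ?></h1>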
You pretty much nailed it. If the only reason is cosmetic (which I doubt has any influence on anything), there is absolutely no reason to risk losing top 5 rankings. Good luck!
Side note: Google's official stance is that 301s pass 100% of authority, but it's not worth testing that claim. You are guaranteed to see a dip in rankings/organic traffic after you set up the 301s.
What is your goal here? What is on businessname.com? What is in hospital.com/businessname/?
Off the bat, I think canonicals will be involved.
My guess is that Google doesn't have enough clues as to what the images are and is treating them all as the same. All of them have the same alt text ("View Slideshow"), the image file names are just numbers, and the surrounding text is the same for each listing.
I recommend optimizing your images (e.g. on http://www.nyc-officespace-leader.com/listings/347-varick-st-office-lease-2267sf the image filenames would be 181-varick-st-nyc-office-rental-1.jpg and the alt text would be "Commercial Office Space Rental at 181 Varick St NYC"); see the example below.
P.S. Why does the URL say 347 Varick St yet the page description says 181 Varick St?
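For illustration, optimized image markup along those lines might look like this (the file path and alt text here are hypothetical examples, not pulled from your actual listing):

<img src="/images/181-varick-st-nyc-office-rental-1.jpg" alt="Commercial Office Space Rental at 181 Varick St NYC">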
There is no such thing as an SEO SSL certificate. You just need one with 2048-bit encryption, ideally.
Having an entire site served over https is not a bad thing... in fact, it's recommended by Google.
It's up to you whether to implement it sitewide or only on pages where secure information is exchanged (checkout, login).
There is a small SEO boost from https URLs, but there also might be a small dip in rankings from switching over (and using 301 redirects). In addition, if you mess up the URL transfer, you could hurt yourself, so be sure to do a proper site move.
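If the site happens to run WordPress, one rough sketch of forcing https sitewide (assuming the certificate is already installed; many setups do this at the server/.htaccess level instead) would be:

<?php
// Sketch: 301-redirect any non-https request to its https equivalent.
add_action( 'template_redirect', function () {
    if ( ! is_ssl() ) {
        wp_safe_redirect( 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], 301 );
        exit;
    }
} );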
This is the right answer.
A great way to check is to see whether you have multiple versions of that URL indexed, which you don't: https://www.google.com/search?q=site:http://www.key.co.uk/en/key/platform-trolleys-trucks
In theory, no. An ecommerce store can have en-US and en-GB as hreflangs, which would have the same content but different currency - totally acceptable.
Sounds like you are in the same scenario (see the sketch below).
I'm a believer in subfolder > subdomain.
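For that en-US/en-GB scenario, a minimal hreflang sketch (example.com and the subfolder paths are placeholders, not your actual URLs) would be:

<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />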
I've removed many manual penalties by submitting only a disavow file. Google can't really track which links were removed by you and which dropped off naturally, so they can't consider that when deciding on your reconsideration request.
What does your robots.txt file contain? (or share the link)
Try removing it, clearing the server cache, and fetching as Google again.
My guess is it's an http(s) and www/non-www issue, e.g. you have the www version in GSC, which contains a sitemap that has non-www URLs, and you force-remove www on the actual web pages (so the non-www pages are indexed, but the www version of the site you are looking at in GSC isn't).
Make sure all of these are consistent.
Not much, unfortunately. Just make sure the website is as secure as possible, as hacked pages usually do more damage than bad links.
If you continue to build quality content and links, the effects of the spammy backlinks will continue to diminish (until, at some point, you will be considered an authority and negative SEO won't really work any longer). But until then, staying on top of incoming backlinks and updating the disavow file is your best bet.
Yes, that would help. Since the content is identical for each of these products, there should only be 1 URL with all of the variations of that product, in order to consolidate all of the authority. If you still want all of the variations to be reachable, look into creating anchor links that point to the same "master" URL, e.g. http://www.prams.net/easywalker-mini-buggy-lightweight-union-jack-b can be linked as http://www.prams.net/easywalker-mini#union-jack (see the sketch below).
That way, the URL structure is more SEO-friendly but aesthetically the site is identical.
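As a rough sketch (the exact markup depends on your platform), the consolidated product page and its variation links could look like:

<!-- On the master product page: one canonical URL for all variations -->
<link rel="canonical" href="http://www.prams.net/easywalker-mini" />
<!-- Variation links elsewhere on the site point at anchors on that master URL -->
<a href="http://www.prams.net/easywalker-mini#union-jack">Easywalker Mini - Union Jack</a>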
Here are the 4 you should submit (if you don't have the site in https, you can just do the first two):
http://yoursite.com
http://www.yoursite.com
https://yoursite.com
https://www.yoursite.com
The http:// is the default protocol, so there is no need to submit both http:// and a version without any protocol.
Create question "category" pages that group multiple answers into larger, more encompassing pages.
1 and/or 3, not 2.
If there is a lot of search volume and the keywords are competitive, #3 would help you build sites that are more focused on specific topics (which are ranking well now). Note that this is harder to manage/maintain and will require a larger investment in marketing than option 1.
I also think you should have 1 "corporate/umbrella" website that does what your option 1 describes and links to your option 3 sites. Hopefully you can have both ranking for your terms.
I can see the links working in reverse (page=200 links to page=199 via the rel="prev" code), but what is the largest page number and where did that link come from?
The good news is that Google isn't picking up on those URLs, so you should be fine rankings-wise (it's a Moz crawler issue) - https://www.google.com/search?q=site:http://www.interstellarstore.com/star-trek-memorabilia%3Fpage%3D&start=0
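For reference, the rel="prev" markup on a page like ?page=200 typically looks like this (the URL is shown only to illustrate the pattern):

<link rel="prev" href="http://www.interstellarstore.com/star-trek-memorabilia?page=199" />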
Fix your "lowPrice" and "highPrice" markup (remove € and add priceCurrency)
to be honest, I think it was automatically flagged (same ratings/reviews on multiple urls) and if you can request reconsideration, they'd approve.
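A minimal sketch of corrected offer markup (JSON-LD here; the product name and prices are placeholders, and EUR is assumed as the currency):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "offers": {
    "@type": "AggregateOffer",
    "lowPrice": "19.99",
    "highPrice": "49.99",
    "priceCurrency": "EUR"
  }
}
</script>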
Great question; I don't have a definitive answer.
My guess is that Google would use the final page source (post-JavaScript), since this is what it does when a website adds content via JavaScript after load. So if you remove the links, they shouldn't be counted.
I'd test this by fetching as Google and seeing what it sees as the source.
--
Overall, I don't think having double links to the same URLs will hurt your SEO much. The general consensus is that only the first link is counted anyway.
If it's possible to avoid a redirect, do so.
www.corporatesite.com/product/ -> www.consumersite.com/product/ is better than adding a redirect into the chain.
noindex means that crawlers can still visit the page (using crawl budget). You would need to link to those pages using a nofollow attribute and block them via robots.txt to prevent crawlers from accessing them (quick sketch below).
Overall, if those pages aren't being crawled currently, then they aren't affecting your crawl budget since they aren't being visited. However, if you build more authority to your website, your crawl budget will grow, so crawlers might start visiting those pages again.
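A rough sketch of that combination, using a hypothetical /internal-search/ path as the section you want to keep crawlers out of:

In your templates: <a href="/internal-search/" rel="nofollow">Internal search results</a>
In robots.txt:
User-agent: *
Disallow: /internal-search/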
From my experience*, using a subfolder structure will yield the best results.
I think the more important factor is how these pages are linked to from within other pages on your website. You need to build links from high authority internal pages to your degree pages using keyword rich anchor text. If you do this from both websites, I don't think it will make much of a difference which domain the pages are hosted on. i.e. if you link to the degree pages sitewide from both domains, you will see similar results hosting them on either domain.
So if you are going to move the degrees to the other domain, you might as well switch to a subfolder structure. If not, focus on building a stronger internal link profile.
*As you said, this topic is highly debated. Google's official stance is that there is no ranking difference between a subfolder and a subdomain. They also say that if the content on the subdomain is highly relevant to the content on the root domain, they will be treated as the same domain (authority carries over).
I see a good amount of pages indexed: https://www.google.com/search?q=site%3Ahttps%3A%2F%2Fwww.northshoreymca.org%2F
Make sure the profile you are using in GSC (formerly GWT) is the https://www. version of the website (any other profile will show fewer/no indexed pages).
That robots.txt should be fine... it's not blocking anything.
The reason the crawl is stopping on the homepage is this code:
<meta name="robots" content="nofollow">
That tells bots not to follow any links on the page. Remove it and you should be good.
DA/PA are Moz metrics. They have no effect on rankings.
I am surprised that Moz isn't picking up on the redirect (it usually says "this domain redirects to ______"), but I wouldn't worry about it.
You don't need to ask the sites linking to you to update their backlinks... unless the new site is the same as the old one, there is a chance they will simply remove the backlinks since they aren't as relevant anymore. The best you can do is ping Google to recrawl the backlinks of itsgr82bme.com.
Edit the HTML in the child theme files, likely the header and footer template files (e.g. header.php and footer.php) specifically.
You should remove it if possible. There is no reason to link to a dead URL (poor UX, wasted crawler time) or call a non-existent resource (it slows the site down).
However, I don't think it will make any noticeable effect on your rankings.
If you want your category pages to rank, then that is a fine strategy. I recommend adding some unique content to the actual category pages so that they rank better.
Agreed. Force https (via 301 redirects) and make sure all elements on your site are served securely.
I would guess similar to 301 redirects... about 2 weeks.
The faster you can get Google to recrawl the pages and index the new canonical URLs, the quicker the authority will transfer over.
Agreed, the redirects/canonicals should be permanent (well, for as long as you want the authority to pass along).
You would usually see changes in the SERPs within 2 weeks.
Best not to generate a new URL for crawlers to follow. Ideally, CSS/JS would replace the URL-change functionality (quick sketch below).
If you must generate the print page with a ?print variable, block it via robots.txt (Disallow: /?print= ) so that those pages aren't even crawled (having a noindex tag allows them to be crawled, just not indexed in search results).
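For example (the paths here are placeholders), the print view can live on the same URL by combining a print stylesheet with window.print():

<link rel="stylesheet" href="/css/print.css" media="print">
<a href="#" onclick="window.print(); return false;">Print this page</a>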
Yeah, I wouldn't spend too much time recoding that then. I would still block the print pages via robots.txt so that they aren't crawled at all.
The logo should be within "publisher", not "article" (sketch below).
Overall, I'd recommend using a schema generator tool to assist, like: http://tools.seochat.com/tools/schema-article-generator/#sthash.iWTjQpvO.dpbs
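A minimal Article markup sketch showing where the logo belongs (all names and URLs below are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example Article Headline",
  "author": { "@type": "Person", "name": "Example Author" },
  "publisher": {
    "@type": "Organization",
    "name": "Example Publisher",
    "logo": { "@type": "ImageObject", "url": "https://example.com/logo.png" }
  }
}
</script>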
Ehh. Unless reposting to the other website naturally creates more backlinks (e.g. each site has its own separate audience, which would increase the total number of backlinks), keeping the articles on separate sites and interlinking with keyword anchor text is probably the better strategy.
Content syndication works best when republishing to a new audience, on domains you don't already own.