Search engine submission - Urgent
-
Is it necessary to submit a new site to search engines?
I have a brand-new site I purchased a few days ago, which I didn't think to check until after I'd paid for it, but it has not been indexed by Google!
The domain was registered three months ago, and the website itself was probably not built until some time after that.
But I'm still left puzzling over why the site is not indexed by Google. Any ideas?
Thanks in advance.
-
I would agree with Dirk. There is not much to rank for on your website; all of your content comes from Amazon.
Still, another question is why it is not indexed yet. The website is built on WordPress, and if you haven't touched your robots.txt it should not be blocking crawlers. As far as I can see you are not even ranking for your exact-match domain name (i.e. site:yourdomain.com), which can be a sign of a manual penalty.
What I would suggest is to add some content, do some internal optimisation (install the Yoast SEO plugin), add titles and H1s, optimise images and so forth. Then create a separate Search Console account, submit your sitemap, and see if it works.
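As a quick way to sanity-check those on-page basics before resubmitting, here's a minimal stdlib-only Python sketch (the HTML string is a stand-in; in practice you'd feed it the markup fetched from your own pages) that pulls out the title and H1s so you can spot pages where they're missing:

```python
from html.parser import HTMLParser

class BasicsAudit(HTMLParser):
    """Collects <title> and <h1> contents from a page's HTML."""
    def __init__(self):
        super().__init__()
        self._current = None  # tag we are currently inside, if relevant
        self.title = ""
        self.h1s = []

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1"):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current == "title":
            self.title += data
        elif self._current == "h1":
            self.h1s.append(data.strip())

# Stand-in markup for illustration; replace with your own page source.
page = ("<html><head><title>Great Headphones</title></head>"
        "<body><h1>Reviews</h1></body></html>")
audit = BasicsAudit()
audit.feed(page)
print(audit.title)  # -> Great Headphones
print(audit.h1s)    # -> ['Reviews']
```

An empty `title` or an empty `h1s` list on a page is exactly the kind of gap the Yoast plugin will also flag for you.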
-
A penalty is not irreversible - but with the effort you would have to put into it, you might as well start on a new domain from scratch. That is what I personally would do. It's not that the domain is powerful by itself - an exact-match domain might give you a small advantage, but on the other hand you would have to put in much more effort to rebuild the site's reputation.
To be 100% sure, check the site's Search Console using a "new" Google account not related to your current one; if you want to be extremely careful, do it from an external network rather than your own.
Dirk
-
I know it's stupid, but I didn't think to check for a Google penalty before I bought it; normally I wouldn't even look at a domain that wasn't indexed.
I bought it with the idea of beefing it up a lot. I realised I would have to do all the SEO work, but if it has had a penalty it's debatable whether I should even bother with it.
Is it worthwhile putting a bit of work into it and seeing whether it indexes or not?
It's not the end of the world - I got it for a snip - but it may be better to cut my losses and put the effort into a site that is clean.
-
The site did exist before - check https://web.archive.org/web/20141117163048/http://www.(your domain)/ - so it's quite possible it had a manual action (if the type of content was as low quality then as it is now) erasing it from the index.
-
It's a very thin affiliate site with 0% original content (all content = Amazon). On top of that, it's quite heavy to load, has no optimisation whatsoever (H1/meta/etc.), several on-page elements return a 404 status, it has low PageSpeed scores and, as it is new, no incoming links.
You could check the logs - it's quite possible that Googlebot hasn't discovered the site yet. If it has visited, it probably considered the site too low quality to index. If it hasn't, you could register the site in Search Console and do a "Fetch as Google".
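Checking the logs for Googlebot can be as simple as the sketch below (stdlib Python; the access-log lines are invented for illustration). Bear in mind the user-agent string can be spoofed, so a reverse-DNS lookup on the requesting IP is the only reliable confirmation it was really Google:

```python
# Minimal sketch: filter access-log lines whose user-agent claims to be
# Googlebot. The sample lines below are made up for illustration.
log_lines = [
    '66.249.66.1 - - [17/Nov/2015:06:25:24 +0000] "GET / HTTP/1.1" 200 5123 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.9 - - [17/Nov/2015:06:26:02 +0000] "GET /about HTTP/1.1" 200 2048 "-" '
    '"Mozilla/5.0 (Windows NT 10.0)"',
]

googlebot_hits = [line for line in log_lines if "Googlebot" in line]
for hit in googlebot_hits:
    # First field in common/combined log format is the requesting IP.
    print(hit.split(" ")[0])  # -> 66.249.66.1
```

No matching lines at all would suggest Googlebot simply hasn't crawled the site yet, which points to the "Fetch as Google" route.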
It will probably get some pages into the index - but there is no chance that this site is going to rank with its current content.
Dirk
-
The site is great-headphones [dot] c o m
It is an Amazon affiliate store, nothing in the way of blog yet just products.
I haven't added it to my Google Search Console account yet; in case it is dodgy, I don't want Google penalising the rest of my sites as well.
-
What's the patient's name? Or is it a secret?
As for robots.txt - typically, not having one wouldn't prevent bots from accessing a site, but who knows.
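That matches Google's documented behaviour: a missing robots.txt (a 404) is treated as "crawl everything allowed". A quick sketch with Python's stdlib `urllib.robotparser` illustrates the difference between an empty rule set and an explicit blanket Disallow:

```python
from urllib import robotparser

# No rules at all - roughly what crawlers assume when robots.txt 404s.
rp = robotparser.RobotFileParser()
rp.parse([])
print(rp.can_fetch("Googlebot", "https://example.com/any-page/"))  # -> True

# By contrast, an explicit blanket Disallow blocks crawling entirely.
rp_blocked = robotparser.RobotFileParser()
rp_blocked.parse(["User-agent: *", "Disallow: /"])
print(rp_blocked.can_fetch("Googlebot", "https://example.com/any-page/"))  # -> False
```

So a missing robots.txt file won't explain the site being absent from the index; a `Disallow: /` or a meta robots noindex tag would.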
P.S. Please answer all the questions asked - content? SEO? Any messages in Google Search Console (previously known as Google Webmaster Tools)?
-
It is not indexed at all. I have tried the info: and site: operators.
As far as I can tell it is accessible. But I have just found there is no robots.txt! Does it matter?
-
Hi there.
Who's the patient?
Is it actually not indexed, or just not ranking on the first page? Is there any content? Any SEO done? What about accessibility for bots? Have you checked robots.txt? Any meta robots tags?