Indexing techniques
-
Hi,
I just want some feedback on my indexing technique: is it good as it is, or can it be improved? The technique is completely white hat and can be done by one person. Any suggestions or improvements are welcome.
- First, I create the backlinks.
- I make a list of them in a public Google Doc.
- Each doc contains only ten links.
- I then submit the doc to Digg and add 5-6 more bookmarks on other sites.
- I tweet the Digg submission and each doc (my two Twitter accounts have page authority 98).
- I like them on Facebook.
- I ping them through ping services.
- That's it. It works OK for the moment.
Is there anything I can do to improve my technique?
Thanks a lot
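For what it's worth, the ping step above can be scripted. Below is a minimal sketch assuming a Ping-O-Matic-style XML-RPC ping service; the endpoint and page list are placeholders, not a claim about what the author actually uses:

```python
import xmlrpc.client

def ping_urls(pages, endpoint="http://rpc.pingomatic.com/", server=None):
    """Notify a weblogUpdates-style ping service that pages were updated.

    pages: list of (title, url) tuples. A pre-built server proxy can be
    injected (useful for testing without network access).
    """
    if server is None:
        server = xmlrpc.client.ServerProxy(endpoint)
    results = {}
    for title, url in pages:
        # weblogUpdates.ping(name, url) is the standard blog-ping method.
        results[url] = server.weblogUpdates.ping(title, url)
    return results
```

Most ping services rate-limit aggressively, so spacing out batches is worth the extra effort.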
-
No, it's not gaming; it's adult. But I am also thinking of developing a gaming site, or turning mine into one, because in Cy there are no SEO jobs; people there are more gamblers, and online I don't think I would do well either... Also, I make more money from affiliate work than I would working for somebody else... Maybe I wasn't so lucky, I guess... but it's OK, I'm still happy :)
-
Based on your profile, I'm guessing this is a gaming-related site?
-
My goal is to get the old pages that contain my links crawled fast. It's not about my own pages.
-
Many of them have authority of 10, 20, 30, 40; some others are zero. All are indexed pages, because I am taking the links from a competitor. Yes, some are low-quality links, but he is ranking number 1 out of 2,500,000 exact-match results. I just make this effort to speed up the indexing, because many of the links do not get indexed fast; I have seen some of them take a month to start showing up in Webmaster Tools. After this process, all of them get indexed in one day at most.

As for the quality links you are suggesting I get: that is almost impossible due to the nature of the niche. Nobody wants to give them, as this specific keyword is extremely profitable and has millions of searches. The hardest part is to get the already good ones, and to build authority for the new ones I create... and there are only 2 of us working here. Of the 1,000 links I have visited so far, only 60 were possible to get, and another 9,000 links remain to be checked. If I get up to 600 of his links, that will be good, I guess. My site is already ranking for his keyword, but at around position 50 (on-page optimization only), and it is old: PR 2, with 150 likes and some tweets, all real. The new links were built in the last 2 days, so I don't know where the site will go yet. The other bad thing is that there are around 45 exact-match domains under him with the same keyword, and mine is not even in the URL.
-
I believe you are referring to getting backlinks indexed. The only reason you would need to go to all that effort is if you were building low-quality links on deep pages, or on pages with thin content that Google would not value in its index (e.g. forum profile links, blog comments). I'm sure you are doing more than enough to get your links indexed, but they will quickly become deindexed if Google no longer values the page content. If you are going to all this effort to index a batch of low-quality links, why not put that same effort into building links on pages with more trust and better-quality content that Google will want in its index?
-
If your goal is to get your web pages indexed, why not create a sitemap and submit it in GWT? I don't understand why you would go through all that trouble just to get your pages indexed.
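To expand on the sitemap suggestion: generating one takes only a few lines. A minimal sketch (the URLs are placeholders):

```python
from xml.sax.saxutils import escape

def build_sitemap(urls):
    """Return a minimal sitemap.xml document for a list of page URLs."""
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )
```

Save the output as sitemap.xml at the site root and submit it in GWT's Sitemaps section.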
Related Questions
-
How long should it take for indexed pages to update?
Google has crawled and indexed my new site, but my old URLs appear in the search results. Is there a typical amount of time it takes for Google to update the URLs displayed in search results?
Intermediate & Advanced SEO
-
Removing Parameterized URLs from Google Index
We have duplicate eCommerce websites, and we are in the process of implementing cross-domain canonicals. (We can't 301; both sites are major brands.) So far this is working well: rankings are improving dramatically in most cases.

However, in some cases Google has indexed a parameterized page on the site being canonicaled (the site that is getting the canonical tag, i.e. the "from" page). When this happens, both sites are being ranked, and the parameterized page appears to be blocking the canonical. The question is: how do I remove canonicaled pages from Google's index? If Google doesn't crawl the page in question, it never sees the canonical tag, and we still have duplicate content.

Example: A. www.domain2.com/productname.cfm%3FclickSource%3DXSELL_PR is ranked at #35, and B. www.domain1.com/productname.cfm is ranked at #12. (Yes, I know that upper case is bad. We fixed that too.) Page A has the canonical tag, but page B's rank didn't improve. I know there are no guarantees that it will improve, but I am seeing a pattern: page A appears to be preventing Google from passing link juice via the canonical. If Google doesn't crawl page A, it can't see the rel=canonical tag. We likely have thousands of pages like this.

Any ideas? Does it make sense to block the "clicksource" parameter in GWT? That kind of scares me.
Intermediate & Advanced SEO
-
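For the parameterized-URL question above, one preventive angle is to normalize outgoing links so the parameterized variants never get crawled in the first place. A minimal sketch, assuming the "clickSource" parameter from the example plus common tracking parameters:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that change tracking, not content; extend as needed.
TRACKING_PARAMS = {"clicksource", "utm_source", "utm_medium", "utm_campaign"}

def canonical_url(url):
    """Drop known tracking parameters so parameterized variants
    collapse to the one canonical page URL."""
    scheme, netloc, path, query, _ = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query)
            if k.lower() not in TRACKING_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))
```

Emitting only the normalized URL in internal links (and in the sitemap) starves the variant of crawl paths, so the canonical tag gets seen on the page that matters.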
Pages getting into the Google index despite being blocked by robots.txt?
Hi all,

Yesterday we set out to remove URLs that got into the Google index when they were not supposed to, due to faceted navigation. We searched for the URLs by using this in Google Search:

site:www.sekretza.com inurl:price=
site:www.sekretza.com inurl:artists=

This brings up a list of "duplicate" pages, and they have the usual: "A description for this result is not available because of this site's robots.txt – learn more." So we removed them all, and Google removed them all, every single one.

This morning I did a check and found that more are creeping in. If I take one of the suspected dupes to the robots.txt tester, Google tells me it's blocked, and yet it's appearing in their index. I'm confused as to why a path that is blocked is able to get into the index. I'm thinking of lifting the robots.txt block so that Google can see that these pages also have a meta NOINDEX,FOLLOW tag on them, but surely that will waste my crawl budget on unnecessary pages?

Any ideas? Thanks.
Intermediate & Advanced SEO
-
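On the robots.txt question above: robots.txt blocks crawling, not indexing, which is exactly why blocked URLs discovered through links can still appear (description-less) in the index. A quick way to sanity-check which URLs a rule set actually blocks; the rules here are hypothetical stand-ins for the faceted-navigation paths, and note that Python's parser follows the original spec, with prefix matching only and no "*" wildcards:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules standing in for the faceted-navigation paths.
ROBOTS_TXT = """\
User-agent: *
Disallow: /filter/
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def is_blocked(url, agent="Googlebot"):
    """True if robots.txt forbids crawling this URL for the given agent.
    Blocked-from-crawling does not mean removed-from-index: for that you
    need the URL removal tool or a crawlable noindex."""
    return not rp.can_fetch(agent, url)
```

Running each suspect URL through a check like this shows whether the rule really covers it, or whether a variant path is slipping past the prefix match.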
Infinite scrolling: how to index all pictures
I have a page where I want to upload 20 pictures that run in a slideshow. The idea is that the pictures only load as users scroll down the page (otherwise the page is too heavy to load). I have seen documentation on how to make infinite scrolling work while ensuring search engines index all the content, but I haven't seen any documentation on how to make it work for 20 pictures in a slideshow. It seems impossible to get search engines to index all the pictures when they only show as users scroll down the page. This is the documentation I am already familiar with, which does not address my issue:

http://googlewebmastercentral.blogspot.com/2014/02/infinite-scroll-search-friendly.html
http://www.appelsiini.net/projects/lazyload
http://luis-almeida.github.io/unveil/

Thank you.
Intermediate & Advanced SEO
-
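The gist of the linked infinite-scroll guidance is to give the scripted slideshow paginated, crawlable component URLs. A sketch of that pagination, with a hypothetical URL pattern and page size:

```python
def paginate(image_urls, per_page=5, base="/gallery"):
    """Split slideshow images into crawlable component pages: each page
    gets its own URL plus prev/next links, so every picture is reachable
    without JavaScript (the fallback the infinite-scroll post asks for)."""
    chunks = [image_urls[i:i + per_page]
              for i in range(0, len(image_urls), per_page)]
    pages = []
    for n, chunk in enumerate(chunks, start=1):
        pages.append({
            "url": f"{base}?page={n}",
            "images": chunk,
            "prev": f"{base}?page={n - 1}" if n > 1 else None,
            "next": f"{base}?page={n + 1}" if n < len(chunks) else None,
        })
    return pages
```

Each component page would render its chunk in plain img tags, with the lazy-loading slideshow layered on top for users.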
Huge Google index with irrelevant pages
Hi, I run a site about sports matches. Every match has a page, and the pages are generated automatically from the DB. Pages are not duplicated, but over time some start to look a little similar. After a match finishes, its page has no internal links or sitemap entry, but it is still reachable by direct URL and stays in the Google index, so over time we have accumulated more than 100,000 indexed pages. Past matches have no significance, they are not linked, a match can repeat, and the pages may look like duplicate content. So when a match is finished (not linked, but still in the index and the SERPs), what do you suggest we do:

- 301 redirect the match page to its match category, which is a higher level of the hierarchy and is always relevant?
- Use rel=canonical pointing to the match category?
- Do nothing?

A 301 redirect will shrink my index status, and some say a high index status is good. Also, is it safe to 301 redirect 100,000 pages at once, or would that look strange to Google? And would a canonical remove the past match pages from the index?

What do you think? Thanks, Assaf.
Intermediate & Advanced SEO
-
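If the 301-to-category option is chosen for the question above, generating the rules from the same DB that builds the match pages keeps 100,000 redirects manageable. A sketch using hypothetical field names and URL patterns (Apache "Redirect 301" syntax):

```python
def build_redirects(matches):
    """Emit one 'Redirect 301' line per finished match, sending its page
    to the evergreen category page. Field names and URL patterns here
    are hypothetical, not the site's actual structure."""
    return "\n".join(
        f"Redirect 301 /match/{m['slug']} /category/{m['category']}"
        for m in matches
        if m["finished"]
    )
```

Rolling the redirects out in batches, and watching crawl stats in GWT between batches, is safer than flipping all 100,000 at once.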
Remove content that is indexed?
Hi guys, I want to delete an entire folder whose content is indexed. How can I tell Google that the content no longer exists?
Intermediate & Advanced SEO
-
Site Indexed by Google but not Bing or Yahoo
Hi, I have a site that is indexed (and ranking very well) in Google, but when I do a "site:www.domain.com" search in Bing and Yahoo it does not show up. The team that purchased the domain a while back has no idea whether it was indexed by Bing or Yahoo at the time of purchase. I'm just wondering if there is anything that might be preventing it from being indexed. Also, I'm going to submit an index request; are there any other things I can do to get it picked up?
Intermediate & Advanced SEO
-
Indexing non-indexed content and Google crawlers
On a news website we have a system where articles are given a publish date that is often in the future. The articles were showing up in Google before the publish date, despite us not being able to find them linked from anywhere on the website. I've added a 'noindex' meta tag to articles that shouldn't be live until a future date; when the date comes for them to appear on the website, the noindex disappears. Is anyone aware of any issues with doing this? Say Google crawls a page that is noindex, then 2 hours later finds out it should now be indexed: should it still appear in Google Search, News etc. as normal, as a new page? Thanks. 🙂
Intermediate & Advanced SEO
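The embargo approach in the question above amounts to a publish-date check at render time. A minimal sketch (the function and its call site are hypothetical; the tag format is the standard robots meta tag):

```python
from datetime import datetime, timezone

def robots_meta(publish_at, now=None):
    """Return the robots meta tag for an article: noindex before its
    publish date, normal indexing once the date has passed."""
    if now is None:
        now = datetime.now(timezone.utc)
    content = "noindex" if now < publish_at else "index, follow"
    return f'<meta name="robots" content="{content}">'
```

Once the tag flips, the page is treated like any newly indexable URL; bumping the sitemap lastmod can nudge a recrawl so it appears sooner.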