Page performance and reinstating previous version
-
Hi all,
I hope there is a search guru out there who can assist with this.
I decided three months ago that our SEO contractors weren't doing as well as they should be, and after long discussion, I re-optimised all the pages myself. Most are performing better now, but sales and enquiries went through the floor.
I have today found ranking information from the week before I made the changes, something I thought I had lost. Now that I have this, I can see that one of the two main pages has gone from position 1 to position 38. This explains the problem.
I know all things are not equal, and in the intervening months competitors have updated their sites and links. However, this is the security sector and things don't change much. So, all things being equal, would reinstating the old version of the page be likely to restore my previous ranking position, or thereabouts? Or will the MIGHTY GOOGLE punish me in some way for swapping back to a previous version of the page?
We use a CMS and all revisions are stored.
The page in question is www.compoundsecurity.co.uk/security-equipment and the keyword in question is 'wireless alarms'.
Any help will be greatly appreciated by this non-SEO pleb.
Cheers
Si
P.S. Feel free to berate me for not recording all pertinent info about rankings BEFORE I started playing around with the site. It was my first time and I have well and truly learnt my lesson.
-
Thank you Wissam,
I will give it a try today.
Regards,
Si
-
All right.
Then yes: Google is re-evaluating the new content and signals, and it will either put the page back where it was (or better) or leave it at #38.
Just wait a couple of weeks to see whether the change is permanent or whether what you did actually helped.
If you can't wait (time is money), then revert the changes.
-
Oh, and I added a couple of extra internal links.
-
Hi Wissam,
I changed the title, H1 tag, meta description, and the entire copy of the page.
-
I checked with SEMrush and Searchmetrics to see whether any ranking changes here match known Google updates, and couldn't find any.
What type of changes did you make to the page?
Google may reset what it thinks of a document if "enough" changes have been made to it, at which point it re-evaluates all the signals and then settles on more stable rankings for that document.
Related Questions
-
Indexing "Without WWW" while it is already redirected to the "WWW" version
Hi guys, my websites are being indexed without "WWW", even though 'http://abc.com' is redirected to 'http://www.abc.com'. What I believe is that the URLs encoded in my website files are written as 'http://abc.com' rather than 'http://www.abc.com'. And since Google has removed the "Set Preferred Domain" option from Webmaster Tools, I can't set the preferred version of the URL. Also, some pages are indexed with "WWW" and some without. I don't think it's an issue, but a lot of people have been saying that this may hurt the rankings. Some comments/tips would be really appreciated.
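(Not from the thread, just an illustration.) The usual fix for a split like this is a server-side 301 onto one preferred hostname, so every non-www request lands on the www version. A minimal sketch of the rewrite logic, using the question's placeholder domains `abc.com` / `www.abc.com`:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str, preferred_host: str = "www.abc.com") -> str:
    """Rewrite a URL onto the preferred hostname, keeping path and query.

    The server would answer any request whose host differs with a
    301 redirect to the URL this returns.
    """
    scheme, netloc, path, query, fragment = urlsplit(url)
    if netloc.lower() != preferred_host:
        netloc = preferred_host
    return urlunsplit((scheme, netloc, path, query, fragment))
```

In practice you would express the same rule in your web server config (e.g. a rewrite rule), but the decision is exactly this: compare the host, redirect if it differs.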
-
Can a Google Local listing be merged with a Google+ page?
I have a Google+ company page set up and have recently added a Google Local listing, but have noticed it automatically sets up an additional company G+ page. Is there a way to merge the two? I cannot see any reference to this issue in Google's Q&A. Thanks, Stacey
-
How can I discover how many of my pages have been indexed by Google?
I am currently in the process of trying to produce a report for my corporation, and this is a metric that I cannot seem to find in Open Site Explorer. Could anyone help?
-
Effect of content page changes on search ranking
Hi, I have a question related to my ranking on Google Search. On the content pages of my website, there is a section where the content keeps changing: whenever a visitor enters new information, old information is removed from the page, so our content pages are dynamic. Does this make it difficult for us to rank for a particular keyword, or overall?
-
If I have a Google+ Business page, do I need a Google Places page as well?
It seems like the two are redundant. Any official word on this? I'm fairly OCD about things being tidy and I don't want to split my reviews / shares / etc. between two profiles. Are they not the same thing? I searched for my company, and both my Plus business page and my Places page came up. I attached a screenshot of the situation. placesvplus.png
-
Hundreds of versions of the same page. Is rel=canonical the solution?
Hi, I am currently working with an eCommerce site that has a goofy set-up for their contact form. Basically, there are hundreds of "contact us" pages that look exactly the same but have different URLs, which are used to help the store owner determine which product the user contacted them about. So almost every product has its own "contact us" URL. The obvious solution is to do away with this set-up, but if that is not an option, would a rel=canonical tag linked back to the actual "contact us" page be a possible solution? Or is the canonical tag only used to show the difference between www vs non-www? Thanks!
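(An aside, not from the thread.) rel=canonical is not limited to www vs non-www; it is meant for exactly this kind of duplicate-content situation. Every per-product contact URL would emit the same tag pointing at the one real contact page. A minimal sketch, with a hypothetical canonical URL:

```python
def canonical_link(canonical_href: str) -> str:
    """Build the <link rel="canonical"> tag that every duplicate
    "contact us" page should include in its <head>."""
    return f'<link rel="canonical" href="{canonical_href}" />'

# Each per-product contact page emits the same tag, e.g.:
# canonical_link("https://example-store.com/contact-us")
# -> '<link rel="canonical" href="https://example-store.com/contact-us" />'
```

Google treats this as a hint rather than a directive, but with hundreds of identical pages pointing at one target it is usually honoured.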
-
Need a contractor to create Wikipedia pages
Hey guys! Can anyone recommend a good contractor to create/maintain Wikipedia pages? We are in the publishing business and I want to create Wikipedia pages for our authors/products. I need someone who has successfully created Wikipedia pages before, can do basic research to find sources that Wikipedia will consider reliable, has good academic writing skills, and can start a debate in case they want to remove our article. If anyone knows good contractors, please recommend them. Thanks!
-
What is the best method for getting pure Javascript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further, and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions, or add some insight, would be appreciated.

Option 1: If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and Meta NoFollow tags etc. to prevent the crawlers from accessing the JavaScript versions of the page. Currently, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.

Option 2:
In order to make your AJAX application crawlable, your site needs to abide by a new agreement, which rests on the following:

The site adopts the AJAX crawling scheme. For each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed.

The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results. In order to make this work, the application must use a specific syntax in the AJAX URLs (let's call them "pretty URLs"; you'll see why in the following sections). The search engine crawler will temporarily modify these "pretty URLs" into "ugly URLs" and request those from your server. This request of an "ugly URL" indicates to the server that it should not return the regular web page it would give to a browser, but instead an HTML snapshot. When the crawler has obtained the content for the modified ugly URL, it indexes its content, then displays the original pretty URL in the search results. In other words, end users will always see the pretty URL containing a hash fragment. The following diagram summarizes the agreement:
See more in the Getting Started Guide. Make sure you avoid this:
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a few example pages that are mostly Javascript/AJAX: http://catchfree.com/listen-to-music#&tab=top-free-apps-tab https://www.pivotaltracker.com/public_projects This is what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab These are the best resources I have found regarding Google and Javascript: http://code.google.com/web/ajaxcrawling/ - step-by-step instructions.
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional Resources: http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=357690
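(A sketch, not from the thread.) The pretty-to-ugly URL translation the scheme describes is mechanical: the crawler takes a hash-bang fragment (the scheme applies to fragments beginning with "!") and moves it into an `_escaped_fragment_` query parameter, then requests that URL, expecting the HTML snapshot in response. A simplified version of that translation:

```python
from urllib.parse import quote

def to_escaped_fragment(pretty_url: str) -> str:
    """Translate a "pretty" AJAX URL (#! fragment) into the "ugly" URL
    a crawler following the AJAX crawling scheme would request.

    Simplified: the hash-bang fragment is percent-encoded and appended
    as an _escaped_fragment_ query parameter; URLs without a hash-bang
    fragment pass through unchanged.
    """
    base, sep, fragment = pretty_url.partition("#!")
    if not sep:
        return pretty_url  # no hash-bang fragment: nothing to translate
    joiner = "&" if "?" in base else "?"
    return f"{base}{joiner}_escaped_fragment_={quote(fragment, safe='=')}"
```

So www.example.com/index.html#!key=value becomes www.example.com/index.html?_escaped_fragment_=key=value, and the server is expected to answer that request with the post-JavaScript HTML snapshot rather than the regular page.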