Facebook code being duplicated? (Any developers mind taking a peek?)
-
I'm using a few different plugins to give me various Facebook functions on my site. I'm curious whether any developers out there could take a look at my source code and see if it looks like some code is being duplicated that's slowing down my site. Thanks so much!
-
Are you using the Facebook plugin for WP? You'll have to pay attention when you activate different sections in the settings.
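If it helps while you wait for someone to review the source: a very common cause of duplication when running several Facebook plugins is that each plugin injects its own copy of the Facebook JS SDK. The page should embed the SDK exactly once. A sketch of what a single embed typically looks like (the locale and version string here are assumptions, not taken from your site):

```html
<!-- The Facebook JS SDK should appear exactly once, just after <body>. -->
<div id="fb-root"></div>
<script async defer crossorigin="anonymous"
        src="https://connect.facebook.net/en_US/sdk.js#xfbml=1&version=v12.0"></script>
<!-- If "View Source" shows this script (or the fb-root div) more than once,
     two plugins are each loading their own copy of the SDK. -->
```

A quick check is to view the page source and search for `connect.facebook.net` and `fb-root`; more than one hit usually means overlapping plugin settings.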
Related Questions
-
Will Google Judge Duplicate Content on Responsive Pages to be Keyword Spamming?
I have a website for my small business and hope to improve the search results position for five landing pages. I recently modified my website to make it responsive (mobile friendly). I was not able to use Bootstrap; the layout of the pages is a bit unusual and doesn't lend itself to the options Bootstrap provides. Each landing page has three main divs: one for desktop, one for tablet, one for phone.
Web Design | | CurtisB
The text content displayed in each div is the same. Only one of the three divs is visible; the user's screen width determines which one. When I wrote the HTML for the page, I didn't want each div to have identical text. I worried that when Google indexed the page it would see the same text three times and conclude that keyword spamming was occurring. So I put the text in just one div, and when the page loads, jQuery copies the text from the first div into the other two.

But now I've learned that when Google indexes a page it looks at both the page that is served AND the page that is rendered. In my case, the rendered page (after it loads and the jQuery code is executed) contains duplicate text content in three divs. So perhaps my approach of having the served page contain just one div with text content fails to help, because Google examines the rendered page.

Here is the layout of one landing page, as served by the server:

- desktop div: 1000 words of text goes here.
- tablet div: No text. jQuery will copy the text from div id="desktop" into here.
- phone div: No text. jQuery will copy the text from div id="desktop" into here.

My question is: will Google conclude that keyword spamming is occurring because of the duplicate content on the rendered page, or will it realize that only one of the divs is visible at a time and that the duplicate content is there only to achieve a responsive design? Thank you!
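The served markup the question describes can be sketched roughly as follows (only the `desktop` id appears in the question; the `tablet` and `phone` ids are assumptions for illustration):

```html
<!-- Sketch of the served page: only #desktop carries the text. -->
<div id="desktop">1000 words of text goes here.</div>
<div id="tablet"></div>
<div id="phone"></div>

<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
  jQuery(function ($) {
    // On load, copy the desktop text into the other two breakpoint divs.
    // This is what produces three identical copies in the rendered DOM,
    // which is the crux of the question.
    $('#tablet, #phone').html($('#desktop').html());
  });
</script>
```

After this script runs, the rendered DOM contains the same 1000 words three times, even though the served HTML contained them once.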
Multiple sites using same text - how to avoid Google duplicate content penalty?
Hi Mozers, my client located in Colorado is opening a similar (but not identical) clinic in California. Will Google penalize the new California site if we use text from our website that features his Colorado office? He runs the clinic in CO and will be a partner of the clinic in CA, so the CA clinic has his "permission" to use his original text. Eventually he hopes to go national, with multiple sites utilizing essentially the same text. Will Google penalize the new CA site for plagiarism and/or duplicate content? Or is there a way to tell Google, "hey Google, this new clinic is not ripping off my text"?
Web Design | | CalamityJane77
SEO Value to Improving HTML Code of Website That Validates According to W3C?
Greetings MOZ Community: My real estate website www.nyc-officespace-leader.com, originally designed in Drupal, was relaunched using Wordpress in 2013. The code for all URLs validates. The relaunch was performed by developers in Argentina.

As part of an SEO campaign, a very reputable design/coding company has provided new wireframes to correct usability issues holding back conversions. In the course of the design adjustments they inspected the code and have told me that it is inefficient, that a number of shortcuts were taken, and that the code does not conform to Wordpress best practices. What concerns me most is their claim that the quality of the coding makes it more difficult for Google to index the site, and that this may be detrimental to ranking.

Is it possible for the original developers to clean up this code if the deficiencies are pointed out to them? Or once coding shortcuts are taken, are they impossible to fix? Would it make sense for me to ask the new design team to put together a list of HTML deficiencies, provide it to the original developers, and ask them to correct it?

I am spending tens of thousands of dollars on content optimization and content marketing. It would be absurd if these coding issues ultimately prevented improvement in ranking and traffic. At the same time, I hate to be a cynic, but the domestic design/coding firm, while very professional, does have an incentive in getting me to ditch the original design so that I commit to a costly rebranding. If these issues are really minor, maybe it is not worth making the effort to clean up the code (assuming that is even possible) and better to focus the budget on content marketing. Any thoughts?
Web Design | | Kingalan1
What is the code to 301 http to www in htaccess file on unix server
I want to 301 my http home page to www on a Linux server. All my other redirects are set up similar to this in my htaccess file: redirect 301 /example-page.html http://www.example-page.html How do I 301 redirect http://example.com to http://www.example.com? I've tried all kinds of code recommended for an htaccess file on a Linux server and nothing seems to work. Thanks for the help mozzers! Ron
Web Design | | Ron10
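For what it's worth, the usual answer here is mod_rewrite rather than the `Redirect` directive, because `Redirect` cannot match on the hostname, only the path. A sketch for .htaccess, assuming mod_rewrite is enabled on the server (swap in the real domain):

```apache
RewriteEngine On
# Redirect bare example.com to www.example.com, preserving the path
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

This should sit near the top of the .htaccess file, before the per-page `redirect 301` lines, so the host is normalized first.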
Duplicate Content & Canonicals
I am a bit confused about canonicals and whether they are "working" properly on my site. In Webmaster Tools, I'm showing about 13,000 pages flagged for duplicate content, but nearly all of them are showing two pages: one URL as the root and a second with parameters. Case in point, these two are showing as duplicate content: http://www.gallerydirect.com/art/product/vincent-van-gogh/starry-night http://www.gallerydirect.com/art/product/vincent-van-gogh/starry-night?substrate_id=3&product_style_id=8&frame_id=63&size=25x20 We have a canonical tag on each of the pages pointing to the one without the parameters. Pages with other parameters don't show as duplicates, just one root and one dupe per listing. So, am I not using the canonical tag properly? It is clearly listed as:
Is the tag perhaps not formatted properly (I saw someone somewhere state that there needs to be a /> after the URL, but that seems rather picky for Google)? Suggestions?
Web Design | | sbaylor
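For reference, a canonical tag for the product page above would normally sit inside `<head>` and look like this (whether the author's actual tag matches is an assumption; the self-closing `/>` is optional in HTML5, so its absence would not break the tag):

```html
<head>
  <link rel="canonical"
        href="http://www.gallerydirect.com/art/product/vincent-van-gogh/starry-night" />
</head>
```

Both the root URL and the parameterized URL should carry this same tag pointing at the parameter-free version.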
Does Google penalize duplicate website design?
Hello, We are very close to launching five new websites, all in the same business sector. Because we would like to keep our brand intact, we are looking to use the same design on all five websites. My question is, will Google penalize the sites if they have the same design? Thank you! Best regards,
Tiberiu
Web Design | | Tiberiu
Getting a lot more duplicate content warnings than I expected.
I run WordPress on many of my sites, and a site crawl has found MANY duplicate content pages on the latest domain I started a campaign for. I expected to see quite a lot on the tag pages that only had one post, but even tag pages with multiple posts, and author and category pages with many posts, are showing as duplicate content. Is it normal for a WordPress site to have so many duplicate content warnings from the taxonomy pages?

I have the option to bulk noindex, follow the category and tag pages, but should I do it? I get some traffic directly to the tag pages, so removing them from search results would dent the traffic of the site a little (generally high bounce rate, low engagement traffic anyway), but could removing the apparent duplicate content actually improve the article pages themselves? Or does anyone have any WordPress-specific advice for making the pages not duplicate content? I've toyed with the idea of just displaying excerpts, but that means creating manual excerpts for 4 years' worth of posts, some covering subject matter I have no personal knowledge of, so other suggestions are welcome.
Web Design | | williampatton
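One way to apply noindex, follow to the taxonomy archives without a plugin is a small theme hook. This is only a sketch using core WordPress conditional tags (`is_tag()`, `is_category()`), not a tested drop-in; an SEO plugin's built-in noindex option does the same thing more safely:

```php
<?php
// In the active theme's functions.php: emit a robots meta tag on
// tag and category archives so they drop out of the index while
// their links are still followed.
add_action( 'wp_head', function () {
    if ( is_tag() || is_category() ) {
        echo '<meta name="robots" content="noindex,follow">' . "\n";
    }
} );
```

With "follow" kept, the archive pages still pass link equity to the posts they list even once they are deindexed.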
SEOMoz crawl report shows duplicate content and a duplicate title for these two URLs: http://freightmonster.com/ and http://freightmonster.com/index.html. How do I fix this?
What page is attached to http://freightmonster.com/ if it is not the index.html ? Should I do a redirect from the index page to something more descriptive?
Web Design | | FreightBoy
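Both URLs serve the same index.html file; the common fix is a 301 from the explicit filename back to the root. A sketch for .htaccess, assuming Apache with mod_rewrite enabled:

```apache
RewriteEngine On
# Match only direct browser requests for /index.html, not the
# internal rewrite Apache performs when serving the root URL.
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.html
RewriteRule ^index\.html$ http://freightmonster.com/ [R=301,L]
```

The `THE_REQUEST` condition is what prevents a redirect loop: without it, the rule would also fire when Apache internally maps `/` to index.html.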