Can I "Run Macros" on my own?
-
I talked to the SEO company I am using, trying to get an understanding of what they are doing for me. They told me that one of the most important things they are doing is running macros. Is this something I could learn to do myself? What does it mean? How do I do it? How long does it take? I have recently been educating myself on SEO and coded my website with meta titles and descriptions. Is running macros something I can do on my own too? I guess I'd also just like to know what it is.
-
Here are a couple of YouMoz posts that discuss what an agency should include in its reports; they should give you a few ideas.
http://www.seomoz.org/ugc/agent-seo-reporting-maam
http://www.seomoz.org/ugc/what-to-include-in-your-seo-reports
-
What type of reports should they be able to supply? What would the report show?
-
I totally agree with the other two responses. Ask what they mean by running macros, be sure you understand what they are doing for you, and ask for regular reports. If they're building any links, ask them for a list of the exact links they have built.
-
They are probably referring to office application macros, which are scripts that run inside Word documents, Excel spreadsheets, etc. If this is the case, the language you would need to learn is VBA; there are tons of tutorials online you can search for.
As goodlegaladvice suggested, this is probably for creating reports and compiling analytical data. If this is the most important thing they are doing, I would ask for a more detailed explanation of their work!
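To make that concrete, here is a minimal, purely illustrative VBA sketch of the kind of report macro an agency might run in Excel. The sheet name, column layout, and the idea of comparing month-over-month rankings are assumptions made up for this example - it is not a description of what your agency actually does.

```vba
Sub SummarizeRankings()
    ' Hypothetical example: summarize a keyword-ranking export in Excel.
    ' The sheet name and column layout below are assumptions, not a real template.
    Dim ws As Worksheet
    Dim lastRow As Long
    Dim i As Long
    Dim improved As Long

    Set ws = ThisWorkbook.Worksheets("Rankings")           ' assumed sheet name
    lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row   ' last row with a keyword

    ' Assumed layout: column A = keyword, B = last month's rank, C = this month's rank
    For i = 2 To lastRow
        If ws.Cells(i, "C").Value < ws.Cells(i, "B").Value Then
            improved = improved + 1
            ws.Cells(i, "C").Interior.Color = RGB(198, 239, 206) ' highlight improved keywords
        End If
    Next i

    MsgBox improved & " keywords improved since last month."
End Sub
```

Running something like this is just a matter of opening the VBA editor in Excel (Alt+F11), pasting the code into a module, and pressing F5 - which, at its core, is all "running macros" means.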
-
I would guess they might have made macros to organize the data they get from analytics, SEOmoz, or some other tracking software. That might be a benefit to their clients, but you could do SEO without it if you didn't have a huge amount of keywords and pages.
-
Related Questions
-
How should I deal with "duplicate" content in an Equipment Database?
The Moz Crawler is identifying hundreds of instances of duplicate content on my site in our equipment database. The database is similar in functionality to a site like autotrader.com: we post equipment with pictures, and our customers can look at the equipment and make purchasing decisions. The problem is that, though each unit is unique, units often have similar or identical specs, which is why Moz (and presumably Google/Bing) is identifying the content as "duplicate". In many cases, the only differences between listings are the pictures and mileage; the specifications and year are the same. Ideally, we wouldn't want to exclude these pages from being indexed because they could have some long-tail search value. But, obviously, we don't want to hurt the overall SEO of the site. Any advice would be appreciated.
Technical SEO | DohenyDrones
-
Rel="publisher" validation error in html5
Using HTML5 I am getting a validation error on in my HTML Validation error: Bad value publisher for attribute rel on element link: Not an absolute IRI. The string publisher is not a registered keyword or absolute URL. This just started showing up on Tuesday in validation errors. Never showed up in the past. Has something changed?
Technical SEO | RoxBrock
-
How can I best handle parameters?
Thank you for your help in advance! I've read a ton of posts on this forum on this subject, and while they've been super helpful, I still don't feel entirely confident about the right approach to take. Forgive my very obvious noob questions - I'm still learning!

The problem: I am launching a site (coursereport.com) which will feature a directory of schools. The URL for the schools directory will be coursereport.com/schools, and the directory can be filtered by a number of fields:
Focus (ex: "Data Science")
Cost (ex: "$<5000")
City (ex: "Chicago")
State/Province (ex: "Illinois")
Country (ex: "Canada")

When a filter is applied to the directory page, the CMS produces a new page with URLs like these:
coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago
coursereport.com/schools?cost=$>5000&city=buffalo&state=newyork

My questions:
1) Is the above parameter-based approach appropriate? I've seen other directory sites take a different approach that would transform my examples into more "normal" URLs:
coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago
VERSUS
coursereport.com/schools/focus/datascience/cost/$<5000/city/chicago (no params at all)
2) Assuming I use either approach above, isn't it likely that I will have duplicate content issues? Each filter does change on-page content, but there could be instances where two different URLs with different filters applied produce identical content (ex: focus=datascience&city=chicago OR focus=datascience&state=illinois). Do I need to specify a canonical URL to solve for that case? I understand at a high level how rel=canonical works, but I am having a hard time wrapping my head around which versions of the filtered results ought to be specified as the preferred versions. For example, would I just take all of the /schools?focus=X combinations and call those the canonical versions within any filtered page that contained additional parameters like cost or city? Should I be changing page titles for the unique filtered URLs?

I also read through a few Google resources to try to better understand how to configure URL parameters via Webmaster Tools. Is my best bet just to follow the advice in the article below, define the rules for each parameter there, and not worry about using rel=canonical?
https://support.google.com/webmasters/answer/1235687

An assortment of the other stuff I've read, for reference:
http://www.wordtracker.com/academy/seo-clean-urls
http://www.practicalecommerce.com/articles/3857-SEO-When-Product-Facets-and-Filters-Fail
http://www.searchenginejournal.com/five-steps-to-seo-friendly-site-url-structure/59813/
http://googlewebmastercentral.blogspot.com/2011/07/improved-handling-of-urls-with.html
Technical SEO | alovallo
-
Can Googlebot read the content on our homepage?
Just for fun, I ran our homepage through this tool: http://www.webmaster-toolkit.com/search-engine-simulator.shtml
This spider seems to detect little to no content on our homepage, while interior pages seem to be just fine. I think this tool is pretty old. Does anyone here have a take on whether or not it is reliable? Should I just ignore the fact that it can't seem to spider our home page? Thanks!
Technical SEO | danatanseo
-
Does "?" in my URL have a negative effect?
I am having a difficult time finding specific information about the effect, if any, of having a ? within the URL structure. We have the descriptive keyword phrase followed by the ? and a location ID, as in this example: http://www.adventuresonly.com/adventure-locations/things-to-do-in-arizona?stateid=124 Any feedback on the effect, and a corrective process to improve if necessary, would be appreciated!
Technical SEO | RBBonds
-
Can you have multiple rich snippets show up for the same page?
Is it possible to have multiple rich snippets show up in the SERPs for the same page? For example, could a product page have both the aggregate review rich snippet and the author thumbnail?
Technical SEO | ProjectLabs
-
Can You 301 Unwanted Links to Another Site?
I am trying to clean up my link profile and have noticed that I have a lot of crappy inbound links pointing to some of my old pages, and those old pages have since been 301'd to current pages. My question is: is it worth trying to 301 those old pages, and thus those crappy links, to another website? Would this do anything to clean up my link profile?
Technical SEO | red6marketing
-
How do I use the Robots.txt "disallow" command properly for folders I don't want indexed?
Today's sitemap webinar made me think about the disallow feature; it seems like the opposite of sitemaps, but it also seems both are ignored in varying ways by the engines. I don't need help semantically - I got that part. I just can't seem to find a contemporary answer about what should be blocked using the robots.txt file. For example, I have folders containing site comps for clients that I really don't want showing up in the SERPs. Is it better to not have these folders on the domain at all? There are also security issues I've heard of that make sense: simply look at a site's robots.txt file to see what it is hiding, and it becomes easier to hunt for files when you know the directory they are contained in. Do I concern myself with this? Another example is a folder I have for my XML sitemap generator. I imagine Google isn't going to try to index this or count it as content, so do I need to add folders like this to the disallow list?
Technical SEO | SpringMountain