Category Archives: SEO

Q&A: TopRank Marketing’s SEO Experts Share Tips for Creating Better Content

As a member of TopRank Marketing’s (epic) content team, it’s certainly no surprise that I spend much of my day developing and executing content strategies for the wide array of client programs I work on.

And as I settle in for a content extravaganza each day, my work is guided by an important agency principle: Create content that allows our clients to be the best answer for their audience.

Lucky for me—and perhaps unlucky for them—my work station is strategically placed within staring and shouting distance of every one of our SEO experts. As seasoned marketers know, quality and strategic SEO is absolutely key to crafting content that helps you satisfy your audience’s quest for answers. So, I’m often picking the brains of my neighbors to help me create content that will get results.

Today, I summoned insights from three members of our SEO team to share with you. Below you’ll find some helpful tips and insights for better leveraging SEO within your content planning and creation efforts.

What does the “perfect balance” between SEO and user experience look like?

User experience strikes the “perfect balance” with SEO when no one can tell the content has been optimized at all. The SEO aspect of content or user experience should be baked into the process early so that the optimization is as natural as possible.

The Takeaway: Make SEO an integral part of your content planning and creation process, rather than an afterthought.

Kevin Cotch, SEO Analyst

 

How can marketers leverage existing content to take advantage of SEO opportunities?

I have two favorite ways to use existing content to take advantage of SEO opportunities. The first one is to look for evergreen content that doesn’t really move KPIs. For example, I love to find a blog post that drives thousands of visits but doesn’t really drive many leads. Chances are that post does a great job answering a searcher’s question, and that’s why it is receiving loads of organic traffic. I like to try to figure out what question the searcher will ask next. Once you’ve figured that out, you can add a call to action that offers an opt-in to an email campaign or offers the visitor a downloadable asset.

You can also leverage existing content by identifying what Google doesn’t like. Many companies have multiple blog posts about ancient holiday parties, volunteer events, etc. Google will spend time and resources adding that content to their index and then never let it see the light of day. I like to look for pages and posts that are indexable and don’t receive organic traffic. Then I do one of two things. I either tell Google to ignore it or I try to improve the content so it is of value to searchers.
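For instance, telling Google to ignore a page can be as simple as a meta robots noindex tag; a minimal sketch:

```html
<!-- Placed in the <head> of a stale post, this asks search engines
     not to include the page in their index -->
<meta name="robots" content="noindex">
```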

The Takeaway: Your work doesn’t end when your content is published. Take the time to analyze how your content is performing, and take advantage of opportunities to make it better for the user and search engines.

Steve Slater, Digital Advertising & SEO Manager

What are some key characteristics of “good” SEO content?

Overall, I think that the most important factors that make a piece of content good for SEO (i.e. robots) are the same as those that make content good for users (i.e. humans).

The best content:

Is targeted towards a specific niche
Is built upon an understanding of which questions and pain points are relevant to said niche
Delivers value to readers by answering questions or helping to address pain points

The Takeaway: When you understand what your audience is searching for, you can create amazing content that delivers value to people and search engines.

Evan Prokop, Senior Analytics Manager

 

Bonus: What’s one of the quickest ways to evaluate a piece of content’s SEO value or opportunities?

Kevin: One of the quickest ways to evaluate a piece of content’s SEO value or opportunities would be seeing how many organic conversions it drives. Organic conversions can range from actual purchases to subscribing to an email list. Marketers should put less emphasis on keyword rankings for SEO value, as more than one keyword drives people (organically) to a website or conversion.

Steve: One of the quickest ways to evaluate a piece of content’s SEO value is to look at the Organic Landing Page report in Google Analytics. This report shows you the pages that searchers land on right after they see search results. When you look through this report, you can identify pages that drive conversions, and you can also identify pages that drive searchers back to Google to look for a better answer.
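If you’d rather pull that same report programmatically, a rough sketch against the Google Analytics Reporting API v4 might look like this (VIEW_ID and ACCESS_TOKEN are placeholders, and this assumes API access is already configured):

```javascript
// Sketch: organic landing pages with sessions and goal completions.
const body = {
  reportRequests: [{
    viewId: 'VIEW_ID',
    dateRanges: [{ startDate: '30daysAgo', endDate: 'today' }],
    metrics: [
      { expression: 'ga:sessions' },
      { expression: 'ga:goalCompletionsAll' }
    ],
    dimensions: [{ name: 'ga:landingPagePath' }],
    // Keep only organic traffic
    dimensionFilterClauses: [{
      filters: [{
        dimensionName: 'ga:medium',
        operator: 'EXACT',
        expressions: ['organic']
      }]
    }],
    orderBys: [{ fieldName: 'ga:sessions', sortOrder: 'DESCENDING' }]
  }]
};

fetch('https://analyticsreporting.googleapis.com/v4/reports:batchGet', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer ACCESS_TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(body)
})
  .then(function (res) { return res.json(); })
  .then(function (report) { console.log(report); });
```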

Evan: Put yourself in your reader’s shoes by doing a search using a phrase that you think your content is most relevant for (i.e. should rank for) and honestly ask yourself, or a friend, if it delivers a satisfying answer to what you searched for. If it doesn’t for you, you can bet it won’t for your readers either.

Next, compare your content to what’s already ranking. Is your title at least as compelling as the competition? Are you covering the topic as well? Does the page load fast and look good on both desktop and mobile? If the answer to any of these questions is no, there’s your opportunity.

What are your burning SEO and content questions? Ask them in the comments section below.



The Big Top: A New Model for SEO-Driven Content

For over a decade now, the fundamental unit of content marketing has been the blog post. Your post may be a block of text, an infographic, or a listicle about memes, but the underlying structure is the same. A regular cadence of posts to the company blog is the foundation of most content marketing strategies.

The problem is, each individual blog post has only a small window of effectiveness for SEO. A post might go viral, get hundreds of shares, and then sit in your archives for eternity. Identifying and promoting evergreen content can get more mileage out of a good post. But by nature and design, these posts aren’t built to be an enduring SEO resource. Think about it: When was the last time you clicked through on a blog post that was over a year old?

That’s not to say you should stop blogging altogether, of course. Blogs generate subscribers, help promote gated assets, contribute to thought leadership—all worthwhile goals for content marketers. But as SEO continues to evolve, it’s time for new models of SEO-driven content.

At TopRank Marketing, we’ve been working on a new way to integrate SEO and content to build longer-lasting, more valuable resources. Essentially, it’s reverse-engineering evergreen content, purposefully building well-supported “tentpole” content with SEO baked in.

Here’s how to design a content strategy I’m calling the “Big Top” model.

#1: Create Your Tentpole(s)

The tentpole content is the big asset that the rest of your strategy will be supporting. It should be a comprehensive take on a single topic relevant to your business and your audience, one with plenty of opportunities to crosslink with supporting content.

Research topics and keywords for your tentpole the way you would any best answer content: listen to customers, evaluate competing content, and use tools like Bloomberry and UberSuggest.

What will make your content into a tentpole instead of a blog post are a few distinguishing features:

A tentpole should be between 1,500 and 3,000 words.
Your tentpole will cover multiple aspects of your topic, divided into 250-300 word sections, each section based on long-tail keywords.
Your tentpole will not live on your blog. This last one is key: give the tentpole a permanent place of pride, preferably no more than two clicks deep into your site, with a short URL. A “Resources” section is the ideal place.

You can break up the sections in your tentpole with eye-catching visuals, embedded SlideShare or video content, even CTAs to gated content.

Your tentpole is a prime location for influencer engagement as well. Curate quotes from influencers to highlight in the text. Or, better yet, reach out to influencers to co-create and cross-promote the content.

Here’s a good example of a tentpole piece our client LinkedIn Marketing Solutions published earlier this year: How to Advertise on LinkedIn. Notice it’s not a post on their blog; it’s a standalone resource. This piece is currently ranking at the top of the SERP for “How to advertise on LinkedIn.”

You don’t have to limit your strategy to a single tentpole, either. If you have multiple topics that you can cover in depth and at length, create a pole for each one.

#2: Create Your Stakes

Your “stakes” are blog posts that will connect to the tentpole, driving traffic to it from your blog and boosting the blog’s SEO as well. There are several ways to create a supporting stake:

Take one 250-300 word section and expand it with supplemental material to 750 words or so, as the content requires
Cover a related topic that naturally links to your tentpole
Create an announcement post for the tentpole launch
Do an influencer roundup on a topic related to your tentpole

Each stake should have a CTA to the tentpole. If you have anchor links for navigation, as in our example above, you can also link to specific subsections that are relevant to the post.
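In markup terms, a stake’s CTA can deep-link to a tentpole subsection with a plain anchor; a minimal sketch with hypothetical URLs and IDs:

```html
<!-- On the tentpole page: a section with an ID to jump to -->
<h2 id="measuring-results">Measuring Your Results</h2>

<!-- In a supporting stake post: a CTA that deep-links to that section -->
<a href="/resources/big-top-guide#measuring-results">
  See the full guide to measuring results
</a>
```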

#3: Connect Your Guylines

Guylines connect the stakes to the tentpole, providing stability and structure. In content terms, that means creating links from your supporting content to the tentpole and vice versa. The goal is to create a destination that users can explore, following their interest through multiple pieces of content, back and forth from the pole. This kind of structuring provides value for your readers, and increases positive search engine signals like time-on-site and session length.

As you develop more tentpoles, look for opportunities to link them together. Make sure each link is a logical next step for your reader. Over time, your “content big top” can become a full-fledged three-ring circus.

#4: Say, “Come One! Come All!”

Support your tentpole launch with all the amplifying force you have:

Use stats or quotes to make social media ads
Publish excerpts (or one of your stakes in its entirety) on sites like LinkedIn and Medium
Encourage influencer amplification
Seek out guest posting opportunities

These promotional efforts will build on your tentpole’s native SEO value, giving it some momentum that will help build external links and bring in organic results.

Make Your Content the Greatest Show on Earth

The Ringling Brothers have put up their big top for the last time, but your big top content can last for years to come. Just remember to keep it relevant; plan for regular updates and revisions (which are a great opportunity to re-promote the content).

The humble blog post is still a fundamental unit of content marketing. But when you supplement the blog with SEO-optimized tentpole content, the results can be… in tents.

Want to learn more about best answer content? Check out these 6 inspiring examples.



11 SEO Myths You Need To Stop Believing Today


SEO is by far the most talked about, searched for and read about topic for web entrepreneurs. And so it should be, considering how vital it is to get SEO right for your business!

What’s puzzling, however, is that there are so many myths and misconceptions about SEO floating about. I mean, we all want to get SEO right, so why do these myths live on, acquiring the status of urban legend?

Some of them are hilarious, but some can really hurt you. They can keep you from improving your search traffic and website rankings, and hinder your best content marketing efforts.

Personally, I would hate to spend weeks perfecting one aspect of SEO only to realize a month later that it’s not even considered important by Google (and I’m sure you would, too!)

So I thought I’d write this post to debunk some of the most common SEO myths I’ve come across, and explain why you need to stop believing them today.

1. SEO is a scam

The myth: Fast-talking SEO consultants charge astronomical fees for vaguely explained services that accomplish almost nothing and may even get your website penalized.

The reality: Sigh. SEO is not a scam. Check out Moz’s organic search improvement over three years of SEO efforts.


Sadly, this myth probably came into existence because there are many dodgy SEO companies out there that profit by spamming sites with your links, producing a quick increase in rankings that rapidly drops once Google deems the linking sites to be spammers.

However, just because there are unethical SEO companies that promise you top rankings in Google and then leave you high and dry, that doesn’t mean SEO isn’t legit or ‘real’.

For decent companies making sincere efforts to increase website traffic for their clients and improve user experience, SEO is a continuous effort that helps them beat competitors and benefit from high SERP positions.

This myth is probably rooted in the false idea that SEO involves quick and easy wins with little effort.

It doesn’t. It’s a continual investment, but it’s worth it. Just stop making silly SEO mistakes and keep the quality work up.

2. Reacting quickly to algorithm updates makes you more successful

The myth: Every time Google updates its organic search ranking algorithm, you need to make changes to your site as soon as possible to stay ahead!

The reality: Every search engine out there is continuously working to improve its search algorithms – Google alters its search algorithm approximately 500 times a year. The only updates you need to worry about are the major algorithm updates.

When these happen, the smart thing to do is wait and see if your site has been impacted. More often than not, if you are doing SEO right, your site won’t have been impacted negatively anyway, and you could even see a boost!

There’s no such thing as the perfect search algorithm, so updates will always be around. Try to wait to react, read credible sources about what the update involves, and give yourself a couple of days or even weeks to make adjustments if necessary.

If it’s an update that the search engine will stick to, you will soon hear about best practices for adjustments from the company itself anyway.

I visit this site on a semi-regular basis to stay abreast of the latest web news, and you could also follow the Twitter accounts of SEO gurus. However, the main thing to remember is that in the instance of an update, no one wins a prize for panicking or revamping their site the fastest.

Make a note of where you are when the update occurs and compare your metrics after a few weeks.

3. If you optimize for Google, you’re covered for all sites

The myth: You don’t need to worry about optimizing your content for other search engines if you’ve optimized it for Google.

The reality: Google may comprise more than 60% of the search market, but Bing’s share is improving steadily. Bing is a great example of a search engine that works slightly differently from Google and deserves your attention.

Bing doesn’t value backlinks as much as Google: instead, it compiles rankings based on user engagement, social signals, click-through rates, page authority and keyword domains. Google doesn’t use metrics such as Facebook shares or Twitter followers directly in search rankings. So you can clearly see that if you only optimize for Google, you’re not covered for Bing.

If you are targeting exposure to 100% of web traffic, you should optimize for at least the top 3 search engines.


4. HTTPS isn’t important unless you’re selling stuff

The myth: You only need to bother with HTTPS encryption if you’re in eCommerce; otherwise, the original HTTP protocol works fine.

The reality: Wrong! At the start of 2017, the average worldwide volume of encrypted internet traffic finally surpassed the average volume of unencrypted traffic, according to Mozilla (the company behind the Firefox web browser).

That means when you visit a website, you’re more likely than not to see a little green lock right next to the web address that indicates it came to you via HTTPS, the web’s secure protocol, rather than plain old HTTP.

Google has said loud and clear that it will give preference to websites with the HTTPS prefix over others.

That’s because the encryption within HTTPS provides benefits like confidentiality, integrity and identity.

Ultimately, using HTTPS is better than leaving the web unencrypted and it’s been a priority for big sites like Facebook, Google, Wikipedia and The New York Times to switch to HTTPS.

We’ve passed the tipping point when it comes to encrypted vs unencrypted data, and organizations like Let’s Encrypt are now helping millions of companies add HTTPS to their sites for free.
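If you do make the switch, the server also needs to send visitors from HTTP to HTTPS. A minimal sketch with Node.js and Express, assuming a proxy that sets the x-forwarded-proto header:

```javascript
const express = require('express');
const app = express();

// Permanently redirect any plain-HTTP request to its HTTPS equivalent
app.use((req, res, next) => {
  if (req.headers['x-forwarded-proto'] !== 'https') {
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
  }
  next();
});

app.get('/', (req, res) => res.send('Served securely over HTTPS'));
app.listen(3000);
```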

 

5. H1 tags increase search rankings

The myth: Using H1 tags is a must-do when it comes to good SEO practice.

The reality: This is not at all true, technically. While H1 tags do help make content more organized for the reader and make it easier for web developers to design your website, they don’t contribute to SEO directly.

Former Google software engineer Matt Cutts says in this video that it doesn’t matter whether you use H1 or H2. What matters is that your page contains relevant and useful information that will address the needs of your users.

A few years ago, H1 tags used to be one of the most critical SEO factors; today, however, they’re just a part of basic best practice and not a source of SEO differentiation.

6. Link-building is black hat and attracts Google penalties

The myth: Google hates black hat link-building!

The reality: This is hilarious, really. Google rewards your website for backlinks – the only proviso is that these backlinks have got to be from relevant and credible sources.

If you plant your website’s links on article farms, unrelated websites, spammy websites or websites with malware and other suspicious scripts, then yes, you can expect to be penalized for back-linking.

But in that instance, it’s actually spamming, not back-linking.

When you’re building quality links, you don’t need to worry about this SEO myth. Many people think that leaving comments on blogs is a black hat SEO technique, but that’s only the case if the comments only link to your website without adding value.

The key is to ask yourself if you’re adding value every time you leave a comment on a blog or link to a website in an article – if you are, then you’ve got nothing to worry about.

7. Content is king

The myth: All you need to do is create high-quality, useful content to rank well in search results without much help from SEO.

The reality: Look, I’m not going to bag out the ‘content is king’ mantra here for fear of upsetting too many digital marketers. But while publishing timely, relevant and well-researched content is great, it’s not going to get you to the top of Google alone.

Content is like one of many directors sitting on a board, waiting to make a joint decision. The other directors are equally powerful: some of them include quality backlinks, user experience and responsive design.

If your whole website isn’t optimized, crawlers could struggle to even find your content, which means it won’t show up in results at all.

Focus on content, for sure, but don’t be myopic about it, as you’ve got to take care of the user experience on the whole.

8. Hosting location is important

The myth: If your website isn’t located in the country you are targeting, you may as well forget about success.

The reality: While it is better to host your website in the country you are targeting, it’s not essential. Google is smart enough to showcase the right country version of your website to the right audience: ‘au’ links are shown to Australians and ‘nz’ links are shown to New Zealanders. And this study shows us that Google prioritizes quality information over local content.

If you don’t already use a country code top-level domain (ccTLD), I suggest using Google Webmaster Tools’ geographic target setting. In the Webmaster Tools sidebar, simply go to Search Traffic > International Targeting, and specify the target country for the website.

For international websites, simply select ‘Unlisted’ from the country list.
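Country and language variants are also commonly annotated with hreflang link tags, which complement the geographic target setting; a minimal sketch with hypothetical domains:

```html
<!-- On each variant of the page, point to every country/language version -->
<link rel="alternate" hreflang="en-au" href="https://example.com.au/pricing/">
<link rel="alternate" hreflang="en-nz" href="https://example.co.nz/pricing/">
<link rel="alternate" hreflang="x-default" href="https://example.com/pricing/">
```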

9. Having an XML sitemap will boost your search rankings

The myth: Installing an XML sitemap can help improve your search engine rankings.

The reality: A sitemap doesn’t affect the rankings of your web pages, although it does make them more crawlable.

Sitemaps give Google more information about your site and help it get indexed quickly.

However, there’s never been any Google announcement or study-based outcome to suggest that XML sitemap submission improves your website’s SEO.

Use one to make sure all of your URLs can be discovered and crawled easily, as this can improve the visibility of your website in the long run.

I suggest trying a plugin like Google XML Sitemaps generator, which works great with WordPress websites.
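For reference, a minimal sitemap.xml follows the sitemap protocol and looks something like this (URLs are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2017-06-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/seo-myths/</loc>
    <lastmod>2017-06-15</lastmod>
  </url>
</urlset>
```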

10. With personalized Google searches, there’s no such thing as ranking first anymore

The myth: Since everyone’s search results are personalized, everyone sees different results and there’s no way to be ranked #1 anymore.

The reality: My request to all readers – please, don’t be misled by such rumors. Here’s a trick to try at home.

Do five Google searches related to your industry’s niche, first using your personal computer (where, in all likelihood, you’re seeing personalized Google search results), and then by adding &pws=0 to the end of the SERP’s URL.

That depersonalizes Google.
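For example, a depersonalized query looks like this (the keyword is a placeholder):

```
https://www.google.com/search?q=your+target+keyword&pws=0
```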

Now notice the difference.

Chances are, there isn’t one. Because websites that are good enough to make it to Google’s top 10 are good enough to feature on any personalized searches, too!

The differences between personalized results and non-personalized results are relatively minor. The advent of personalization does mean that rank tracking may provide somewhat less authoritative data than before.

But in no way is it the end of SEO or does it necessitate a completely new look at SEO practices.

11. Keywords in comments and title tags provide SEO juice

The myth: The strategic placement of keywords in HTML comment tags and the title attributes of IMG and A HREF tags will help you win at SEO.

The reality: Rankings really don’t work this way.

First and foremost, content inside comment tags is explicitly excluded from Google’s view when calculating rankings.

Secondly, title attributes are not designed to help you with SEO.
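For clarity, these are the spots the myth is talking about (markup is illustrative):

```html
<!-- HTML comments like this one are invisible to ranking calculations:
     stuffing keywords here does nothing -->

<!-- The title attributes below are usability hints, not ranking signals -->
<a href="/services/" title="SEO services">Our services</a>
<img src="/img/chart.png" alt="Organic traffic chart" title="Organic traffic chart">
```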

This Moz article will help you understand precisely why title attributes are not linked to SEO.

In summary

There are at least half a dozen more SEO myths I could add to this list, but these are some of the main ones I see causing confusion amongst digital marketers, programmers, webmasters, designers, small businesses and entrepreneurs.

Hopefully I’ve debunked a few myths for you or at least motivated you to apply a bit of critical thinking to the next one you hear.

There is no easy science to SEO, and because the digital landscape is constantly changing, it’s hardly surprising that there’s a lot of misinformation out there. But moving forward, stop giving time or energy to SEO strategies or techniques that have no substance behind them or probably came about because of a bunch of snake-oil SEO salesmen.

Which SEO myth ticks you off the most? Let me know in the comments below.

Guest Author: Kuldeep Bisht, Inbound Marketing Consultant for SEMark, has over eight years of digital marketing experience. Throughout his career, he has helped many enterprise clients and local small businesses improve their marketing results by using strategic thinking and proven methodologies. You can follow his journey at KuldeepBisht.com and you may connect with him on Linkedin, Twitter, Google+ and Facebook.



JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience

Posted by alexis-sanders

Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.

The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?

Fundamentals

What is JavaScript?

When creating a modern web page, there are three major components:

HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It is the structure of the website (e.g. headings, paragraphs, list elements, etc.) and defines static content.
CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
JavaScript – JavaScript is the interactivity and a core component of the dynamic web.

Learn more about webpage development and how to code basic JavaScript.


JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.
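For illustration, the two placements look like this (the file path is hypothetical):

```html
<!-- Embedded: JavaScript placed directly in the HTML document -->
<script>
  document.body.classList.add('js-enabled');
</script>

<!-- Referenced: JavaScript linked as an external file -->
<script src="/js/app.js"></script>
```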


What is AJAX?

AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess “AJAJ” doesn’t sound as clean as “AJAX” [pun intended]).

A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.

One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
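A minimal sketch of an AJAX-style update using fetch (the endpoint and element ID are hypothetical):

```javascript
// Ask the server for just the data that changed, then patch the page,
// with no full page refresh
fetch('/api/latest-posts')
  .then(function (response) { return response.json(); })
  .then(function (posts) {
    var list = document.querySelector('#post-list');
    list.innerHTML = posts
      .map(function (post) { return '<li>' + post.title + '</li>'; })
      .join('');
  });
```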


What is the Document Object Model (DOM)?

As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.

The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.

The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.

The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.

Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.

Simple example with a <title> tag that is populated through JavaScript:

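A minimal sketch of that idea: the raw HTML source ships with an empty <title>, while the DOM ends up with a populated one after the script runs.

```html
<!-- HTML source: the title is empty before any JavaScript executes -->
<!DOCTYPE html>
<html>
<head>
  <title></title>
  <script>
    // After this runs, the DOM's <title> contains the text below,
    // even though the raw HTML source does not
    document.title = 'JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience';
  </script>
</head>
<body></body>
</html>
```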

What is headless browsing?

Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.

PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).
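For a feel of what a scripted headless browser does, a minimal PhantomJS sketch (the URL is a placeholder):

```javascript
// Run with: phantomjs snapshot.js
// Fetches a page without a UI and prints the rendered DOM,
// similar in spirit to how a headless crawler sees the page.
var page = require('webpage').create();

page.open('https://example.com/', function (status) {
  if (status === 'success') {
    // page.content is the serialized DOM after JavaScript has run
    console.log(page.content);
  }
  phantom.exit();
});
```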

Why can JavaScript be challenging for SEO? (and how to fix issues)

There are three primary reasons to be concerned about JavaScript on your site:

Crawlability: Bots’ ability to crawl your site.
Obtainability: Bots’ ability to access information and parse your content.
Perceived site latency: AKA the Critical Rendering Path.

Crawlability

Are bots able to find URLs and understand your site’s architecture? There are two important elements here:

Blocking search engines from your JavaScript (even accidentally).
Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.

Why is blocking JavaScript such a big deal?

If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).

Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help to identify resources that Googlebot is blocked from.

The easiest way to solve this problem is through providing search engines access to the resources they need to understand your user experience.

!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.

Internal linking

Internal linking should be implemented with regular anchor tags within the HTML or the DOM (using an HTML tag) versus leveraging JavaScript functions to allow the user to traverse the site.

Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.
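For illustration, the difference looks like this (the URL is hypothetical):

```html
<!-- Crawlable: a real anchor tag with an href -->
<a href="/products/widgets">Widgets</a>

<!-- Not treated as a link: a JavaScript event standing in for an anchor -->
<span onclick="window.location.href='/products/widgets'">Widgets</span>
```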

Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.

URL structure

Historically, JavaScript-based websites (aka “AJAX sites”) were using fragment identifiers (#) within URLs.

Not recommended:

The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links), the links that allow one to jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server and will cause the page to automatically scroll to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of “#” in URLs.
Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers (one Google now wants to avoid, and only Bing supports). Many a moon ago, Google and Bing developed a complicated AJAX solution, whereby a pretty (#!) URL with the UX co-existed with an equivalent escaped_fragment HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences:

Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment or a meta element indicating that an escaped fragment exists (<meta name=”fragment” content=”!”>).
Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with “_escaped_fragment_” and serves the HTML snapshot. It is called the ugly URL because it’s long and looks like (and for all intents and purposes is) a hack.


Recommended:

pushState History API – PushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar, and only what needs to change on the page is updated. It allows JS sites to leverage “clean” URLs. PushState is currently supported by Google, when supporting browser navigation for client-side or hybrid rendering. (A minimal sketch follows this list.)

A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page the URL will update). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down, while the URL is updated in the address bar.
Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages replaceState(), which doesn’t include the same back button functionality as pushState.
Read more: Mozilla PushState History API Documents
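A minimal pushState sketch, assuming a hypothetical JSON endpoint and content container:

```javascript
// Render a section by fetching only the content that changes
// (a mini server call); the '.json' endpoint is a stand-in.
function renderSection(url) {
  return fetch(url + '.json')
    .then(function (response) { return response.json(); })
    .then(function (data) {
      document.querySelector('#content').innerHTML = data.html;
    });
}

function navigateTo(url) {
  renderSection(url).then(function () {
    // Update the address bar to a clean, crawlable URL without reloading
    history.pushState({ url: url }, '', url);
  });
}

// Re-render (without re-pushing) when the user uses back/forward
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.url) {
    renderSection(event.state.url);
  }
});
```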

Obtainability

Search engines have been shown to employ headless browsing to render the DOM to gain a better understanding of the user’s experience and the content on page. That is to say, Google can process some JavaScript and uses the DOM (instead of the HTML document).

At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.

Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:

If the user must interact for something to fire, search engines probably aren’t seeing it.

Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.

If the JavaScript occurs after the load event fires plus ~5 seconds*, search engines may not be seeing it (see the timing sketch after this list).

*John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
*Screaming Frog tests show a correlation to five seconds to render content.
*The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.

If there are errors within the JavaScript, both browsers and search engines may stop executing the code partway through and miss sections of the page.
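To see why the timing matters, a rough sketch of content injected too late (the delay and text are illustrative):

```javascript
// Content added well after the load event (here, 5 seconds later)
// may never be seen by search engine crawlers.
window.addEventListener('load', function () {
  setTimeout(function () {
    var late = document.createElement('p');
    late.textContent = 'Added ~5s after the load event: crawlers may miss this.';
    document.body.appendChild(late);
  }, 5000);
});
```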
How to make sure Google and other search engines can get your content

1. TEST

The most popular solution to resolving JavaScript is probably not resolving anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.

Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript well before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s 2015 test, Google can crawl JavaScript and leverages the DOM, confirmed it. Therefore, if you can see your content in the DOM, chances are your content is being parsed by Google.


Recently, Bartosz Goralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., is it indexing the URL/content? How does GSC interact? Etc.). It ultimately showed that Google is able to interact with many forms of JavaScript and highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).

All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you determine that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with a small section. Think: Jim Collins’s “bullets, then cannonballs” philosophy from his book Great by Choice:

“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”

Consider testing and reviewing through the following:

Confirm that your content is appearing within the DOM.
Test a subset of pages to see if Google can index content.

Manually check quotes from your content.
Fetch with Google and see if content appears.

Fetch with Google supposedly occurs around the load event or before timeout. It’s a great test to check whether Google will be able to see your content and whether or not you’re blocking JavaScript in your robots.txt. Although Fetch with Google is not foolproof, it’s a good starting point.
Note: If you aren’t verified in GSC, try TechnicalSEO.com’s Fetch and Render As Any Bot tool.

After you’ve tested all this, what if something’s not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only option.

2. HTML SNAPSHOTS

What are HTML snapshots?

HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).

Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they’re important to understand, because in certain situations they’re necessary.

If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.

At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).
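As a rough sketch of that server-side detection, here’s what it might look like with Node.js and Express; the bot list and snapshot store are hypothetical stand-ins:

```javascript
const express = require('express');
const app = express();

// Hypothetical set of crawler user-agents to detect
const BOT_PATTERN = /googlebot|bingbot|facebookexternalhit|twitterbot/i;

// Stand-in for a real snapshot store (hypothetical)
function getSnapshot(path) {
  return '<!DOCTYPE html><html><body><h1>Pre-rendered snapshot of ' +
    path + '</h1></body></html>';
}

app.get('*', (req, res, next) => {
  const userAgent = req.get('User-Agent') || '';
  if (BOT_PATTERN.test(userAgent)) {
    // Serve the pre-rendered HTML snapshot to the bot
    return res.send(getSnapshot(req.path));
  }
  next(); // regular users get the normal JavaScript experience
});

app.listen(3000);
```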

Considerations

When considering HTML snapshots, you must consider that Google has deprecated this AJAX recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience truer to the user’s.

A second consideration factor relates to the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it’s considered a cloaking risk. Straight from the source:

“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.”
– Google Developer AJAX Crawling FAQs
Benefits

Despite the considerations, HTML snapshots have powerful advantages:

Knowledge that search engines and crawlers will be able to understand the experience.

Certain types of JavaScript may be harder for Google to grasp (cough… Angular (also colloquially referred to as AngularJS 2) …cough).

Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.

Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.

"It's not just Google understanding your JavaScript. It's also about the speed." -DOM - "It's not just about Google understanding your Javascript. it's also about your perceived latency." -DOM

Site latency

When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.

The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → “get everything above-the-fold in front of the user, ASAP.”

Critical Rendering Path – optimized rendering loads progressively, getting above-the-fold content in front of the user ASAP.

However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript.” Meaning: your JavaScript is blocking the page’s potential to appear as if it’s loading faster (also called: perceived latency).

Render-blocking JavaScript – Solutions

If you analyze your page speed results (through tools like Page Speed Insights Tool, WebPageTest.org, CatchPoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions:

Inline: Add the JavaScript within the HTML document.
Async: Make the JavaScript asynchronous (i.e., add the “async” attribute to the script tag).
Defer: Place the JavaScript lower within the HTML document, or add the “defer” attribute, so it runs after the critical content has loaded (as shown below).
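A sketch of all three approaches in markup (file names are hypothetical):

```html
<!-- 1. Inline: small, critical scripts embedded directly in the HTML -->
<script>
  console.log('critical inline script');
</script>

<!-- 2. Async: downloads in parallel and runs as soon as it arrives -->
<script async src="/js/analytics.js"></script>

<!-- 3. Defer: downloads in parallel but waits to run until the
     document has been parsed (also achievable by placing scripts
     at the bottom of the body) -->
<script defer src="/js/widgets.js"></script>
```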

!!! Important note: It’s important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.

Read more: Google Developer’s Speed Documentation

TL;DR – Moral of the story

Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable, obtainable, and isn’t developing site latency obstructions. The key = every situation demands testing. Based on the results, evaluate potential solutions.

Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.


3 Reasons You’re Not Getting the SEO Budget You Need to Be Successful

I have heard digital marketers say that Search Engine Optimization is “free” traffic. I want to help set the record straight by letting you know that SEO is not “free.” Budgets are still needed to write content and to have an SEO consultant work on a website.

There have been multiple times in my career that I needed to create a compelling argument for more budget for an SEO campaign. These types of recommendations are often challenged, yet thousands of dollars for PPC can be spent without blinking an eye. Some marketers would say it’s easier to see the ROI on ad spend compared to the SEO campaigns and initiatives that we are also running.

There are three reasons that SEO specialists struggle to get the budgets they want compared to their digital advertising counterparts, including:

SEO is a long-term strategy
Proving ROI for SEO
Potential SEO risks from blackhat tactics

Now is the time to learn how to showcase the importance of SEO and fight for the budget that it needs to make your website successful from an organic standpoint. Below are some top tips for getting the SEO budget that you want and need to show success.

SEO is a long-term strategy

The first step to getting the budget you request is educating your team and decision makers that SEO is a long-term strategy. Of course, there are SEO tactics that you can complete, but SEO takes time to work when done correctly.

By providing education and information, you are able to build trust from the start and set realistic expectations for all parties involved.

The competitive nature of your industry will also determine how much SEO budget you will need. Brands in competitive industries should expect to spend more to compete aggressively for visibility. Competitive industries usually take longer to penetrate from an SEO standpoint and will need a more advanced strategy to succeed.

Most seasoned SEO specialists can provide an estimate of when a website will see an organic benefit, especially when prioritizing tactics. State that, based on the list of prioritized SEO tactics, you estimate the website will begin to see organic performance improvements within a couple of weeks to months.

Proving ROI for SEO campaigns

This might be the most important reason you are not getting the budget you are requesting. SEO can be hard to attribute success to if you don’t plan for it. Tracking and monitoring your analytics is crucial to success. Often, SEO teams will report on the wrong metrics, like sessions, bounce rate and keyword rankings. Those vanity metrics are important to track, but they are not necessarily the most essential metrics to share with an executive. Instead, focus on showing how many conversions were generated through organic sessions.

Then, it’s important to come to a consensus on what those conversions normally net for ROI. That way you can show an estimate of revenue from the organic campaigns instead of raw conversion counts alone.
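To make the arithmetic concrete, here’s a tiny sketch with entirely hypothetical numbers:

```javascript
// Hypothetical numbers for illustration only.
const organicSessions = 10000;   // monthly sessions from organic search
const conversionRate = 0.02;     // agreed-upon organic conversion rate
const valuePerConversion = 150;  // agreed-upon average value per conversion ($)

const conversions = organicSessions * conversionRate;      // 200
const estimatedRevenue = conversions * valuePerConversion; // $30,000

console.log('Estimated monthly organic revenue: $' + estimatedRevenue);
```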

SEO teams can also leverage information about what the company is investing in PPC efforts to create a budget and SEO strategy to rank for those keywords organically, potentially saving the company money. Then, that money that is saved can be reallocated to other paid efforts or additional SEO campaigns.

The potential SEO risks of negative tactics from previous SEO consultants

Most marketers are aware of the negative or blackhat SEO tactics from the past (and present). The real issue with these tactics is that once they are discovered, it can be difficult to reverse the impact. More often than not, webmasters who have been hit with an SEO penalty can be reluctant to work with other SEO vendors. Additionally, there are multiple websites online that are willing to charge a small amount of money while providing guarantees that are meaningless. Blackhat tactics and empty guarantees shine a bad light on the search industry.

When searching for a potential agency or partner for SEO, be sure that they are not implementing blackhat SEO tactics as they can have a long-term impact.

Time to get your budget

SEO is a tried-and-true cornerstone tactic of successful digital marketing programs. If you’re struggling to get the budget you need, the tips above can help you secure an SEO budget that will help you meet your marketing goals.

If you’re on the hunt for an agency to help you meet your organic search goals, take a look at our Search Engine Optimization services to see if we are a fit for your needs.



New Site Crawl: Rebuilt to Find More Issues on More Pages, Faster Than Ever!

Posted by Dr-Pete

First, the good news — as of today, all Moz Pro customers have access to the new version of Site Crawl, our entirely rebuilt deep site crawler and technical SEO auditing platform. The bad news? There isn’t any. It’s bigger, better, faster, and you won’t pay an extra dime for it.

A moment of humility, though — if you’ve used our existing site crawl, you know it hasn’t always lived up to your expectations. Truth is, it hasn’t lived up to ours, either. Over a year ago, we set out to rebuild the back end crawler, but we realized quickly that what we wanted was an entirely re-imagined crawler, front and back, with the best features we could offer. Today, we launch the first version of that new crawler.

Code name: Aardwolf

The back end is entirely new. Our completely rebuilt “Aardwolf” engine crawls twice as fast, while digging much deeper. For larger accounts, it can support up to ten parallel crawlers, for actual speeds of up to 20X the old crawler. Aardwolf also fully supports SNI sites (including Cloudflare), correcting a major shortcoming of our old crawler.

View/search *all* URLs

One major limitation of our old crawler is that you could only see pages with known issues. Click on “All Crawled Pages” in the new crawler, and you’ll be brought to a list of every URL we crawled on your site during the last crawl cycle:

You can sort this list by status code, total issues, Page Authority (PA), or crawl depth. You can also filter by URL, status codes, or whether or not the page has known issues. For example, let’s say I just wanted to see all of the pages crawled for Moz.com in the “/blog” directory…

I just click the [+], select “URL,” enter “/blog,” and I’m on my way.

Do you prefer to slice and dice the data on your own? You can export your entire crawl to CSV, with additional data including per-page fetch times and redirect targets.

Recrawl your site immediately

Sometimes, you just can’t wait a week for a new crawl. Maybe you relaunched your site or made major changes, and you have to know quickly if those changes are working. No problem, just click “Recrawl my site” from the top of any page in the Site Crawl section, and you’ll be on your way…

Starting at our Medium tier, you’ll get 10 recrawls per month, in addition to your automatic weekly crawls. When the stakes are high or you’re under tight deadlines for client reviews, we understand that waiting just isn’t an option. Recrawl allows you to verify that your fixes were successful and refresh your crawl report.

Ignore individual issues

As many customers have reminded us over the years, technical SEO is not a one-size-fits-all task, and what’s critical for one site is barely a nuisance for another. For example, let’s say I don’t care about a handful of overly dynamic URLs (for many sites, it’s a minor issue). With the new Site Crawl, I can just select those issues and then “Ignore” them:

If you make a mistake, no worries — you can manage and restore ignored issues. We’ll also keep tracking any new issues that pop up over time. Just because you don’t care about something today doesn’t mean you won’t need to know about it a month from now.

Fix duplicate content

Under “Content Issues,” we’ve launched an entirely new duplicate content detection engine and a better, cleaner UI for navigating that content. Duplicate content is now automatically clustered, and we do our best to consistently detect the “parent” page. Here’s a sample from Moz.com:

You can view duplicates by the total number of affected pages, PA, and crawl depth, and you can filter by URL. Click on the arrow (far-right column) to see all of the pages in the cluster. Click anywhere in the current table row to get a full profile, including the source page we found the link on.

Prioritize quickly & tactically

Prioritizing technical SEO problems requires deep knowledge of a site. In the past, in the interest of simplicity, I fear that we’ve misled some of you. We attempted to give every issue a set priority (high, medium, or low), when the difficult reality is that what’s a major problem on one site may be deliberate and useful on another.

With the new Site Crawl, we decided to categorize crawl issues tactically, using five buckets:

Critical Crawler Issues
Crawler Warnings
Redirect Issues
Metadata Issues
Content Issues

Hopefully, you can already guess what some of these contain. Critical Crawler Issues still reflect issues that matter first to most sites, such as 5XX errors and redirects to 404s. Crawler Warnings represent issues that might be very important for some sites, but require more context, such as meta NOINDEX.

Prioritization often depends on scope, too. All else being equal, one 500 error may be more important than one duplicate page, but 10,000 duplicate pages is a different matter. Go to the bottom of the Site Crawl Overview Page, and we’ve attempted to balance priority and scope to target your top three issues to fix:

Moving forward, we’re going to be launching more intelligent prioritization, including grouping issues by folder and adding data visualization of your known issues. Prioritization is a difficult task and one we haven’t helped you do as well as we could. We’re going to do our best to change that.

Dive in & tell us what you think!

All existing customers should have access to the new Site Crawl as of earlier this morning. Even better, we’ve been crawling existing campaigns with the Aardwolf engine for a couple of weeks, so you’ll have history available from day one! Stay tuned for a blog post tomorrow on effectively prioritizing Site Crawl issues, and for a more in-depth look into how to use Site Crawl, check out the recorded webinar.


How to Get More Backlinks and Rank Higher on Google


Backlinks are still the major indicator Google uses to determine the authority of a blog. The better your backlinks, the better you’ll rank in search results. But they sure aren’t easy to get.

Building backlinks is a tricky business because many of the “easy” ways go against Google’s terms of service.

What’s more, if you build them in a way that even looks artificial you run the risk of getting a penalty that can be hard to undo.

Today we’re going to take a quick look at the good old backlink – the good and the bad – and talk about a few ways you can start to get more.

Hope it helps!

What is a backlink?

Let’s start with the basics in case any newer bloggers are a little bit unsure.

I’d recommend you go back and read this post on the basics of SEO so that you feel up to speed. But we can quite simply define a backlink as this:

A backlink is when another website links to your blog.

For example, that link above to the article about blogging SEO counts as a backlink for that article. In that case, the anchor text is “the basics of SEO” as those are the words you click to visit the article.

What do backlinks do?

It’s also important to remember that backlinks have several purposes and lead to quite a few different results, each of them different depending on whether you are the one linking out or being linked to.

Some of the main ones include:

They send traffic to your site
When you get a backlink from another blog you will get traffic from that blog as people click through to read the article. This is important even without the SEO benefits.
They make your blog seem authoritative
A backlink is a lot like an endorsement from a blog which is good for your branding as you appear more authoritative if you get backlinks from other quality sites.
They provide new resources for readers
When you link out to other blogs it gives your readers new information. Don’t be afraid to link out a lot as it shows that your posts are well researched and valuable sources of new information.
They get you noticed by other site owners
When you link to another blog and someone clicks through, that click shows up as a visit in the other site’s statistics program. This is a good way to get other blogs to notice your blog and should form a valuable part of your blogging strategy.
They help you rank better on Google
Of course, Google uses hundreds of factors to determine how well your blog ranks (site age, freshness, location, points of difference, load time, etc.) but backlinks are still a major one. If you have lots of quality links from relevant, naturally earned sources then you will rank better.

For the purposes of this post we’re just going to focus on how to earn backlinks, rather than going into detail about the different types and the effects they have on your blog. This post should hopefully be a starting point for more research.

How to get more backlinks

Okay, so let’s jump into the tofu and potatoes of the post and start looking at the different ways you can get more backlinks to your blog.

Remember, you always want to earn your links in a natural way. I really don’t advocate anything that is cheating or goes against Google’s terms of service. We’ll talk more about that at the end of the article.

1. Produce something different

One of the most common pieces of advice you get about SEO is that you have to produce quality content. But, to be honest, that’s not the whole picture.

If you look at the front page of Google these days you’ll notice that all the results are slightly different. For example, you’ll see tweets, long form articles, tools, calculators, news items, etc.

What this means is that it’s a good idea to approach your target keywords with content that is slightly different to the rest. If you can make something that stands out from the crowd, it is a lot more likely that you’ll be linked to, because other people will see it as something valuable and exciting.

2. Link out to bloggers

A lot of bloggers seem scared to link to lots of other sites for fear of losing page rank or something like that.

In fact, linking out to other bloggers is actually a really good way to get more backlinks yourself because, as mentioned above, you get on the radar of those blogs.


For example, if you look at sites like Quicksprout you’ll notice that the articles are filled with links. This is partly because Neil wants to give bloggers more resources, and partly because he knows that when he links to people he’s more likely to get shares, links, and promotion. It’s a bit of an unspoken rule of blogging:

It's an unspoken rule of blogging that if you link to me I'll promote your post (assuming it's decent). #blogging

— Ramsay (@BlogTyrant) June 3, 2017

Try to include at least one link per paragraph and make sure it goes to an up-to-date website that is very useful and helpful. Once your post goes live you can tweet to the owner and let them know you’ve included their blog.

3. Explore local options

Many states and local councils have websites or online magazines that regularly feature local businesses as a way to support and encourage them.

Sometimes it’s just a link in a directory so that people can search by topic, but other times it’s possible to find a way to get mentioned by creating some content that helps their website.


For example, the Sydney City Council has a separate website for local environmental initiatives. If your business or blog was doing something to help this issue or had some advice for other businesses it’s quite possible you could submit an article to them.

Even if you have to work a little bit harder by holding a physical workshop one evening, it can be worth it because links from government websites are extremely valuable from an SEO point of view.

It’s worth asking around your local council, suburb, etc. to see if there are any online magazines or websites where you might be able to contribute something in your niche as a way to help the locals and add content to their website.

4. Explore educational options

Another similar option is schools and universities that have separate spaces for students and teachers to write about issues relating to student life or post-grad options. Often they will be open to working on an article in order to provide more resources for students.

This is particularly relevant if you are a graduate of the university. For example, many colleges love to feature past students who have gone on to succeed in their area and would be happy to feature a story.

The most important thing here is to do your research and find the right person and topic that will be interesting. Colleges are extremely strict about what they publish and often editorials have to go through many people before being approved. A regular old guest post pitch won’t work here. We need to be creative.

5. Collaborate to host/develop something with another blogger

If you follow guys like Pat Flynn and Chris Ducker you’ll notice that they often do extraordinarily helpful business events together.

This is a really clever idea because both individuals then go away and promote the event to their own audiences, which has the dual effect of building interest for the event and also tapping into the other guy’s readers.

What’s more, this type of event helps internet business people succeed and, as such, they are much more likely to link out to the event or the people that helped them.


You can also develop tools like the quiz that Slavko from Fitness Updated and I produced last year (you can try it out at the bottom of this post). This was a fantastic collaboration and we both earned new links from each other in the process.

Try and think about something that is missing in your niche and then make contact with another blogger who might be interested in teaming up with you to either develop it or just promote it. Often if you’ve already done the hard work and are willing to give naming rights to someone else you can get a really big link.

6. Write valuable content off-site (not just guest posts)

A lot of SEO advice is aimed at improving the content and optimization of our own blogs – and rightly so – but we should also spend a bit more time creating content on other sites.

For example, we all know that guest posts can be important in this respect (although the advice has changed somewhat) but there are a lot of other options that can really help to build backlinks.


In the Internet Marketing world there are sites like Inbound.org that have an original articles section where experts write about topics in depth. You can, of course, include links to your own blog in these, but the main advantage is getting in front of other bloggers and site owners who might link to you if they discover you through that platform.

You can also do this kind of thing on LinkedIn and Medium, as they really favor long form content; well-crafted pieces seem to do extremely well there and sometimes even get picked up by news sites. And that leads us on to the next point.

7. Hire a publicist

A lot of people consider publicists to be a little bit out of date and perhaps not in touch with the new digital age. But nothing could be further from the truth. Unless you’re John Miller.

While quite expensive, a good publicist can get you featured in media outlets that would otherwise be virtually impossible to crack. If, for example, you have a blogging event or launch coming up, you can hire a publicist who can help you craft an angle to an interview or a feature article.

This is a good one to explore because links from news sites are extremely valuable in terms of their impact on SEO, but they also send a lot of traffic and regularly get syndicated to other outlets around the world.

When looking for a publicist start in your local area and ask them specifically if they have contacts in newspapers that also have online versions. A mention in a newspaper will still bring a lot of traffic, but you also want that long-lasting benefit of a backlink.

And before you get started make sure you read this.

8. Help out other sites where they might not know they need it

The last idea I wanted to mention tonight is something that can be a bit hit and miss but, when you get it right, it can work wonders.

It is centered around the idea of contacting other bloggers in your niche and helping them out in some way that they might not have expected. For example, if you notice some glaring errors on their site you could offer them a fix for free, and then later on pitch more improvements in exchange for a post about the process.

Similar to this is the idea of interning or helping out on projects. For example, last week I mentioned my friend Vishal who I met through the comments section on Blog Tyrant. He was always so friendly and eager to learn and, after a while, I paid him to do a few tasks and he ended up getting links from the site.

So perhaps take a look around the blogs in your niche and see if you can come up with something that is useful to them and get involved in a very non-spammy way. It might be putting together a high quality PDF audit, or assisting them with something like managing comments if you can see that they are struggling.

What about buying links?

I couldn’t end this post on how to get more links without talking about the most obvious and nefarious way: buying them.

The sad fact of the matter is that, in a lot of cases, buying links still seems to work for people. I can categorically say that I have never once purchased a single link in my whole career, but I do sometimes think I’m missing out when I see crappy sites competing in my niche that clearly didn’t earn the links they have.

That being said, those sites often don’t last for very long. All it takes is a manual review from Google and you have one of those pesky penalties.

Personally I don’t agree with buying links, but I know a lot of people hit back at that argument by saying that Google unfairly favors a lot of big websites, and they often take content off those sites and put it directly on the search results so that no one clicks through. In other words, they’ve made the playing field harder. And I can sympathize with that.

But I still believe that if you want a long term, sustainable blog then the best way to build links is to do it naturally by earning them with clever content, useful tools, and concerted outreach.

How do you build links?

I’d really like to know if you use any other link building methods that you think might be worth mentioning. This is a constantly evolving and changing space and it would be really cool to see what other bloggers are working on.

Please leave a comment and let us know.

Top photo © BDanomyte

How to Measure Performance with Custom Dimensions in Google Analytics [Tutorial]

Posted by tombennet

Data-driven marketing means understanding what works. This means not only having accurate data, but also having the right data.

Data integrity is obviously critical to good reporting, but Analytics auditing shouldn’t focus solely on the validity of the tracking code. Even amongst digital marketing teams who place importance on reporting, I frequently encounter the attitude that a technically sound, out-of-the-box implementation of Google Analytics will provide all the insight you could require.

Because of this, Google Analytics is rarely used to its full potential. When it comes to deeper insights — analyzing the ROI of top-of-funnel marketing activities, the impact of content engagement on raw business KPIs, or the behavior of certain subsets of your audience, for example — many will overlook the ease with which these can be measured. All it takes is a little investment in your tracking setup and a careful consideration of what insight would be most valuable.

In this article, I’ll be exploring the ways in which the Custom dimensions feature can be used to supercharge your Google Analytics reporting setup. We’ll run through some practical examples before diving into the various options for implementation. By the end, you’ll be equipped to apply these techniques to your own reporting, and use them to prove your prowess to your clients or bosses.

What are custom dimensions?

In a nutshell, they enable you to record additional, non-standard data in Google Analytics. You can then pivot or segment your data based on these dimensions, similarly to how you would with standard dimensions like source, medium, city, or browser. Custom dimensions can even be used as filters at the View-level, allowing you to isolate a specific subset of your audience or traffic for deeper analysis.

In contrast to the Content Grouping feature — which allows you to bucket your existing pages into logical groups — custom dimensions let you attach entirely new data to hits, sessions, or users. This last point is critical; custom dimensions can take advantage of the different levels of scope offered by Google Analytics. This means your new dimension can apply to an individual user and all their subsequent interactions on your website, or to a single pageview hit.

For the purposes of this tutorial, we’re going to imagine a simple scenario: You run a popular e-commerce website with a content marketing strategy that hinges around your blog. We’ll start by illustrating some of the ways in which custom dimensions can provide a new perspective.

1. User engagement

You publish a series of tutorials on your blog, and while they perform well in organic search and in social, you struggle to demonstrate the monetary value of your continued efforts. You suspect that engagement with the tutorials correlates positively with eventual high-value purchases, and wish to demonstrate this in Analytics. By configuring a user-level custom dimension called “Commenter” which communicates a true/false depending on whether the user has ever commented on your blog, you can track the behavior of these engaged users.

2. User demographics

User login status is frequently recommended as a custom dimension, since it allows you to isolate your existing customers or loyal visitors. This can be a great source of insight, but we can take this one step further: Assuming that you collect additional (anonymous) data during the user registration process, why not fire this information to Analytics as a user-level custom dimension? In the case of our example website, let’s imagine that your user registration form includes a drop-down menu for occupation. By communicating users’ selections to Analytics, you can compare the purchase patterns of different professions.

3. Out-of-stock products

Most e-commerce sites have, at one time or another, encountered the SEO conundrum of product retirement. What should you do with product URLs that no longer exist? This is often framed as a question of whether to leave them online, redirect them, or 404 them. Less frequently investigated is their impact on conversion, or the wider behavioral effects of stock levels in general. By capturing out-of-stock pageviews as a custom dimension, we can justify our actions with data.

Now that we have a clear idea of the potential of custom dimensions, let’s dive into the process of implementation.

How to implement custom dimensions

All custom dimensions must first be created in the Google Analytics Admin interface. They exist on the Property level, not the View level, and non-premium Google Analytics accounts are allowed up to 20 custom dimensions per Property. Expand Custom Definitions, hit Custom Dimensions, and then the red New Custom Dimension button.

[Screenshot: creating-custom-dimensions-1.png]

In the next screen, you’ll need to give your dimension a name, select a Scope (hit, session, user, or — for enhanced e-commerce implementations — product), and check the Active box to enable it. Hit Create, and you’ll be shown a boilerplate version of the code necessary to start collecting data.

[Screenshot: our-custom-dimension.png]

The code — which is documented fully on Google Developers and Google Support — is very simple:

var mozDimensionValue = 'Howdy Moz Fans';
ga('set', 'dimension1', mozDimensionValue);

As you can see, we’re defining the value of our dimension in a JavaScript variable, then using the set method with the ga() command queue to pass that variable to Analytics as a custom dimension. All subsequent hits on the page (pageviews, events, etc) would then include this custom dimension. Note that we refer to our dimension by its index number, which in this instance is 1; return to the main Custom Dimensions screen in the Admin area to see the index number which Analytics assigned to your new dimension.
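Putting that together, here’s a minimal sketch of a full snippet with the dimension set before the pageview is sent, assuming a standard analytics.js setup (the property ID is a placeholder):

ga('create', 'UA-XXXXX-Y', 'auto');

// Set the custom dimension before sending any hits, so the pageview
// below (and any later hits on this page) will carry it
ga('set', 'dimension1', 'Howdy Moz Fans');
ga('send', 'pageview');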

While your developer will typically handle the nuts and bolts of implementation — namely working out how best to pass your desired value into a JavaScript variable — the syntax is simple enough that it can be modified with ease. Using the first of our examples from earlier — tracking commenters — we want to send a value of ‘commenter’ to the Dimension 2 slot as part of an event hit which is configured to fire when somebody comments on the blog. With this slot pre-configured as a user-level dimension, we would use:

ga('send', 'event', 'Engagement', 'Blog Comment', {
  'dimension2': 'commenter'
});

This approach is all well and good, but it’s not without its drawbacks. It requires on-page tracking code changes, significant developer involvement, and doesn’t scale particularly well.

Thanks to Google Tag Manager, we can make things much easier.

Implementation with Google Tag Manager

If you use GTM to deploy your Analytics tracking — and for all but the simplest of implementations, I would recommend that you do — then deploying custom dimensions becomes far simpler. For those new to GTM, I gave an introductory talk on the platform at BrightonSEO (slides here), and I’d strongly suggest bookmarking both Google’s official documentation and Simo Ahava’s excellent blog.

For the sake of this tutorial, I’ll assume you’re familiar with the basics of GTM. To add a custom dimension to a particular tag — in this case, our blog comment event tag — simply expand “Custom Dimensions” under More Settings, and enter the index number and value of the dimension you’d like to set. Note that to see the More Settings configuration options, you’ll need to check the “Enable overriding settings in this tag” box if you’re not using a Google Analytics Settings Variable to configure your implementation.

[Screenshot: gtm.png]

What about our latter two examples, user demographics and out-of-stock products?

Our demographic scenario involved a user registration form which included an “Occupation” field. In contrast to our commenting example, the dimension value in this instance will need to be set programmatically depending on user input — it’s not a simple true/false variable that can be easily attached to a suitable event tag.

While we could use the “DOM Element” variable type to scrape the value of the “Occupation” drop-down field directly off the page, such an approach is not particularly scalable. A far better solution would be to fire the value of the field — along with the values of any other fields you feel may offer insight — to your website’s data layer.

Attention, people who don’t yet use a data layer:

While your development team will need to be involved in the implementation of a data layer, it’s well worth the effort. The advantages for your reporting can be huge, particularly for larger organizations. Defining the contents of your site’s data layer is a great opportunity for cross-team collaboration, and means that all potentially insightful data points are accessible in a machine-readable and platform-agnostic format, ready to be fired to GA. It’s also less subject to mistakes than ad-hoc tracking code. Much like how CSS separates out style from content, the data layer isolates your data.
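As a rough sketch, the data layer is simply a JavaScript array declared above the GTM container snippet, so that its values are available when the container loads (the key name below is illustrative, not a standard):

// Declared in the page source, above the GTM container snippet
window.dataLayer = window.dataLayer || [];
dataLayer.push({
  'pageType': 'product' // illustrative key; define a schema that suits your site
});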

Your developer will need to make the required information available in the data layer before you can define it as a Data Layer Variable in GTM and start using it in your tags. In the example below, imagine that the JavaScript variable ‘myValue’ has been configured to return the occupation entered by the user, as a string. We push it to the data layer, then define it as a Data Layer Variable in GTM:

var myValue = 'Professional Juggler';
// Push the variable itself, not the string 'myValue'
dataLayer.push({'userOccupation': myValue});

[Screenshot: gtm-dlv.png]

Attach a custom dimension to your User Registration event tag, as before, then simply reference this Data Layer Variable as the dimension value. Done!

Our third example follows the exact same principles: Having identified product-in-stock status as a hit-level datapoint with potential reporting insight, and with our data layer configured to return this as a variable on product pages, we simply configure our pageview tag to use this variable as the value for a new custom dimension.
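As a sketch, the product page’s push might look something like this (the key name 'productInStock' is an assumption for illustration, and would need to match the Data Layer Variable you define in GTM):

// On each product page, push the stock status to the data layer
dataLayer.push({
  'productInStock': 'out-of-stock'
});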

[Screenshot: gtm-stock.png]

Reporting & analysis

The simplest way to view custom dimension data in Analytics is to apply a secondary dimension to a standard report. In the example below, we’ve set our new “User Occupation” dimension as the secondary dimension in a New/Returning visitor report, allowing us to identify the professions of our newest users, and those of our frequent visitors.

[Screenshot: secondary-dim.png]

By cross-referencing your new dimensions with behavioral data — think social share frequency by occupation — you can gain insight into the subsets of your audience who are most likely to engage or convert.

In truth, however, applying a secondary dimension in this manner is rarely conducive to effective analysis. In many instances, this approach will hugely increase the number of rows of data in your report without providing any immediately useful information. As such, it is often necessary to take things one step further: You can export the data into Excel for deeper analysis, or build a custom dashboard to pivot the data exactly the way you want it. In the example below, a chart and table have been configured to show our most viewed out-of-stock products over the course of the last week. Timely, actionable insight!

[Screenshot: dashboard.png]

Sometimes, it’s necessary to completely isolate a subset of data in a dedicated view. This can be particularly powerful when used with a user-level custom dimension. Let’s say we wish to drill down to show only our most engaged users. We can do this by applying a Filter to a new view. In the following example, we have applied a custom ‘Include’ Filter which specifies a value of ‘commenter’ based on our “Commenter” custom dimension.

[Screenshot: filter-include.png]

The result? A dedicated view which reports on engaged users only.

For more information on the intricacies of filtering data based on session or user-level custom dimensions — and their implications for your Real Time reports — be sure to check out this great post from LunaMetrics.

Final thoughts

A deeper understanding of your target audience is never a bad thing. Custom dimensions are just one of the many ways in which Google Analytics can be extended beyond its default configuration to provide more granular, actionable insights tailored to the needs of your business.

As with many other advanced Analytics features, execution is everything. It’s better to have no custom dimensions at all than to waste your limited slots with dimensions which are poorly implemented or just plain unnecessary. Planning and implementation should be a collaborative process between your marketing, management, and development teams.

Hopefully this article has given you some ideas for how custom dimensions might offer you a new perspective on your audience.

Thanks for reading!


Free SEO

Hello and welcome to TechyBytes.com. I’m John Stevenson, the owner of Techy Bytes Consulting. We specialize in search engine marketing and ranking business websites on the 1st page of the Google search results. To close out the year, we’re bringing back our “Free SEO Program”. The details are below, but first I’d like to briefly discuss how SEO can help your business grow in 2016.

Maybe you’ve tried SEO before or currently have an SEO company working on your website optimization for you. The reality is that if you aren’t on the 1st page of the Google search results for the most important keywords in your local market, it’s costing you thousands or even tens of thousands of dollars in lost revenue each and every month. To make matters worse, that revenue is going directly to your competitors.

If you haven’t tried SEO in the past or you’re not entirely sure what SEO is, Search Engine Optimization (SEO) is the process of making your website easier for the major search engines (Google, Yahoo and Bing) to find, and also letting them know what your website is all about. The search engines use a complex algorithm to determine which websites are the most relevant to the requested search term, and that’s how websites are ranked in the results. You might have the most relevant business or website based on a search term, but it’s possible that the search engines may never find your site because it doesn’t have the proper optimization. And if the search engines don’t find your site, your potential customers aren’t going to find it either.

I think you’ll agree that when you’re looking for a product or service, you probably start your search on the internet by typing in a couple of words describing what you’re looking for. This may lead you to exactly what you want, or you might end up typing in a few different words to view some different results. This is how I find things on a daily basis, and almost everyone I know does the same thing. So if your business website isn’t coming up when those searches are being made in your market, those customers go directly to your competitors. No matter what business or city you’re in, there are hundreds if not thousands of people searching for your exact product or service every month.

The type of high quality SEO Techy Bytes offers is extremely effective in ranking websites on the first page of Google. And typically we don’t just rank your website on the first page; we are also able to rank your Facebook page, Twitter page, Yelp page, a Youtube video and other properties on the first page as well. As you can imagine, this type of first page domination is VERY effective at bringing new customers and revenue into your business. The type of SEO work we do is “Google friendly” and has stood the test of time against Google’s latest penalties and algorithm changes. We […]


Avoid the Google Panda Penalty

The question for many business owners is how to implement an effective Phoenix SEO strategy without being penalized by the search engines. This is a very important question, because using poor SEO measures can quickly result in a lowering of your site’s ranking and leave you so far down in the search results that you’ll likely never be found by potential customers looking for the services or products you have to offer.

When you own a business, an online presence is essential. This is true whether you operate a traditional brick-and-mortar business and just use your website for keeping in touch with potential customers and providing company information, or your business is solely an online enterprise. Of course, having a website means making sure you are able to direct traffic to your site. It doesn’t do any good to have a website that nobody can find. Driving traffic and potential customers to your site is the key focus of effective search engine optimization (SEO).

Back in 2011, Google made changes to the search algorithm it uses to determine the ranking of websites. Many businesses were hit with the Google Panda penalty and dropped significantly in the search engine rankings. This can be detrimental because most people turn to a major search engine like Google to find everything from a great restaurant to a plethora of household products. In fact, a quick Google search is often the first step in any purchase of goods or services. Clearly, as a business owner, you want to ensure that your site is at the top of the search results when products or services you offer are entered into a Google search.

While the Google Panda penalty primarily focused on poor quality sites that offered very little value, in April 2012 Google rolled out another change to its ranking algorithm. This new change was called Penguin. Much like the Google Panda penalty, many sites suffered and lost their rankings due to the Google Penguin penalty. Penguin updates have been aimed at targeting sites that use Black Hat SEO strategies. These SEO measures often include things like keyword stuffing, invisible text and link spamming.

Google frequently makes changes to its ranking algorithm, and the only way to ensure your site is not penalized in the rankings is to use high quality SEO strategies that specifically meet Google’s idea of a quality site. Techy Bytes can help you avoid being penalized by Google.
