Archive for January, 2016

Targeted Link Building in 2016 – Whiteboard Friday



Posted by randfish

SEO has much of its roots in the practice of targeted link building. And while it’s no longer the only core component involved, it’s still a hugely valuable factor when it comes to rank boosting. In this week’s Whiteboard Friday, Rand goes over why targeted link building is still relevant today and how to develop a process you can strategically follow to success.

Click on the whiteboard image above to open a high resolution version in a new tab!

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about four questions that kind of all go together around targeted link building.

Targeted link building is the practice of reaching out and trying to individually bring links to specific URLs or specific domains — usually individual pages, though — and trying to use those links to boost the rankings of those pages in search engine results. And look, for a long time, this was the core of SEO. This was how SEO was done. It was almost the start and the end.

Obviously, a lot of other practices have come into play in the industry, and I think there’s even been some skepticism from folks about whether targeted link building is still a valid practice. I think we can start with that question and then get on to some of these others.

When does it make sense?

In my opinion, targeted link building does make sense when you fulfill certain conditions. We know from our experimentation, from correlation data, from Google’s own statements, and from lots of industry data that links still move the needle when it comes to rankings. If you have a page that’s ranking number 4 and you point a bunch of new links at it from important pages and sites around the web, particularly links containing the anchor text you’re trying to rank for, you will move up in the rankings.

It makes sense to do this if your page is already ranking somewhere in the, say, top 10 to 20, maybe even 30 results and/or if the page has measurable high impact on business metrics. That could be sales. It could be leads. It could be conversions. Even if it’s indirect, if you can observe both those things happening, it’s probably worthwhile.

It’s also okay if you say, “Hey, we’re not yet ranking in the top 20, but our paid search page is ranking on page 1. We know that we have high conversions here. We want to move from page 3, page 4 up to page 1, and then hopefully up into the top two, top three results.” Then it is worth this targeted link building effort, because when you build up that visibility, when you grow those rankings, you can be assured that you are going to gain more visits, more traffic that will convert and send you these key business metrics and push those things up. So I do think targeted link building still makes sense when those conditions are fulfilled.

Is this form of link building worthwhile?

Is this something that can actually do the job it’s supposed to do? And the answer is yes. Look, if rank boosting is your goal, links are one of the ways to get there. If you already have a page that’s performing well from a conversion standpoint and a user experience standpoint (pages per visit, browse rate, time on site, a reasonable bounce rate), and the page itself is clearly accessible, well targeted, and well optimized, then links are going to be one of the most powerful, if not the most powerful, elements in moving your rankings. But you’ve got to have a scalable, repeatable process to build links.

You need the same thing that we look for broadly in our marketing practices, which is that flywheel. Yes, it’s going to be hard to get things started. But once we do, we can find a process that works for us again and again. Each successive link that we get and each successive page whose rankings we’re trying to move gets easier and easier because we’ve been there before, we’ve done it, we know what works and what doesn’t work, and we know the ins and outs of the practice. That’s what we’re searching for.

When it comes to finding that flywheel, the tactics that still work tend to fit into three categories. I’m not going to get into the individual tactics themselves, but they fall into these three buckets. What we’ve found is that for each individual niche, for each industry, for each different website, and for each link builder (each SEO, each one of you out there), there’s a process or combination of processes that works best. So I’m not going to dictate to you which tactics work best, but you’ll generally find them in these three buckets.

Buckets:

One: one-to-one outreach. This is you going out and sending usually an e-mail, but it could be a DM or an @-reply tweet. It could be a phone call. It could be (I literally got one of these today) a letter in the mail, hand-addressed to me, from someone who’d created a piece of content and wanted to know if I would be willing to cover it. It wasn’t exactly up my alley, so I’m not going to. But I thought that was an interesting form of one-to-one outreach.

Two: broadcast. Broadcast is things like social sharing, where we’re broadcasting out a message like, “Hey, we’ve produced this. It’s finally live. We launched it. Come check it out.” That could go through bulk e-mail. It could go through an e-mail subscription. It could go through a newsletter. It could go through press. It could go through a blog.

Three: paid amplification. That’s things like social ads, native ads, retargeting, display, all of these different formats. Typically, what you’re going to find is that one-to-one outreach is most effective when you can build up those relationships and when you have something that is highly targeted at a single site, single individual, single brand, single person.

Broadcast works well if, in your niche, certain types of content or tools or data gets regular coverage and you already reach that audience through one of your broadcast mediums.

Paid amplification tends to work best when you have an audience that you know is likely to pick those things up and potentially link to them, but you don’t already reach them through organic channels, or you need another shot at reaching them through both organic and paid channels.

Building a good process for link acquisition

Let’s end here with the process for link acquisition. I think this is kind of the most important element here because it helps us get to that flywheel. When I’ve seen successful link builders do their work, they almost all have a process that looks something like this. It doesn’t have to be exactly this, but it almost always falls into this format. There’s a good tool I can talk about for this too.

The idea is that the first step is opportunity discovery, where we figure out where our link opportunities are. Step 2 is building an acquisition spreadsheet of some kind so that we can prioritize which links we’re going to chase after and what tactics we’re going to use. Step 3 is the execute, learn, and iterate process that we always find with any sort of flywheel or experimentation.

Step 1: Opportunity discovery

We might find that, for the links we’re trying to get, relevant communities are a great way to acquire them. We reach out via forums or Slack chat rooms, or it could be something like a private chat, or it could be IRC. It could be a whole bunch of different things. It could be blog comments.

Maybe we’ve found that competitive links are a good way for us to discover some opportunities. Certainly, for most everyone, competitive links should be on your radar, where you go and you look and you say, “Hey, who’s linking to my competition? Who’s linking to the other people who are ranking for this keyword and ranking for related keywords? How are they getting those links? Why are those people linking to them? Who’s linking to them? What are they saying about them? Where are they coming from?”

It could be press and publications. There are industry publications that cover certain types of data or launches or announcements or progress or what have you. Perhaps that’s an opportunity.

Resource lists and linkers. There are still a ton of places on the web where people link out to resources. “Here’s a good set of resources around customer on-boarding for software-as-a-service companies.” Oh, you know what? We have a great post about that. I’m going to reach out to the person who runs this list of resources and see if maybe they’ll cover it. Or we put together a great meteorology map looking at the last 50 winters in the northeast of the United States, showing a visual graphic overlay of that charted against global warming trends, and maybe I should share that with the Royal Meteorological Society of England. I’m going to go pitch their person at whatever.ac.uk it is.

Blog and social influencers. These are folks who tend to run, obviously, popular blogs or popular social accounts on Twitter or on Facebook or on LinkedIn, or what have you, Pinterest. It could be Instagram. Potentially worth reaching out to those kinds of folks.

Feature, focus, or intersection sources. This one’s a little more complex, but the idea is to find an intersection between some element you’re providing through the content of your page (something you could earn a link for) and things that other organizations or people have an interest in.

So, for example, with my meteorology example, you might say, “Lots of universities that run meteorology courses would probably love an animation like this. Let me reach out to professors. Or, you know what? I know there’s a data-graphing startup that often features interesting data-graphing work, and it turns out we used one of their frameworks. So let’s go reach out to that startup, check out the GitHub project, see who the author is, ping that person, and see if maybe they would want to cover it or link to it or share it on social.” All those kinds of things. You’ve found the intersections of overlapping interests.

The last one: biz dev and partnerships. This is certainly not a comprehensive list. There could be tons of other potential opportunity discovery mechanisms. This covers a lot of them, and a lot of the ones that tend to work for link builders. But you can and should think of many other ways that you could potentially find new opportunities for links.

Step 2: Build a link acquisition spreadsheet

Gotta build that link acquisition spreadsheet. The spreadsheet almost always looks something like this. It’s not that dissimilar to how we do keyword research, except we’re prioritizing things based on: How important is this and how much do I feel like I could get that link? Do I have a process for it? Do I have someone to reach out to?

So what you want is either the URL or the domain from which you’re trying to get the link. The opportunity type: maybe it’s a partnership or a resource list or press. The approach you’re going to take, and the contact information you’ve got. If you don’t have that yet, the first thing on your list is probably to go get it. Then the link metrics around the site or page.
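
If it helps to see the shape of that spreadsheet, here’s a minimal sketch in Python that writes one out as a CSV. The column names and the example row are illustrative assumptions, not a prescribed format:

    import csv

    # Illustrative columns for a link acquisition spreadsheet; adjust to taste.
    columns = ["url_or_domain", "opportunity_type", "approach",
               "contact", "link_metrics", "priority"]
    rows = [
        ["example.com/resources", "resource list", "one-to-one outreach",
         "editor@example.com", "DA 54", "high"],
    ]

    with open("link_acquisition.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)   # header row
        writer.writerows(rows)     # one row per link opportunity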

There’s a good startup called BuzzStream that offers a system like this, where you can build those targeted link outreach lists. It can certainly be helpful. I know a lot of folks like using tools like Open Site Explorer, Followerwonk, Ahrefs, and Majestic to find and fill in a bunch of these data points.

Step 3: Execute, learn, and iterate

Once we’ve got our list and we’re going through the process of actually using these approaches and these opportunity types and this contact information to reach out to people, get the links that we’re hoping to get, now we want to execute, learn, and iterate. So we’re going to do some forms of one-to-one outreach where we e-mail folks and we get nothing. It just doesn’t work at all. What we want to do is try and figure out: Why was that? Why didn’t that resonate with those folks?

We’ll do some paid amplification that just reaches tens of thousands of people, low cost per click, no links. Just nothing, we didn’t get anything. Okay, why didn’t we get a response? Why didn’t we get people clicking on that? Why did the people who clicked on it seem to ignore it entirely? Why did we get no amplification from that?

We can have those ideas and hypotheses and use that to improve our processes. We want to learn from our mistakes. But to do that, just like investments in content and investments in social and other types of investments in SEO, we’ve got to give ourselves time. We have to talk to our bosses, our managers, our teams, our clients and say, “Hey, gang, this is an iterative learning process. We’re going to figure out what forms of link building we’re good at, and then we’re going to be able to boost rankings once we do. But if we give up because we don’t give ourselves time to learn, we’re never going to get these results.”

All right, look forward to your thoughts on tactical link building and targeted link building. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com





How Processors Work



An in-depth look into what gives your computer its brain power

When asked how a central processing unit works, you might say it’s the brain of the computer. It does all the math and makes logical decisions based on certain outcomes. However, even though today’s modern high-end processors are built upon billions of transistors, they’re still made up of basic components and foundations. Here, we’ll go over what goes on in most processors and the foundations they’re built on.

This graphic is a block diagram of Intel’s Nehalem architecture that we can use to get an overview. While we won’t be going over this particular design (some of it is specific to Intel’s processors), what we’ll cover does explain most of what’s going on.

[Image: block diagram of Intel’s Nehalem architecture]

The Hard Stuff: Components of a Processor

Most modern processors contain the following components:

  • A memory management unit, which handles memory address translation and access
  • An instruction fetcher, which grabs instructions from memory
  • An instruction decoder, which turns instructions from memory into commands that the processor understands
  • Execution units, which perform the operation; at the very least, a processor will have an arithmetic and logic unit (ALU), but a floating point unit (FPU) may be included as well
  • Registers, which are small pieces of memory that hold important data

The memory management unit, instruction fetcher, and instruction decoder form what is called the front-end. This is a carryover from the old days of computing, when front-end processors would read punch cards and turn the contents into tape reels for the actual computer to work on. Execution units and registers form the back-end.

Memory Management Unit (MMU)

The memory management unit’s primary job is to translate addresses from virtual address space to physical address space. Virtual address space allows the system to make programs believe the entire address space possible is available, even if physically it’s not. For instance, in a 32-bit environment, the system believes it has 4GB of address space, even if only 2GB of RAM is installed. This is to simplify programming since the programmer doesn’t know what kind of system will run the application.

The other job of the memory management unit is access protection. This prevents an application from reading or writing in another application’s memory address without going through the proper channels.
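
To make those two jobs concrete, here’s a toy sketch in Python of a single-level page table doing both address translation and access protection. The page size, table layout, and flag names are assumptions for illustration; real MMUs use multi-level tables maintained by the OS and walked in hardware:

    PAGE_SIZE = 4096

    # Hypothetical one-level page table for one process: virtual page number
    # maps to (physical frame number, writable flag).
    page_table = {
        0: (7, True),    # virtual page 0 lives in physical frame 7
        1: (3, False),   # a read-only page
    }

    def translate(virtual_addr, write=False):
        page, offset = divmod(virtual_addr, PAGE_SIZE)
        if page not in page_table:
            raise RuntimeError("page fault: page not mapped")
        frame, writable = page_table[page]
        if write and not writable:
            raise RuntimeError("protection fault: page is read-only")
        return frame * PAGE_SIZE + offset

    print(hex(translate(0x10)))           # 0x7010
    print(hex(translate(PAGE_SIZE + 8)))  # 0x3008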

Instruction Fetcher and Decoder

As their names suggest, these units grab instructions and decode them into operations. Notably, in modern x86 designs, the decoder turns the instructions into micro-operations that the next stages will work with. In modern processors, what comes out of the decoder typically feeds into a control unit, which figures out the best way to execute the instructions. Some of the techniques employed include branch prediction, which tries to figure out what will be executed if a branch takes place, and out-of-order execution, which rearranges instructions so they’re executed in the most efficient way.
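
As a rough illustration of the fetch-decode-execute cycle these units drive, here’s a toy interpreter in Python. The instruction format and opcode names are invented for this sketch; a real fetcher reads raw bytes and the decoder turns them into operations:

    # Toy "memory" holding pre-decoded instructions.
    memory = [
        ("LOAD", "r0", 5),      # r0 = 5
        ("LOAD", "r1", 7),      # r1 = 7
        ("ADD",  "r0", "r1"),   # r0 = r0 + r1
        ("HALT", None, None),
    ]
    registers = {"r0": 0, "r1": 0}
    pc = 0  # program counter: where the next instruction is fetched from

    while True:
        opcode, a, b = memory[pc]  # fetch the next instruction
        pc += 1
        if opcode == "LOAD":
            registers[a] = b
        elif opcode == "ADD":
            registers[a] = registers[a] + registers[b]
        elif opcode == "HALT":
            break

    print(registers)  # {'r0': 12, 'r1': 7}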

Execution Units

The bare minimum a general processor will have is the arithmetic and logic unit (ALU). This execution unit works only with integer values and will do the following operations:

  • Add and subtract; multiplication is done by repeated addition, and division is approximated with repeated subtraction
  • Logical operations, such as OR, AND, NOT, and XOR
  • Bit shifting, which moves the digits left or right
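
A toy model of those operations in Python (real ALUs are combinational circuits, and real multipliers don’t literally loop, but the repeated-addition idea looks like this):

    # Toy versions of common ALU operations on integers.
    def alu_and(a, b): return a & b      # logical AND
    def alu_or(a, b):  return a | b      # logical OR
    def alu_xor(a, b): return a ^ b      # logical XOR
    def alu_shl(a, n): return a << n     # shift bits left by n places

    def alu_multiply(a, b):
        # Multiplication as b repeated additions of a, as described above.
        result = 0
        for _ in range(b):
            result = result + a
        return result

    print(alu_multiply(6, 7))       # 42
    print(bin(alu_shl(0b0011, 2)))  # 0b1100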

A lot of processors will also include a floating point unit (FPU). This allows the processor to work on a greater range of numbers, at higher precision, including numbers that aren’t whole. Since FPUs are complex (often enough to be their own processor), they are often excluded from smaller low-power processors.

Registers

Registers are small pieces of memory that hold immediately relevant data. There are usually only a handful of them, and each can hold data equal to the bit size the processor was made for. So a 32-bit processor usually has 32-bit registers.

The most common registers are: one that holds the result of an operation, a program counter (which points to where the next instruction is), and a status word or condition code (which directs the flow of a program). Some architectures have specialized registers to aid in operations. The Intel 8086, for example, has the segment and offset registers, which were used together to compute addresses in the 8086’s segmented memory architecture.

A Note about Bits

The bit count of a processor usually refers to the largest data size it can handle at once, and mostly applies to the execution units. However, this does not mean that a processor is limited to processing data of that size. An eight-bit processor can still process 16-bit and 32-bit numbers, but it takes at least two and four operations, respectively, to do so.
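
For example, here’s how adding two 16-bit numbers might look on an 8-bit machine, sketched in Python: add the low bytes first, then add the high bytes plus the carry, which is why it takes at least two operations:

    # 16-bit addition using only 8-bit operations.
    def add16_on_8bit(x, y):
        lo = (x & 0xFF) + (y & 0xFF)
        carry = lo >> 8                            # 1 if the low-byte add overflowed
        hi = ((x >> 8) & 0xFF) + ((y >> 8) & 0xFF) + carry
        return ((hi & 0xFF) << 8) | (lo & 0xFF)

    print(hex(add16_on_8bit(0x12FF, 0x0001)))  # 0x1300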

The Soft Stuff: Ideas and Designs in Processors

Over the years of computer design, more and more ideas and designs were realized. These were developed with the goal of making the processor more efficient at what it does, increasing its instructions per clock cycle (IPC) count.

Instruction Set Design

Instruction sets map numerical indexes to commands in a processor. These commands can be something as simple as adding two numbers or as complex as the SSE instruction RSQRTPS (as described in a help file: Compute Reciprocals of Square Roots of Packed Single-Precision Floating-Point Values).

In the early days of computers, memory was very slow and there wasn’t a whole lot of it, and processors were becoming faster and programs more complex. To save both on memory access and program size, instruction sets were designed with the following ideas:

  • Variable-length instructions, so that simpler operations could take up less space
  • A wide variety of memory-addressing modes
  • Operations can be performed on memory locations themselves, in addition to using registers, or as part of the instruction

As memory performance progressed, computer scientists found that it was faster to break down the complex operations into simpler ones. Instructions also could be simplified to speed up the decoding process. This sparked the Reduced Instruction Set Computing (RISC) design idea. Reduced in this case means the time to complete an instruction is reduced. The old way was retroactively named Complex Instruction Set Computing (CISC). To summarize the ideas of RISC:

  • Uniform instruction length, to simplify decoding
  • Fewer and simpler memory-addressing modes
  • Operations can only be performed on data in registers or as part of the instruction

There have been other attempts at instruction set design. One of them is the Very Long Instruction Word (VLIW). VLIW crams multiple independent instructions into a single unit to be run on multiple execution units. One of the biggest stumbling blocks is that it requires the compiler to sort instructions ahead of time to make the most of the hardware, and most general-purpose programs don’t sort themselves out very well. VLIW has been used in Intel’s Itanium, Transmeta’s Crusoe, MCST’s Elbrus, AMD’s TeraScale GPUs, and (sort of, since it has similar characteristics) NVIDIA’s Project Denver.

Multitasking

Early on, computers could do only one thing at a time, and once a program got going, it would run until completion or until there was a problem. As systems became more powerful, an idea called “time sharing” was spawned. Time sharing would have the system work on one program and, if something blocked it from continuing, such as waiting for a peripheral to be ready, save the state of the program in memory and move on to another program. Eventually, it would come back to the blocked program and see if it had what it needed to run.

Time sharing exposed a problem: A program could unfairly hog the system, either because the program really had a long execution time or because it hung somewhere. So the next systems were built such that they would work on programs in slices of time. That is, every program gets to run for a certain amount of time and after the time slice is up, it moves on to another program automatically. If the time slices are small enough, this gives the impression that the computer is doing multiple things at once.
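
A toy round-robin scheduler in Python shows the time-slice idea (here each “program” is a generator, and the slice is counted in steps rather than milliseconds; both are simplifying assumptions):

    import collections

    # Each "program" yields once per unit of work it performs.
    def program(name, total_work):
        for step in range(total_work):
            yield f"{name}: step {step}"

    TIME_SLICE = 2
    run_queue = collections.deque([program("A", 3), program("B", 5)])

    while run_queue:
        current = run_queue.popleft()
        for _ in range(TIME_SLICE):          # run until the slice is used up
            try:
                print(next(current))
            except StopIteration:
                break                        # program finished; drop it
        else:
            run_queue.append(current)        # slice expired; go to the back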

One important feature that really helped multitasking is the interrupt system. With this, the processor doesn’t need to constantly poll programs or devices to see if they have something ready; the program or device can generate a signal to tell the processor it’s ready.

Caching

Cache is memory in the processor that, while small in size, is much faster to access than RAM. The idea of caching is that commonly used data and instructions are stored in it and tagged with their address in memory. The MMU will first look in cache to see if what it’s looking for is there. The more often data is accessed, the closer its effective access time gets to cache speed, offering a boost in execution speed.

Normally, data can only reside in one spot in cache. A method to increase the chance of data being in cache is known as associativity. A two-way associative cache means data can be in two places, four-way means it can be in four, and so on. While it may make sense to allow data to just be anywhere in cache, this also increases the lookup time, which may negate the benefit of caching.
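
Here’s a small sketch in Python of a two-way set-associative lookup with LRU replacement. The set count, the address-splitting scheme, and the eviction policy are illustrative assumptions:

    NUM_SETS = 4
    WAYS = 2

    # Each set holds up to WAYS (tag, data) entries, most recently used last.
    cache = [[] for _ in range(NUM_SETS)]

    def access(address):
        set_index = address % NUM_SETS    # which set the address maps to
        tag = address // NUM_SETS         # identifies the line within the set
        entries = cache[set_index]
        for i, (t, data) in enumerate(entries):
            if t == tag:                  # hit: refresh LRU position
                entries.append(entries.pop(i))
                return "hit"
        # Miss: fetch from "memory", evicting the least recently used line.
        if len(entries) == WAYS:
            entries.pop(0)
        entries.append((tag, f"data@{address}"))
        return "miss"

    for addr in [0, 4, 0, 8, 4, 12]:
        print(addr, access(addr))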

Pipelining

Pipelining is a way for a processor to increase its instruction throughput by way of mimicking how assembly lines work. Consider the steps to executing an instruction:

  1. Fetch instruction (IF)
  2. Decode instruction (ID)
  3. Execute instruction (EX)
  4. Access memory (MEM)
  5. Write results back (WB)

Early computers would process each instruction completely through these steps before processing the next instruction, as seen here:

[Image: without pipelining, each instruction passes through all five stages before the next one starts]

In 10 clock cycles, the processor is completely finished with two instructions. Pipelining allows the next instruction to start once the current one is done with a step. The following diagram shows pipelining in action:

[Image: with pipelining, a new instruction enters the pipeline on each clock cycle]

In the same 10 clock cycles, six instructions are fully processed, increasing the throughput threefold.
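
The arithmetic behind those diagrams, as a quick Python check (assuming the classic five-stage pipeline and one instruction entering per cycle):

    # With a k-stage pipeline, the first instruction finishes after k cycles,
    # and each later one finishes a cycle after the previous.
    def instructions_completed(cycles, stages=5):
        return max(0, cycles - stages + 1)

    print(instructions_completed(10))  # 6 with pipelining
    print(10 // 5)                     # 2 without pipelining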

Branch Prediction

The major issue with pipelining is that if a branch is taken, instructions that were being processed in the earlier stages have to be discarded, since they are no longer going to be executed. Let’s take a look at a situation where this happens.

[Image: a taken branch at BNE forces the partially processed SUB, MUL, and DIV instructions to be flushed from the pipeline]

The instruction CMP is a compare instruction, e.g., does x = y? This sets a flag in the processor with the result. The instruction BNE is “branch if not equal,” which checks this flag. If x is not equal to y, then the processor jumps to another location in the program. The following instructions (SUB, MUL, and DIV) have to be discarded because they’re no longer going to be executed. This creates a five-clock-cycle gap before the next instruction gets processed.

The aim of branch prediction is to make a guess at which instructions are going to be executed. There are several algorithms to achieve this, but the overall goal is to minimize the number of times the pipeline has to be cleared because a branch took place.
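
As one example of such an algorithm, here’s a two-bit saturating-counter predictor sketched in Python. The article doesn’t name a specific scheme; this is just a classic one, and the starting state is an arbitrary choice:

    class TwoBitPredictor:
        # States 0-1 predict "not taken"; states 2-3 predict "taken".
        def __init__(self):
            self.state = 2  # start weakly "taken" (arbitrary)

        def predict(self):
            return self.state >= 2

        def update(self, taken):
            # Move toward the observed outcome, saturating at 0 and 3, so a
            # single surprise doesn't flip a strongly held prediction.
            if taken:
                self.state = min(3, self.state + 1)
            else:
                self.state = max(0, self.state - 1)

    p = TwoBitPredictor()
    for outcome in [True, True, False, True, True]:  # e.g. a loop branch
        print("predicted taken?", p.predict(), "actual:", outcome)
        p.update(outcome)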

Out-of-Order Execution

Out-of-order execution is a way for the processor to reorder instructions for efficient execution. Take, for example, a program that does this:

  1. x = 1
  2. y = 2
  3. z = x + 3
  4. foo = z + y
  5. bar = 42
  6. print “hello world!”

Let’s say the execution unit can handle two instructions at once. These instructions are then executed in the following way:

  1. x = 1, y = 2
  2. z = x + 3
  3. foo = z + y
  4. bar = 42, print “hello world!”

Since the value of “foo” depends on “z,” those two instructions can’t execute at the same time. However, by reordering the instructions:

  1. x = 1, y = 2
  2. z = x + 3, bar = 42
  3. foo = z + y, print “hello world!”

Thus an extra cycle can be avoided. However, implementing out-of-order execution is complex, and applications still expect instructions to be completed in their original order. This has historically kept out-of-order execution off processors for mobile and small electronics, because the additional power consumption outweighed the performance benefit, but recent ARM-based mobile processors have incorporated it now that the opposite is true.
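
The reordering above can be sketched as a tiny list scheduler in Python: issue up to two instructions per cycle whose inputs are already computed. The (name, destination, sources) tuple encoding is an assumption made for this toy:

    # Each toy instruction is (name, dest, sources).
    program = [
        ("x = 1",         "x",   []),
        ("y = 2",         "y",   []),
        ("z = x + 3",     "z",   ["x"]),
        ("foo = z + y",   "foo", ["z", "y"]),
        ("bar = 42",      "bar", []),
        ("print 'hello'", None,  []),
    ]

    done = set()        # destinations whose values have been computed
    pending = list(program)
    cycle = 0
    while pending:
        cycle += 1
        # Issue up to two instructions whose sources are already available.
        ready = [ins for ins in pending
                 if all(src in done for src in ins[2])][:2]
        for ins in ready:
            pending.remove(ins)
            if ins[1] is not None:
                done.add(ins[1])
        print(f"cycle {cycle}:", [ins[0] for ins in ready])

Running it reproduces the three-cycle schedule shown above: x and y issue together, then z with bar, then foo with the print.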

A Complex Machine Made Up of Simple Pieces

When looked at from a pure hardware perspective, a processor can seem pretty daunting. In reality, the billions of transistors that modern processors carry can still be broken down into the simple pieces and ideas that form the foundation of how processors work. If reading this article leaves you with more questions than answers, a good place to start learning more is Wikipedia’s index on CPU technologies.
