I remember January 10, 2010, rather well: it was the day we lost a project’s complete history. We were using Subversion as our version control system, which kept the project’s history in a central repository on a server. And we were backing up this server on a regular basis—at least, we thought we were. The server broke down, and then the backup failed. Our project wasn’t completely lost, but all the historic versions were gone.
Shortly after the server broke down, we switched to Git. I had always seen version control as torturous; it was too complex and not useful enough for me to see its value, though I used it as a matter of duty. But once we’d spent some time with the new system, I began to understand just how helpful Git could be. Since then, it has saved my neck in many situations.
During the course of this article, I’ll walk through how Git can help you avoid mistakes—and how to recover if they’ve already happened.
Every teammate is a backup
Since Git is a distributed version control system, every member of our team that has a project cloned (or “checked out,” if you’re coming from Subversion) automatically has a backup on his or her disk. This backup contains the latest version of the project, as well as its complete history.
This means that should a developer’s local machine or even our central server ever break down again (and the backup not work for any reason), we’re up and running again in minutes: any local repository from a teammate’s disk is all we need to get a fully functional replacement.
Branches keep separate things separate
When my more technical colleagues told me about how “cool” branching in Git was, I wasn’t bursting with joy right away. First, I have to admit that I didn’t really understand the advantages of branching. And second, coming from Subversion, I vividly remembered it being a complex and error-prone procedure. With those bad memories, I was anxious about working with branches and tried to avoid them whenever I could.
It took me quite a while to understand that branching and merging work completely differently in Git than in most other systems—especially regarding its ease of use! So if you learned the concept of branches from another version control system (like Subversion), I recommend you forget your prior knowledge and start fresh. Let’s start by understanding why branches are so important in the first place.
Why branches are essential
Back in the days when I didn’t use branches, working on a new feature was a mess. Essentially, I had the choice between two equally bad workflows:
(a) I already knew that creating small, granular commits with only a few changes was a good version control habit. However, if I did this while developing a new feature, every commit would mingle my half-done feature with the main code base until I was done. It wasn’t very pleasant for my teammates to have my unfinished feature introduce bugs into the project.
(b) To avoid getting my work-in-progress mixed up with other topics (from colleagues or myself), I’d work on a feature in my separate space. I would create a copy of the project folder that I could work with quietly—and only commit my feature once it was complete. But committing my changes only at the end produced a single, giant, bloated commit that contained all the changes. Neither my teammates nor I could understand what exactly had happened in this commit when looking at it later.
I slowly understood that I had to make myself familiar with branches if I wanted to improve my coding.
Working in contexts
Any project has multiple contexts where work happens; each feature, bug fix, experiment, or alternative of your product is actually a context of its own. It can be seen as its own “topic,” clearly separated from other topics.
If you don’t separate these topics from each other with branching, you will inevitably increase the risk of problems that come from mixing different topics in the same context.
Using branches gave me the confidence that I couldn’t mess up. In case things went wrong, I could always go back, undo, start fresh, or switch contexts.
Branching basics
Branching in Git actually only involves a handful of commands. Let’s look at a basic workflow to get you started.
To create a new branch based on your current state, all you have to do is pick a name and execute a single command on your command line. We’ll assume we want to start working on a new version of our contact form, and therefore create a new branch called “contact-form”:
$ git branch contact-form
Using the git branch command without a name specified will list all of the branches we currently have (and the “-v” flag provides us with a little more data than usual):
$ git branch -v
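The output might look something like this (the hash and commit subject here are just placeholders—since “contact-form” was branched off “master,” both point at the same commit):
  contact-form 3de33cc Last commit message here
* master       3de33cc Last commit message here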
You might notice the little asterisk on the branch named “master.” This means it’s the currently active branch. So, before we start working on our contact form, we need to make this our active context:
$ git checkout contact-form
Git has now made this branch our current working context. (In Git lingo, this is called the “HEAD branch”). All the changes and every commit that we make from now on will only affect this single context—other contexts will remain untouched. If we want to switch the context to a different branch, we’ll simply use the git checkout command again.
In case we want to integrate changes from one branch into another, we can “merge” them into the current working context. Imagine we’ve worked on our “contact-form” feature for a while, and now want to integrate these changes into our “master” branch. All we have to do is switch back to this branch and call git merge:
$ git checkout master
$ git merge contact-form
Using branches
I would strongly suggest that you use branches extensively in your day-to-day workflow. Branches are one of the core concepts that Git was built around. They are extremely cheap and easy to create, and simple to manage—and there are plenty of resources out there if you’re ready to learn more about using them.
Undoing things
There’s one thing that I’ve learned as a programmer over the years: mistakes happen, no matter how experienced people are. You can’t avoid them, but you can have tools at hand that help you recover from them.
One of Git’s greatest features is that you can undo almost anything. This gives me the confidence to try out things without fear—because, so far, I haven’t managed to really break something beyond recovery.
Amending the last commit
Even if you craft your commits very carefully, it’s all too easy to forget to add a change or to mistype the message. With the --amend flag of the git commit command, Git allows you to change the very last commit, and it’s a very simple fix to execute. For example, if you forgot to add a certain change and also made a typo in the commit subject, you can easily correct this:
$ git add some/changed/files
$ git commit --amend -m "The message, this time without typos"
There’s only one thing you should keep in mind: you should never amend a commit that has already been pushed to a remote repository. Respecting this rule, the “amend” option is a great little helper to fix the last commit.
(For more detail about the amend option, I recommend Nick Quaranto’s excellent walkthrough.)
Undoing local changes
Changes that haven’t been committed are called “local.” All the modifications that are currently present in your working directory are “local” uncommitted changes.
Discarding these changes can make sense when your current work is… well… worse than what you had before. With Git, you can easily undo local changes and start over with the last committed version of your project.
If it’s only a single file that you want to restore, you can use the git checkout command:
$ git checkout -- file/to/restore
Don’t confuse this use of the checkout command with switching branches (see above). If you use it with two dashes and (separated with a space!) the path to a file, it will discard the uncommitted changes in a given file.
On a bad day, however, you might even want to discard all your local changes and restore the complete project:
$ git reset --hard HEAD
This will replace all of the files in your working directory with the last committed revision. Just as with using the checkout command above, this will discard the local changes.
Be careful with these operations: since local changes haven’t been checked into the repository, there is no way to get them back once they are discarded!
Undoing committed changes
Of course, undoing things is not limited to local changes. You can also undo certain commits when necessary—for example, if you’ve introduced a bug.
Basically, there are two main commands to undo a commit:
(a) git reset
The git reset command really turns back time. You tell it which version you want to return to and it restores exactly this state—undoing all the changes that happened after this point in time. Just provide it with the hash ID of the commit you want to return to:
$ git reset --hard 2be18d9
The --hard option is the easiest and cleanest approach, but it also wipes away all local changes that you might still have in your working directory. So, before doing this, make sure there aren’t any local changes you’ve set your heart on.
(b) git revert
The git revert command is used in a different scenario. Imagine you have a commit that you don’t want anymore—but the commits that came afterwards still make sense to you. In that case, you wouldn’t use the git reset command because it would undo all those later commits, too!
The revert command, however, only reverts the effects of a certain commit. It doesn’t remove any commits, like git reset does. Instead, it even creates a new commit; this new commit introduces changes that are just the opposite of the commit to be reverted. For example, if you deleted a certain line of code, revert will create a new commit that introduces exactly this line, again.
To use it, simply provide it with the hash ID of the commit you want reverted:
$ git revert 2be18d9
Finding bugs
When it comes to finding bugs, I must admit that I’ve wasted quite some time stumbling in the dark. I often knew that it used to work a couple of days ago—but I had no idea where exactly things went wrong. It was only when I found out about git bisect that I could speed up this process a bit. With the bisect command, Git provides a tool that helps you find the commit that introduced a problem.
Imagine the following situation: we know that our current version (tagged “2.0”) is broken. We also know that a couple of commits ago (our version “1.9”), everything was fine. The problem must have occurred somewhere in between.
This is already enough information to start our bug hunt with git bisect:
$ git bisect start
$ git bisect bad
$ git bisect good v1.9
After starting the process, we told Git that our current commit contains the bug and therefore is “bad.” We then also informed Git which previous commit is definitely working (as a parameter to git bisect good).
Git then restores our project in the middle between the known good and known bad conditions.
We now test this version (for example, by running unit tests, building the app, deploying it to a test system, etc.) to find out if this state works—or already contains the bug. As soon as we know, we tell Git again—either with git bisect bad or git bisect good.
Let’s assume we said that this commit was still “bad.” This effectively means that the bug must have been introduced even earlier—and Git will again narrow down the commits in question.
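To sketch how such a session might wind down (the hash and step counts here are purely illustrative), you keep answering until Git names the first bad commit:
$ git bisect good
Bisecting: 1 revision left to test after this (roughly 1 step)
$ git bisect bad
a1b2c3d is the first bad commit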
This way, you’ll find out very quickly where exactly the problem occurred. Once you know this, you need to call git bisect reset to finish your bug hunt and restore the project’s original state.
A tool that can save your neck
I must confess that my first encounter with Git wasn’t love at first sight. In the beginning, it felt just like my other experiences with version control: tedious and unhelpful. But with time, the practice became intuitive, and gained my trust and confidence.
After all, mistakes happen, no matter how much experience we have or how hard we try to avoid them. What separates the pro from the beginner is preparation: having a system in place that you can trust in case of problems. It helps you stay on top of things, especially in complex projects. And, ultimately, it helps you become a better professional.
Growing up, I learned there were two kinds of reviews I could seek out from my parents. One parent gave reviews in the form of a shower of praise. The other parent, the one with a degree from the Royal College of Art, would put me through a design crit. Today the reviews I seek are for my code, not my horse drawings, but it continues to be a process I both dread and crave.
In this article, I’ll describe my battle-tested process for conducting code reviews, highlighting the questions you should ask during the review process as well as the necessary version control commands to download and review someone’s work. I’ll assume your team uses Git to store its code, but the process works much the same if you’re using any other source control system.
Completing a peer review is time-consuming. In the last project where I introduced mandatory peer reviews, the senior developer and I estimated that it doubled the time to complete each ticket. The reviews introduced more context-switching for the developers, and were a source of increased frustration when it came to keeping the branches up to date while waiting for a code review.
The benefits, however, were huge. Coders gained a greater understanding of the whole project through their reviews, reducing silos and making onboarding easier for new people. Senior developers had better opportunities to ask why decisions were being made in the codebase that could potentially affect future work. And by adopting an ongoing peer review process, we reduced the amount of time needed for human quality assurance testing at the end of each sprint.
Let’s walk through the process. Our first step is to figure out exactly what we’re looking for.
Determine the purpose of the proposed change
Our code review should always begin in a ticketing system, such as Jira or GitHub. It doesn’t matter if the proposed change is a new feature, a bug fix, a security fix, or a typo: every change should start with a description of why the change is necessary, and what the desired outcome will be once the change has been applied. This allows us to accurately assess when the proposed change is complete.
The ticketing system is where you’ll track the discussion about the changes that need to be made after reviewing the proposed work. From the ticketing system, you’ll determine which branch contains the proposed code. Let’s pretend the ticket we’re reviewing today is 61524—it was created to fix a broken link in our website. It could just as easily be a refactoring, or a new feature, but I’ve chosen a bug fix for the example. No matter what the nature of the proposed change is, having each ticket correspond to only one branch in the repository will make it easier to review, and close, tickets.
Set up your local environment and ensure that you can reproduce what is currently the live site—complete with the broken link that needs fixing. When you apply the new code locally, you want to catch any regressions or problems it might introduce. You can only do this if you know, for sure, the difference between what is old and what is new.
Review the proposed changes
At this point you’re ready to dive into the code. I’m going to assume you’re working with Git repositories, on a branch-per-issue setup, and that the proposed change is part of a remote team repository. Working directly from the command line is a good universal approach, and allows me to create copy-paste instructions for teams regardless of platform.
To begin, update your local list of branches.
git fetch
Then list all available branches.
git branch -a
A list of branches will be displayed to your terminal window. It may appear something like this:
* master
  remotes/origin/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/61524-broken-link
The * denotes the name of the branch you are currently viewing (or have “checked out”). Lines beginning with remotes/origin are references to branches we’ve downloaded. We are going to work with a new, local copy of branch 61524-broken-link.
When you clone your project, you’ll have a connection to the remote repository as a whole, but you won’t have a read-write relationship with each of the individual branches in the remote repository. You’ll make an explicit connection as you switch to the branch. This means if you need to run the command git push to upload your changes, Git will know which remote repository you want to publish your changes to.
git checkout --track origin/61524-broken-link
Ta-da! You now have your own copy of the branch for ticket 61524, which is connected (“tracked”) to the origin copy in the remote repository. You can now begin your review!
First, let’s take a look at the commit history for this branch with the log command.
git log master..
Sample output:
Author: emmajane
Date: Mon Jun 30 17:23:09 2014 -0400

    Link to resources page was incorrectly spelled. Fixed.

    Resolves #61524.
This gives you the full log message of all the commits that are in the branch 61524-broken-link, but are not also in the master branch. Skim through the messages to get a sense of what’s happening.
Next, take a brief gander through the commit itself using the diff command. This command shows the difference between two snapshots in your repository. You want to compare the code on your checked-out branch to the branch you’ll be merging “to”—which conventionally is the master branch.
git diff master
How to read patch files
When you run the command to output the difference, the information will be presented as a patch file. Patch files are ugly to read. You’re looking for lines beginning with + or -. These are lines that have been added or removed, respectively. Scroll through the changes using the up and down arrows, and press q to quit when you’ve finished reviewing. If you need an even more concise comparison of what’s happened in the patch, consider modifying the diff command to list the changed files, and then look at the changed files one at a time:
git diff master --name-only
git diff master <filename>
Let’s take a look at the format of a patch file.
diff --git a/about.html b/about.html
index a3aa100..a660181 100644
--- a/about.html
+++ b/about.html
@@ -48,5 +48,5 @@
 (2004-05)
- A full list of <a href="emmajane.net/events">public
+ A full list of <a href="http://emmajane.net/events">public
 presentations and workshops</a> Emma has given is available
I tend to skim past the metadata when reading patches and just focus on the lines that start with - or +. This means I start reading at the line immediately following @@. There are a few lines of context provided leading up to the changes. These lines are indented by one space each. The changed lines of code are then displayed with a preceding - (line removed), or + (line added).
Going beyond the command line
Using a Git repository browser, such as gitk, allows you to get a slightly better visual summary of the information we’ve looked at to date. The version of Git that Apple ships with does not include gitk—I used Homebrew to re-install Git and get this utility. Any repository browser will suffice, though, and there are many GUI clients available on the Git website.
gitk
When you run the command gitk, a graphical tool will launch from the command line. An example of the output is given in the following screenshot. Click on each of the commits to get more information about it. Many ticket systems will also allow you to look at the changes in a merge proposal side-by-side, so if you’re finding this cumbersome, click around in your ticketing system to find the comparison tools they might have—I know for sure GitHub offers this feature.
Now that you’ve had a good look at the code, jot down your answers to the following questions:
Now is the time to start up your testing environment and view the proposed change in context. How does it look? Does your solution match what the coder thinks they’ve built? If it doesn’t look right, do you need to clear the cache, or perhaps rebuild the Sass output to update the CSS for the project?
Now is the time to also test the code against whatever test suite you use.
Depending on the context for this particular code change, there may be other obvious questions you need to address as part of your code review.
Do your best to create the most comprehensive list of everything you can find wrong (and right) with the code. It’s annoying to get dribbles of feedback from someone as part of the review process, so we’ll try to avoid “just one more thing” wherever we can.
Prepare your feedback
Let’s assume you’ve now got a big juicy list of feedback. Maybe you have no feedback, but I doubt it. If you’ve made it this far in the article, it’s because you love to comb through code as much as I do. Let your freak flag fly and let’s get your review structured in a usable manner for your teammates.
For all the notes you’ve assembled to date, sort them into the following categories:
Based on this new categorization, you are ready to engage in passive-aggressive coding. If the problem is clearly a typo and falls into one of the first two categories, go ahead and fix it. Obvious typos don’t really need to go back to the original author, do they? Sure, your teammate will be a little embarrassed, but they’ll appreciate you having saved them a bit of time, and you’ll increase the efficiency of the team by reducing the number of round trips the code needs to take between the developer and the reviewer.
If the change you are itching to make falls into the third category: stop. Do not touch the code. Instead, go back to your colleague and get them to describe their approach. Asking “why” might lead to a really interesting conversation about the merits of the approach taken. It may also reveal limitations of the approach to the original developer. By starting the conversation, you open yourself to the possibility that just maybe your way of doing things isn’t the only viable solution.
If you needed to make any changes to the code, they should be absolutely tiny and minor. You should not be making substantive edits in a peer review process. Make the tiny edits, and then add the changes to your local repository as follows:
git add .
git commit -m "[#61524] Correcting <list problem> identified in peer review."
You can keep the message brief, as your changes should be minor. At this point you should push the reviewed code back up to the server for the original developer to double-check and review. Assuming you’ve set up the branch as a tracking branch, it should just be a matter of running the command as follows:
git push
Update the issue in your ticketing system as is appropriate for your review. Perhaps the code needs more work, or perhaps it was good as written and it is now time to close the issue.
Repeat the steps in this section until the proposed change is complete, and ready to be merged into the main branch.
Merge the approved change into the trunk
Up to this point you’ve been comparing a ticket branch to the master branch in the repository. This main branch is referred to as the “trunk” of your project. (It’s a tree thing, not an elephant thing.) The final step in the review process will be to merge the ticket branch into the trunk, and clean up the corresponding ticket branches.
Begin by updating your master branch to ensure you can publish your changes after the merge.
git checkout master
git pull origin master
Take a deep breath, and merge your ticket branch back into the main repository. As written, the following command will not create a new commit in your repository history. The commits will simply shuffle into line on the master branch, making git log --graph appear as though a separate branch has never existed. If you would like to maintain the illusion of a past branch, simply add the parameter --no-ff to the merge command, which will make it clear, via the graph history and a new commit message, that you have merged a branch at this point. Check with your team to see what’s preferred.
git merge 61524-broken-link
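If your team prefers an explicit merge commit instead, the no-fast-forward variant would look like this:
git merge --no-ff 61524-broken-link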
The merge will either fail, or it will succeed. If there are no merge errors, you are ready to share the revised master branch by uploading it to the central repository.
git push
If there are merge errors, the original coders are often better equipped to figure out how to fix them, so you may need to ask them to resolve the conflicts for you.
Once the new commits have been successfully integrated into the master branch, you can delete the old copies of the ticket branches both from your local repository and on the central repository. It’s just basic housekeeping at this point.
git branch -d 61524-broken-link
git push origin --delete 61524-broken-link
Conclusion
This is the process that has worked for the teams I’ve been a part of. Without a peer review process, it can be difficult to address problems in a codebase without blame. With it, the code becomes much more collaborative; when a mistake gets in, it’s because we both missed it. And when a mistake is found before it’s committed, we both breathe a sigh of relief that it was found when it was.
Regardless of whether you’re using Git or another source control system, the peer review process can help your team. Peer-reviewed code might take more time to develop, but it contains fewer mistakes, and has a strong, more diverse team supporting it. And, yes, I’ve been known to learn the habits of my reviewers and choose the most appropriate review style for my work, just like I did as a kid.
mashable.com | by James O'Brien
The days of paper textbooks seem destined to become as distant a memory as cursive handwriting. In 2014, big data is reshaping the way students receive curriculum and learn, and the tools of the new and digital classroom are changing the dynamics for educators.
"Data is changing the way people think," says Eileen Murphy Buckley, founder and CEO of ThinkCERCA, a company in the data-driven education space. "From critical accountability to teacher accountability to the way we arrange time, our learning spaces, technologies — data is disrupting everything."
The inclusion of the computer in K–12 classes is nothing new; they've been on desks since the days of Texas Instruments. In more recent times, however, pupils aren't turning to their screens to learn a little BASIC or play a round of Oregon Trail — they're increasingly experiencing data-driven teaching as a fully integrated part of a post-textbook, personalized academic process.
If you think about the impact of technology on our lives today, algorithms are analyzing our behavior — both on and offline — all the time. They shape what we do in the moment, and they often steer us toward what we do next.
Image courtesy of Wikimedia Commons / Camelia.boban
Freelancers and self-employed business owners can choose from a huge number of conferences to attend in any given year. There are hundreds of industry podcasts, a constant stream of published books, and a never-ending supply of sites all giving advice. It is very easy to spend a lot of valuable time and money just attending, watching, reading, listening and hoping that somehow all of this good advice will take root and make our business a success.
However, all the good advice in the world won’t help you if you don’t act on it. While you might leave that expensive conference feeling great, did your attendance create a lasting change to your business? I was thinking about this subject while listening to episode 14 of the Working Out podcast, hosted by Ashley Baxter and Paddy Donnelly. They were talking about following through, and how it is possible to “nod along” to good advice but never do anything with it.
If you have ever been sent to a conference by an employer, you may have been expected to report back. You might even have been asked to present to your team on the takeaway points from the event. As freelancers and business owners, we don’t have anyone making us consolidate our thoughts in that way. It turns out that the way I work gives me a fairly good method of knowing which things are bringing me value.
Tracking actionable advice
I’m a fan of the Getting Things Done technique, and live by my to-do lists. I maintain a Someday/Maybe list in OmniFocus into which I add items that I want to do or at least investigate, but that aren’t a project yet.
If a podcast is worth keeping on my playlist, there will be items entered linking back to certain episodes. Conference takeaways might be a link to a site with information that I want to read. It might be an idea for an article to write, or instructions on something very practical such as setting up an analytics dashboard to better understand some data. The first indicator of a valuable conference is how many items I add during or just after the event.
Having a big list of things to do is all well and good, but it’s only one half of the story. The real value comes when I do the things on that list, and can see whether they were useful to my business. Once again, my GTD lists can be mined for that information.
When tickets go on sale for that conference again, do I have most of those to-do items still sitting in Someday/Maybe? Is that because, while they sounded like good ideas, they weren’t all that relevant? Or, have I written a number of blog posts or had several articles published on themes that I started considering off the back of that conference? Did I create that dashboard, and find it useful every day? Did that speaker I was introduced to go on to become a friend or mentor, or someone I’ve exchanged emails with to clarify a topic I’ve been thinking about?
By looking back over my lists and completed items, I can start to make decisions about the real value to my business and life of the things I attend, read, and listen to. I’m able to justify the ticket price, time, and travel costs by making that assessment. I can feel confident that I’m not spending time and money just to feel as if I’m moving forward, yet gaining nothing tangible to show for it.
A final thought on value
As entrepreneurs, we have to make sure we are spending our time and money on things that will give us the best return. All that said, it is important to make time in our schedules for those things that we just enjoy, and in particular those things that do motivate and inspire us. I don’t think that every book you read or event you attend needs to result in a to-do list of actionable items.
What we need as business owners, and as people, is balance. We need to be able to see that the things we are doing are moving our businesses forward, while also making time to be inspired and refreshed to get that actionable work done.
The web doesn’t do “age” especially well. Any blog post or design article more than a few years old gets a raised eyebrow—heck, most people I meet haven’t read John Allsopp’s “A Dao of Web Design” or Jeffrey Zeldman’s “To Hell With Bad Browsers,” both as relevant to the web today as when they were first written. Meanwhile, I’ve got books on my shelves older than I am; most of my favorite films came out before I was born; and my iTunes library is riddled with music that’s decades, if not centuries, old.
(No, I don’t get invited to many parties. Why do you ask oh I get it)
So! It’s probably easy to look at “Pocket-Sized Design,” a lovely article by Jorunn Newth and Elika Etemad that just turned 10 years old, and immediately notice where it’s beginning to show its age. Written at a time when few sites were standards-compliant, and even fewer still were mobile-friendly, Newth and Etemad were urging us to think about life beyond the desktop. And when I first re-read it, it was easy to chuckle at the points that feel like they’re from another age: there’s plenty of talk of screens that are “only 120-pixels wide”; of inputs driven by stylus, rather than touch; and of using the now-basically-defunct handheld media type for your CSS. Seems a bit quaint, right?
Looking past a few of the details, it’s remarkable how well the article’s aged. Modern users may (or may not) manually “turn off in-line image loading,” but they may choose to use a mobile browser that dramatically compresses your images. We may scoff at the idea of someone browsing with a stylus, but handheld video game consoles are impossibly popular when it comes to browsing the web. And while there’s plenty of excitement in our industry for the latest versions of iOS and Android, running on the latest hardware, most of the web’s growth is happening on cheaper hardware, over slower networks (PDF), and via slim data plans—so yes, 10 years on, it’s still true that “downloading to the device is likely to be [expensive], the processors are slow, and the memory is limited.”
In the face of all of that, what I love about Newth and Etemad’s article is just how sensible their solutions are. Rather than suggesting slimmed-down mobile sites, or investing in some device detection library, they take a decidedly standards-focused approach:
Linearizing the page into one column works best when the underlying document structure has been designed for it. Structuring the document according to this logic ensures that the page organization makes sense not only in Opera for handhelds, but also in non-CSS browsers on both small devices and the desktop, in voice browsers, and in terminal-window browsers like Lynx.
In other words, by thinking about the needs of the small screen first, you can layer on more complexity from there. And if you’re hearing shades of mobile first and progressive enhancement here, you’d be right: they’re treating their markup—their content—as a foundation, and gently layering styles atop it to make it accessible to more devices, more places than ever before.
So, no: we aren’t using @media handheld or display: none for our small screen-friendly styles—but I don’t think that’s really the point of Newth and Etemad’s essay. Instead, they’re putting forward a process, a framework for designing beyond the desktop. What they’re arguing is for a truly device-agnostic approach to designing for the web, one that’s as relevant today as it was a decade ago.
Plus ça change, plus c’est la même chose.
http://innovationinsights.wired.com | by Dror Ben-Naim, Smart Sparrow
One of the great ironies of online learning is that a tool created to foster personalized learning is actually quite impersonal, in practice. It doesn’t have to be that way.
MOOCs (Massive Open Online Courses) are based on a simple premise: deliver free content from the world’s greatest professors to the masses, and a global community of students could take the same courses as students attending elite colleges and universities. The hope was that broad-based access to higher education would enable unprecedented numbers of learners to fulfill the democratic promise of higher education, social mobility and professional attainment.
It is now clear that the hype surrounding MOOCs has outpaced the model’s ability to deliver on the promise of a revolution in higher education. Initial data demonstrates that MOOCs have lived up to their name in terms of generating massive enrollments; however, completion rates — even for introductory lecture courses — hover in the low single digits.
These findings should not be surprising. MOOCs combine a set of existing tools that can be useful instructional supports, such as online lectures, social networks, and quizzes. But few professors would consider these technologies, together, as a substitute for the course experience.
Last month, Columbia Teachers College released a MOOC progress report, which took a close look at implementation challenges and barriers to success. The report stated that “while the potential for MOOCs to contribute significantly to the development of personalized and adaptive learning is high, the reality is far from being achieved.” To get there, “a great deal of coordination and collaboration among content experts, instructors, researchers, instructional designers, and programmers will be necessary.”
Image courtesy of Wikimedia Commons / Cafefrog
“Why don’t we just use this plugin?” That’s a question I started hearing a lot in the heady days of the 2000s, when open-source CMSes were becoming really popular. We asked it optimistically, full of hope about the myriad solutions only a download away. As the years passed, we gained trustworthy libraries and powerful communities, but the graveyard of crufty code and abandoned services grew deep. Many solutions were easy to install, but difficult to debug. Some providers were eager to sell, but loath to support.
Years later, we’re still asking that same question—only now we’re less optimistic and even more dependent, and I’m scared to engage with anyone smart enough to build something I can’t. The emerging challenge for today’s dev shop is knowing how to take control of third-party relationships—and when to avoid them. I’ll show you my approach, which is to ask a different set of questions entirely.
A web of third parties
I should start with a broad definition of what it is to be third party: If it’s a person and I don’t compensate them for the bulk of their workload, they’re third party. If it’s a company or service and I don’t control it, it’s third party. If it’s code and my team doesn’t grasp every line of it, it’s third party.
The third-party landscape is rapidly expanding. Github has grown to almost 7 million users and the WordPress plugin repo is approaching 1 billion downloads. Many of these solutions are easy for clients and competitors to implement; meanwhile, I’m still in the lab debugging my custom code. The idea of selling original work seems oddly…old-fashioned.
Yet with so many third-party options to choose from, there are more chances than ever to veer off-course.
What could go wrong?
At a meeting a couple of years ago, I argued against using an external service to power a search widget on a client project. “We should do things ourselves,” I said. Not long after this, on the very same project, I argued in favor of using a third party to consolidate RSS feeds into a single document. “Why do all this work ourselves,” I said, “when this problem has already been solved?” My inconsistency was obvious to everyone. Being dogmatic about not using a third party is no better than flippantly jumping in with one, and I had managed to do both at once!
But in one case, I believed the third party was worth the risk. In the other, it wasn’t. I just didn’t know how to communicate those thoughts to my team.
I needed, in the parlance of our times, a decision-making framework. To that end, I’ve been maintaining a collection of points to think through at various stages of engagement with third parties. I’ll tour through these ideas using the search widget and the RSS digest as examples.
The difference between a request and a goal
This point often reveals false assumptions about what a client or stakeholder wants. In the case of the search widget, we began researching a service that our client specifically requested. Fitted with ajax navigation, full-text searching, and automated crawls to index content, it seemed like a lot to live up to. But when we asked our clients what exactly they were trying to do, we were surprised: they were entirely taken by the typeahead functionality; the other features were of very little perceived value.
In the case of the RSS “smusher,” we already had an in-house tool that took an array of feed URLs and looped through them in order, outputting x posts per feed in some bespoke format. Were they too good for our beloved multi-feed widget? But actually, the client had a distinctly different and worthwhile vision: they wanted x results from their array of sites in total, and they wanted them ordered by publication date, not grouped by site. I conceded.
It might seem like an obvious first step, but I have seen projects set off in the wrong direction because the end goal is unknown. In both our examples now, we’re clear about that and we’re ready to evaluate solutions.
To dev or to download
Before deciding to use a third party, I find that I first need to examine my own organization, often in four particular ways: strengths, weaknesses, betterment, and mission.
Strengths and weaknesses
The search task aligned well with our strengths because we had good front-end developers and were skilled at extending our CMS. So when asked to make a typeahead search, we felt comfortable betting on ourselves. Had we done it before? Not exactly, but we could think through it.
At that same time, backend infrastructure was a weakness for our team. We happened to have a lot of turnover among our sysadmins, and at times it felt like we weren’t equipped to hire that sort of talent. As I was thinking through how we might build a feed-smusher of our own, I felt like I was tempting a weak underbelly. Maybe we’d have to set up a cron job to poll the desired URLs, grab feed content, and store that on our servers. Not rocket science, but cron tasks in particular were an albatross for us.
Betterment of the team
When we set out to achieve a goal for a client, it’s more than us doing work: it’s an opportunity for our team to better themselves by learning new skills. The best opportunities for this are the ones that present challenging but attainable tasks, which create incremental rewards. Some researchers cite this effect as a factor in gaming addiction. I’ve felt this myself when learning new things on a project, and those are some of my favorite work moments ever. Teams appreciate this and there is an organizational cost in missing a chance to pay them to learn. The typeahead search project looked like it could be a perfect opportunity to boost our skill level.
Organizational mission
If a new project aligns well with our mission, we’re going to resell it many times. It’s likely that we’ll want our in-house dev team to iterate on it, tailoring it to our needs. Indeed, we’ll have the budget to do so if we’re selling it a lot. No one had asked us for a feed-smusher before, so it didn’t seem reasonable to dedicate an R&D budget to it. In contrast, several other clients were interested in more powerful site search, so it looked like it would be time well spent.
We’ve now clarified our end goals and we’ve looked at how these projects align with our team. Based on that, we’re doing the search widget ourselves, and we’re outsourcing the feed-smusher. Now let’s look more closely at what happens next for both cases.
Evaluating the unknown
The frustrating thing about working with third parties is that the most important decisions take place when we have the least information. But there are some things we can determine before committing. Familiarity, vitality, extensibility, branding, and Service Level Agreements (SLAs) are all observable from afar.
Familiarity: is there a provider we already work with?
Although we’re going to increase the number of third-party dependencies, we’ll try to avoid increasing the number of third-party relationships.
Working with a known vendor has several potential benefits: they may give us volume pricing. Markup and style are likely to be consistent between solutions. And we just know them better than we’d know a new service.
Vitality: will this service stick around?
The worst thing we could do is get behind a service, only to have it shut down next month. A service with high vitality will likely (and rightfully) brag about enterprise clients by name. If it’s open source, it will have a passionate community of contributors. On the other hand, it could be advertising a shutdown. More often, it’s somewhere in the middle. Noting how often the service is updated is a good starting point in determining vitality.
Extensibility: can this service adapt as our needs change?
Not only do we have to evaluate the core service, we have to see how extensible it is by digging into its API. If a service is extensible, it’s more likely to fit for the long haul.
APIs can also present new opportunities. For example, imagine selecting an email-marketing provider with an API that exposes campaign data. This might allow us to build a dashboard for campaign performance in our CMS—a unique value-add for our clients, and a chance to keep our in-house developers invested and excited about the service.
Branding: is theirs strong, or can you use your own?
White-labeling is the practice of reselling a service with your branding instead of that of the original provider. For some companies, this might make good sense for marketing. I tend to dislike white-labeling. Our clients trust us to make choices, and we should be proud to display what those choices are. Either way, you want to ensure you’re comfortable with the brand you’ll be using.
SLAs: what are you getting, beyond uptime?
For client-side products, browser support is a factor: every external dependency represents another layer that could abandon older browsers before we’re ready. There’s also accessibility. Does this new third-party support users with accessibility needs to the degree that we require? Perhaps most important of all is support. Can we purchase a priority support plan that offers fast and in-depth help?
In the case of our feed-smusher service, there was no solution that ran the table. The most popular solution actually had a shutdown notice! There were a couple of smaller providers available, but we hadn’t worked with either before. Browser support and accessibility were moot since we’d be parsing the data and displaying it ourselves. The uptime concern was also diminished because we’d be sure to cache the results locally. Anyway, with viable candidates in hand, we can move on to more productive concerns than dithering between two similar solutions.
Relationship maintenance
If someone else is going to do the heavy lifting, I want to assume as much of the remaining burden as possible. Piloting, data collection, documentation, and in-house support are all valuable opportunities to buttress this new relationship.
As exciting as this new relationship is, we don’t want to go dashing out of the gates just yet. Instead, we’ll target clients for piloting and quarantine them before unleashing it any further. Cull suggestions from team members to determine good candidates for piloting, garnering a mix of edge-cases and the norm.
If the third party happens to collect data of any kind, we should also have an automated way to import a copy of it—not just as a backup, but also as a cached version we can serve to minimize latency. If we are serving a popular dependency from a CDN, we want to send a local version if that call should fail.
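As a minimal sketch of that last point (the CDN URL, local path, and global variable here are all hypothetical), a page can fall back to a locally hosted copy when the CDN request fails:
<script src="https://cdn.example.com/js/library.min.js"></script>
<script>
  // If the CDN script failed to load, its global object won't exist;
  // fall back to the copy we host ourselves.
  window.libraryGlobal || document.write('<script src="/js/vendor/library.min.js"><\/script>');
</script>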
If our team doesn’t have a well-traveled directory of provider relationships, the backstory can get lost. Let a few months pass, throw in some personnel turnover, and we might forget why we even use a service, or why we opted for a particular package. Everyone on our team should know where and how to learn about our third-party relationships.
We don’t need every team member to be an expert on the service, yet we don’t want to wait for a third-party support staff to respond to simple questions. Therefore, we should elect an in-house subject-matter expert. It doesn’t have to be a developer. We just need somebody tasked with monitoring the service at regular intervals for API changes, shutdown notices, or new features. They should be able to train new employees and route more complex support requests to the third party.
In our RSS feed example, we knew we’d read their output into our database. We documented this relationship in our team’s most active bulletin, our CRM software. And we made managing external dependencies a primary part of one team member’s job.
DIY: a third party waiting to happen?
Stop me if you’ve heard this one before: a prideful developer assures the team that they can do something themselves. It’s a complex project. They make something and the company comes to rely on it. Time goes by and the in-house product is doing fine, though there is a maintenance burden. Eventually, the developer leaves the company. Their old product needs maintenance, no one knows what to do, and since it’s totally custom, there is no such thing as a community for it.
Once you decide to build something in-house, how can you prevent that work from devolving into a resented, alien dependency?
All of these considerations apply to our earlier example, the typeahead search widget. Most germane is the provision to “beware the big.” When I say “big,” I mean that relative to what usually works for a given team. In this case, it was a deliverable that felt very familiar in size and scope: we were being asked to extend an open-source CMS. If instead we had been asked to make a CMS, alarms would have gone off.
Look before you leap, and after you land
It’s not that third parties are bad per se. It’s just that the modern web team strikes me as a strange place: not only do we stand on the shoulders of giants, we do so without getting to know them first—and we hoist our organizations and clients up there, too.
Granted, there are many things you shouldn’t do yourself, and it’s possible to hurt your company by trying to do them—NIH (Not Invented Here) is a problem, not a goal. But when teams err too far in the other direction, developers become disenfranchised, components start to look like spare parts, and clients pay for solutions that aren’t quite right. Using a third party versus staying in-house is a big decision, and we need to think hard before we make it. Use my line of questions, or come up with one that fits your team better. After all, you’re your own best dependency.
We all want our websites to be fast. We optimize images, create CSS sprites, use CDNs, cache aggressively, and gzip and minify static content. We use every trick in the book.
But we can still do more. If we want faster outcomes, we have to think differently. What if, instead of leaving our users to stare at a spinning wheel, waiting for content to be delivered, we could predict where they wanted to go next? What if we could have that content ready for them before they even ask for it?
We tend to see the web as a reactive model, where every action causes a reaction. Users click, then we take them to a new page. They click again, and we open another page. But we can do better. We can be proactive with prebrowsing.
The three big techniques
Steve Souders coined the term prebrowsing (from predictive browsing) in one of his articles late last year. Prebrowsing is all about anticipating where users want to go and preparing the content ahead of time. It’s a big step toward a faster and less visible internet.
Browsers can analyze patterns to predict where users are going to go next, and start DNS resolution and TCP handshakes as soon as users hover over links. But to get the most out of these improvements, we can enable prebrowsing on our web pages, with three techniques at our disposal: DNS prefetching, resource prefetching, and prerendering.
Now let’s dive into each of these separately.
DNS prefetching
Whenever we know our users are likely to request a resource from a different domain than our site, we can use DNS prefetching to warm the machinery for opening the new URL. The browser can pre-resolve the DNS for the new domain ahead of time, saving several milliseconds when the user actually requests it. We are anticipating, and preparing for an action.
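For instance, if our pages pull assets from cdn.example.com, a minimal version of that hint is a link element in the head of the page:
<link rel="dns-prefetch" href="//cdn.example.com">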
Doing this informs the browser of the existence of the new domain, and it will combine this hint with its own pre-resolution algorithm to start a DNS resolution as soon as possible. The entire process will be faster for the user, since we are shaving off the time for DNS resolution from the operation. (Note that browsers do not guarantee that DNS resolution will occur ahead of time; they simply use our hint as a signal for their own internal pre-resolution algorithm.)
But exactly how much faster will pre-resolving the DNS make things? In your Chrome browser, open chrome://histograms/DNS and search for DNS.PrefetchResolution to see your own distribution of latencies for DNS prefetch requests. On my computer, for 335 samples, the average time is 88 milliseconds, with a median of approximately 60 milliseconds. Shaving 88 milliseconds off every request our website makes to an external domain? That’s something to celebrate.
But what happens if the user never clicks the button to access the cdn.example.com domain? Aren’t we pre-resolving a domain in vain? We are, but luckily for us, DNS prefetching is a very low-cost operation; the browser will need to send only a few hundred bytes over the network, so the risk incurred by a preemptive DNS lookup is very low. That being said, don’t go overboard when using this feature; prefetch only domains that you are confident the user will access, and let the browser handle the rest.
Look for situations that might be good candidates to introduce DNS prefetching on your site, such as links and assets that point to external domains you’re confident users will need.
DNS prefetching is currently supported on IE11, Chrome, Chrome Mobile, Safari, Firefox, and Firefox Mobile, which makes this feature widespread among current browsers. Browsers that don’t currently support DNS prefetching will simply ignore the hint, and DNS resolution will happen in a regular fashion.
Resource prefetching
We can go a little bit further and predict that our users will open a specific page in our own site. If we know some of the critical resources used by this page, we can instruct the browser to prefetch them ahead of time:
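A sketch of that hint, here prefetching a hypothetical script file needed by the next page:
<link rel="prefetch" href="/js/next-page.js">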
The browser will use this instruction to prefetch the indicated resources and store them on the local cache. This way, as soon as the resources are actually needed, the browser will have them ready to serve.
Unlike DNS prefetching, resource prefetching is a more expensive operation; be mindful of how and when to use it. Prefetching resources can speed up our websites in ways we would never get by merely prefetching new domains—but if we abuse it, our users will pay for the unused overhead.
Let’s take a look at the average response size of some of the most popular resources on a web page, courtesy of the HTTP Archive.
On average, prefetching a script file (like we are doing on the example above) will cause 16kB to be transmitted over the network (without including the size of the request itself). This means that we will save 16kB of downloading time from the process, plus server response time, which is amazing—provided it’s later accessed by the user. If the user never accesses the file, we actually made the entire workflow slower by introducing an unnecessary delay.
Here are some situations where, due to the likelihood of the user visiting a specific page, you can prefetch resources ahead of time—questionnaires or surveys, for example, where you know users will complete the workflow in a particular order.
Prerendering
What about going even further and asking for an entire page? Let’s say we are absolutely sure that our users are going to visit the about.html page in our site. We can give the browser a hint:
<link rel="prerender" href="http://example.com/about.html">
This time the browser will download and render the page in the background ahead of time, and have it ready for the user as soon as they ask for it. The transition from the current page to the prerendered one would be instantaneous.
Needless to say, prerendering is the most risky and costly of these three techniques. Misusing it can cause major bandwidth waste—especially harmful for users on mobile devices. To illustrate this, let’s take a look at this chart, also courtesy of the HTTP Archive.
In June of this year, the average number of requests to render a web page was 96, with a total size of 1,808kB. So if your user ends up accessing your prerendered page, then you’ve hit the jackpot: you’ll save the time of downloading almost 2,000kB, plus server response time. But if you’re wrong and your user never accesses the prerendered page, you’ll make them pay a very high cost.
When deciding whether to prerender entire pages ahead of time, consider that Google prerenders the top results on its search page, and Chrome prerenders pages based on the historical navigation patterns of users. Using the same principle, you can detect common usage patterns and prerender target pages accordingly. You can also use it, just like resource prefetching, on questionnaires or surveys where you know users will complete the workflow in a particular order.
At this time, prerendering is only supported on IE11, Chrome, and Chrome Mobile. Neither Firefox nor Safari has added support for this technique yet. (And as with resource prefetching, you can check prebrowsing.com to test whether this technique is supported in your browser.)
A final word
Sites like Google and Bing are using these techniques extensively to make search instant for their users. Now it’s time for us to go back to our own sites and take another look. Can we make our experiences better and faster with prefetching and prerendering?
Browsers are already working behind the scenes, looking for patterns in our sites to make navigation as fast as possible. Prebrowsing builds on that: we can combine the insight we have on our own pages with further analysis of user patterns. By helping browsers do a better job, we speed up and improve the experience for our users.
nytimes.com | by Steve Kolowich
WASHINGTON — One narrative that has driven widespread interest in free online courses known as MOOCs is that they can help educate the world. But critics say that the courses mostly draw students who already hold traditional degrees.
When Coursera, the largest provider of MOOCs, published a blog post about how a professor had used one of its online courses to teach refugees near the Kenya-Somalia border, it sounded to some like a satire of Silicon Valley’s naïve techno-optimism: Hundreds of thousands of devastated Africans stranded in a war zone? MOOCs to the rescue!
Still, details of the experiment paint a more nuanced picture, highlighting the challenges that MOOC providers face in trying to make sophisticated online courses work in deprived settings.
Barbara Moser-Mercer, a cognitive psychologist at the University of Geneva, ran the refugee experiment and wrote Coursera’s blog post about it. But in an interview, as well as a more formal article she wrote about the experiment for a European conference on MOOCs, she expanded on the logistical issues encountered.
Ms. Moser-Mercer’s background is translation and interpretation; she still sometimes serves as an interpreter for the United Nations. Mostly, though, she works in conflict zones. She is founder and director of InZone, a group that tries to intervene in “higher-education emergencies.”
Image courtesy of Wikimedia Commons / DFID - UK Department for International Development