On the Supreme Court, the First Amendment, websites and wedding cakes

As someone who supports LGBTQ+ rights, I am not exactly a fan of this recent SCOTUS decision. At the same time, as a web designer/developer, I think it is extremely important to recognize the difference between this type of work and a more commodity-oriented business.

Web design and development is creative work, requiring a personal input of time and energy, thought and consideration. It’s not a public storefront, selling premade goods. It’s not even a construction or contractor type job, where you may have disagreements with the client’s worldview, but the work itself is (generally) philosophically and politically neutral.

As a web designer/developer, I absolutely choose which types of projects I do or don’t want to work on, and I insist on maintaining the right to refuse to take on a project whose mission or purpose does not align with my values.

To put a finer point on it: I think it’s worth distinguishing between a web designer who is actively collaborating with their client to build something custom for them, vs. a DIY-type system where the client is provided with tools to build a site for themselves. In other words, for the type of work I do in client services, I should have the right to refuse to participate in a project whose purpose I disagree with. But if I were, for instance, Squarespace, selling access to a service I built, which lets users create their own sites, then I should not have the right to decide who can or can’t use the service. (As long as what they’re doing with it is legal.)

I can get even more specific to my own situation here. In addition to my client services, I also sell a product: a WordPress plugin for integrating calendars (e.g. Google Calendar, Office 365, etc.) into your website. I do not, and I strongly believe should not, decide which types of sites are allowed to use my plugin. I know there are customers of the plugin whose businesses/organizations do things I do not particularly agree with. But there I am selling a finished product — a commodity. Anyone who wants to buy it (again, as long as they’re not doing something illegal — although I won’t even know that, unless I specifically take the time to investigate them, post-sale) is welcome to do so. But if those same people approached me in my client services role to help them build their website itself, I would/should have the right to turn down the project if I didn’t want to do it.

Of course, there’s another layer to this: Why did this even become a court case? As I understand it, the designer here is the plaintiff, and doesn’t actually run a wedding website business yet. She is clearly doing this to make a political statement. I’m not sure how much the Colorado law was really pushed on her, or if it even technically applies to her (hypothetical) business. For me, and I think a lot of other people in my shoes, it’s a simple enough practical matter to turn down projects you don’t want to work on — regardless of why you don’t want to work on them. As a creative worker, you are essentially selling your time, not a commodity, and you have a limited supply. It’s a simple matter of being “too busy” to take on the project. There may be laws saying a business can’t discriminate, but surely there cannot be any laws that say a business has to work more hours than humanly possible.

So, what’s the upshot here? Well, it kind of seems like this case isn’t really about the First Amendment rights of business owners like me at all, because I’m skeptical that those were ever really under any threat. But it definitely opens the floodgates for businesses of all types — including those that truly are “open to the public” and selling commodity goods — to discriminate against potential LGBTQ+ customers. And that’s why I’m opposed to the current SCOTUS super-majority: their aggressive efforts at rolling back civil rights. That’s what this case is really about.

The Computer Course #1: Bits

For a brief introduction to this blog series, start here.


Let’s start at the beginning. Not the Big Bang, or even the first electronic computer, but rather at the most basic element of digital computing: the binary switch.

Binary switches

What is a binary switch? It’s a switch that has two settings: on and off. An on/off switch is the basis of everything in computers. Everything a computer does is created by a series of binary switches… a lot of binary switches.

In the early days of computers, these binary switches were vacuum tubes. But vacuum tubes are big and just like light bulbs, they can burn out. A computer with enough vacuum tubes to do useful calculations could fill a warehouse and would need constant maintenance to replace the burned-out vacuum tubes.

Eventually the vacuum tubes were replaced with transistors, which are much smaller and far more reliable. Over time the transistors became smaller and smaller, and today more transistors can fit on a silicon chip the size of your fingernail than the number of vacuum tubes that would fill up a warehouse 50 or 60 years ago.

Bits and bytes

The smallest unit in computing is called a bit. A bit is a very small piece of information: the state of a binary switch. Is it on or is it off? Now combine trillions of these switches and you have the capabilities of a modern computer: storing huge amounts of data, displaying photographs, showing videos, making music, playing games, writing software.

But a huge collection of bits by itself is hard to work with. So we combine bits into larger groups. The next unit is called a byte, and it consists of 8 bits. While a bit can only store two possible states (on or off), a byte can store any of the possible combinations of its 8 on/off switches:

2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 = 256

Just 8 bits taken together can produce 256 possible values for a byte. Bytes are the smallest units of information we typically work with in computers. In the early days of personal computers, before graphical displays, a character set called ASCII was used to represent text: its 128 characters cover all of the upper and lowercase letters in the Latin alphabet, along with numbers, punctuation marks and special characters like spaces and line breaks, while various “extended ASCII” sets used the remaining 128 values of a byte for accented letters and other symbols. Today’s computers use “multi-byte” character sets such as UTF-8 to display over a million possible characters from every alphabet and language, plus a huge number of other symbols and even emoji.
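
If you want to see this firsthand, here’s a tiny sketch. (I’ll use Python for the code examples in this series, purely as an illustration; the ideas aren’t tied to any particular language.)

```python
# A byte is 8 bits, so it can hold 2 x 2 x ... x 2 = 2**8 = 256 values.
print(2 ** 8)                      # 256

# ord() gives the number a character is stored as; chr() goes the other way.
print(ord("A"))                    # 65
print(chr(65))                     # A

# Multi-byte character sets: this emoji takes four bytes in UTF-8.
print("🎉".encode("utf-8"))        # b'\xf0\x9f\x8e\x89'
print(len("🎉".encode("utf-8")))   # 4
```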

Bases

When most humans do math, we use decimal numbers, also known as “base 10”. It is believed that we settled on base 10 numbers because we have 10 fingers! We have 10 different digits we use to represent numbers:

0 1 2 3 4 5 6 7 8 9

And we combine those digits to make bigger numbers:

10 100 1,234,567,890

In elementary school we learn about “places” in numbers that have more than one digit. Moving from right to left, each new place represents ten of the next smaller unit.

Thousands   Hundreds   Tens   Ones   Total
                        5      1     = 51
            1           2      3     = 123
1           0           0      0     = 1000

But base 10 is not the only way of handling numbers. Remember that everything in computers is based on binary switches, so computers use binary numbers, also known as “base 2”.

In base 10 we have ten digits. In base 2 we have two digits:

0 1

The 0 and 1 correspond to “off” and “on” for the binary switches. You can make binary numbers have multiple digits just like decimal numbers, but remember that you only have 0 and 1 to work with, so the places are different.

In decimal numbers, each new place equals ten times the value of the place before it, because it represents ten of that unit. Likewise in binary numbers, each new place equals two times the value of the place before it:

Eights   Fours   Twos   Ones   Total    Decimal Equivalent
                 1      0      = 10     2
         1       0      1      = 101    5
1        0       0      0      = 1000   8

You can see that it takes many more digits in binary to represent the same value than it does in decimal, and binary numbers very quickly become hard for humans to read. For instance, what is the decimal equivalent of this binary number?

1001011010

It is only 602 in decimal numbers.
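
If you don’t want to add up the places by hand, Python can do the conversion in both directions. A quick sketch:

```python
# int() with a base of 2 converts a string of binary digits to decimal...
print(int("1001011010", 2))    # 602

# ...and bin() converts a decimal number back to binary.
print(bin(602))                # 0b1001011010

# The same value, added up place by place:
print(512 + 64 + 16 + 8 + 2)   # 602
```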

Exponents

You might have noticed that in both binary and decimal numbers, each place in a multi-digit number is worth the base (the number of possible digits) times the next smaller unit, which is in turn the base times its next smaller unit, and so on. In decimal numbers it works this way:

Place   Unit        Multiplied
1       Ones        1
2       Tens        10 x 1
3       Hundreds    10 x 10 x 1
4       Thousands   10 x 10 x 10 x 1

All of those 1’s are not really necessary for the multiplication, so let’s just remove them, and while we’re at it, knock down the place numbers by one each as well:

Exponent   Value    Multiplied
0          1        1
1          10       10
2          100      10 x 10
3          1,000    10 x 10 x 10
4          10,000   10 x 10 x 10 x 10

In math, multiplying a number by itself a certain number of times is called raising it to a power, and the count is written as an exponent. You may have done squares or cubes in math class, which are represented like this:

10² = 100
10³ = 1000

Note that the exponent — the small raised number to the right of the main number — is the same as the number in the first column of the table above. We talk about “squares” and “cubes”, but in general with exponents you say “to the nth power”, so 10 squared is also “10 to the 2nd power” and 10 cubed is “10 to the 3rd power”.
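
Most programming languages have an operator for exponents. In Python it’s written **, since there’s no way to type a small raised number on a keyboard:

```python
print(10 ** 2)   # 100   "10 squared", or "10 to the 2nd power"
print(10 ** 3)   # 1000  "10 cubed", or "10 to the 3rd power"
print(2 ** 10)   # 1024  "2 to the 10th power"
```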

Powers of two work in the same way, with the results equal to the place values in a binary number. Because powers of two are used in so many aspects of computing, it can be useful to memorize as many of them as possible:

Exponent   Binary Value        Decimal Value   Bytes
2⁰         1                   1               1 B
2¹         10                  2               2 B
2²         100                 4               4 B
2³         1000                8               8 B
2⁴         10000               16              16 B
2⁵         100000              32              32 B
2⁶         1000000             64              64 B
2⁷         10000000            128             128 B
2⁸         100000000           256             256 B
2⁹         1000000000          512             512 B
2¹⁰        10000000000         1,024           1 KB
2¹¹        100000000000        2,048           2 KB
2¹²        1000000000000       4,096           4 KB
2¹³        10000000000000      8,192           8 KB
2¹⁴        100000000000000     16,384          16 KB
2¹⁵        1000000000000000    32,768          32 KB
2¹⁶        10000000000000000   65,536          64 KB

You might have noticed that in decimal notation, each power of 10 is equal to 1 followed by that number of 0s. (For example, 100 is 10², or a 1 followed by two 0s.) As you can see here, the same is true for the powers of two in binary notation.

You might also notice that in the column showing the number of bytes, I switched from B (for “bytes”) to KB (for “kilobytes”) at 1024. Even though “kilo” means one thousand, in computing the prefix has traditionally been applied at 1024 instead, to stay consistent with the binary numbers. One kilobyte is 1024 bytes; one megabyte is 1024 kilobytes (1024 x 1024 = 1,048,576 bytes); one gigabyte is 1024 megabytes (1,048,576 kilobytes, or 1,073,741,824 bytes); and so on.
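
Rather than memorizing the table, you can also generate it. A short Python sketch that prints the powers of two and builds up the 1024-based units:

```python
# Print the powers of two from the table above.
for n in range(17):
    print(f"2^{n} = {2 ** n:,}")

# The 1024-based units build on each other the same way.
kilobyte = 1024
megabyte = 1024 * kilobyte
gigabyte = 1024 * megabyte
print(f"{megabyte:,}")   # 1,048,576
print(f"{gigabyte:,}")   # 1,073,741,824
```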

Hexadecimals

Reading binary numbers is easy for computers — in fact, it’s pretty much all they do! But since it’s so difficult for humans, we convert these binary numbers into a base that is easier to read. Unfortunately, binary numbers don’t correspond nicely with decimal numbers (ten is not a power of two), so we use “base 16” numbers, also known as hexadecimal.

But there’s a problem for us in writing hexadecimal numbers. We’ve run out of digits! Our decimal system only includes ten digits, so to go beyond 9, we switch to letters. The 16 digits available to us in hexadecimal are:

0 1 2 3 4 5 6 7 8 9 A B C D E F

Since 16 is a power of two (2⁴, or 10000 in binary), each hexadecimal digit corresponds to exactly four binary digits, which means we can easily convert binary to hexadecimal. More easily than to decimal, anyway.

As we learned earlier, one byte allows 256 possible combinations of 0 or 1. In binary numbers, that’s a range from 00000000 to 11111111. In decimal numbers that’s a range of 0 to 255 — kind of a weird place to stop (and note that it’s 255 instead of 256, because zero takes up one of the values!) — but in hexadecimal, 255 is written as FF. We’ve “filled up” both places. 256 written in hexadecimal is 100.
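
Python can convert between decimal, binary and hexadecimal directly, which makes the “four bits per hex digit” relationship easy to see:

```python
# hex() converts decimal to hexadecimal; int() with base 16 converts back.
print(hex(255))           # 0xff   -- both places "filled up"
print(hex(256))           # 0x100
print(int("FF", 16))      # 255

# Because 16 is 2 to the 4th power, each hex digit is exactly four bits:
print(f"{0b1001:x}")      # 9
print(f"{0b0110:x}")      # 6
print(f"{0b10010110:x}")  # 96  (1001 -> 9, 0110 -> 6)
```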

32-bit addressing

If you’re familiar with video game systems, you may know about the “8-bit” systems from the 1980s, the “16-bit” systems from the 1990s, and the “32-bit” and “64-bit” systems that followed. These “bit” numbers describe the size of the chunks of data those systems’ processors work with, and also correspond to “memory addressing” — basically, a map that lets the system’s processor find information that is currently stored in the system’s memory. It’s how a computer (or a video game system) remembers what it’s doing!

32-bit addressing allows for 2³² possible “locations” in memory. That’s a big number! 4,294,967,296 to be exact — the number of bytes in 4 gigabytes. There are a lot of implications for this number: it means that 4 GB is the largest amount of memory a 32-bit system can support (although some clever tricks were devised to get around this limitation). It also means that 4,294,967,295 (that’s 2³² minus 1, because of zero) is the largest unsigned integer a 32-bit system can work with in a single chunk.

But remember, a 32-bit system has to be able to work with negative numbers too, so one of the 32 bits is used to remember whether the number is positive or negative. That leaves 2³¹ possible non-negative values, and since zero is a number too, the biggest number a 32-bit system can really handle is 2,147,483,647.
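
A quick Python sketch of these limits. (Python’s own integers can grow as large as they like, so I’m using the standard struct module, which packs numbers into fixed-size 32-bit chunks, to show where the boundary is.)

```python
import struct

print(2 ** 32)       # 4294967296 possible values
print(2 ** 32 - 1)   # 4294967295, the largest unsigned 32-bit integer
print(2 ** 31 - 1)   # 2147483647, the largest signed 32-bit integer

# The "i" format is a signed 32-bit integer: the maximum value fits,
# but one past it raises an error.
struct.pack("i", 2 ** 31 - 1)    # fits
try:
    struct.pack("i", 2 ** 31)    # one too many
except struct.error as e:
    print(e)
```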

Here’s something kind of fun to know about 32-bit numbers: UNIX-based computers calculate dates and times as a number of seconds before or after the “UNIX epoch”. UNIX was invented in the early 1970s, so its creators decided that 0 should represent midnight (UTC) on January 1, 1970.

Sometimes you might come across a strange date and time on a computer: December 31, 1969 11:59:59 PM. Where’d that come from? Well… if time is stored in seconds, and 0 is midnight on January 1, 1970, then what do you think −1 would equal?

The downside of storing dates and times as a number of seconds before or after midnight on the first day of 1970 is that a 32-bit number doesn’t really let you go very far. Specifically, it gets you a bit more than 68 years in either direction.

The Y2K38 bug

Many computer programs in the 20th century were written using 2-digit years instead of 4-digit years. This was to save space when memory and hard disks were really expensive. They’re not anymore. But as the year 2000 approached, a lot of computer programs were still only using 2 digits for the year, and everyone became worried about the “Y2K” bug. What would happen when the year rolled over from 99 to 00? People worried that computer systems would think it was 1900. Lots of people spent lots of weeks and months rewriting computer software to avoid the bug, and in the end it turned out not to be much of a problem at all.

But UNIX systems have a different problem. A 32-bit processor running UNIX can only count about 68 years of seconds after 1970 before it runs out of numbers. The exact date when 32-bit dates will break is January 19, 2038.
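
You can check both of these dates with Python’s datetime module:

```python
from datetime import datetime, timezone

# Timestamp 0 is midnight UTC on January 1, 1970...
print(datetime.fromtimestamp(0, tz=timezone.utc))
# 1970-01-01 00:00:00+00:00

# ...and the largest signed 32-bit value lands right on the Y2K38 moment.
print(datetime.fromtimestamp(2 ** 31 - 1, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```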

Fortunately, many if not most UNIX systems are now running on 64-bit processors, which can handle numbers up to 2⁶⁴. You might think that’s only buying us another 68 years, but remember how quickly exponents grow. 2³² is a little over 4 billion, but 2⁶⁴ is so big it’s usually written in scientific notation, as about 1.84 x 10¹⁹. That’s a little over 18 quintillion. It’s enough to allow a 64-bit processor to access about 18 exabytes (18 billion gigabytes) of memory, and it means UNIX won’t run out of seconds to count for roughly 292 billion years in either direction!

IP addresses

32-bit addressing is used somewhere else that anyone who’s spent much time online has encountered: IP addresses. Every device connected to the Internet is assigned an IP address. Much like memory addressing within a computer, IP addresses allow different devices connected to the Internet to communicate with each other.

Since an IP address is a 32-bit number, there are a little over 4 billion possible IP addresses available. With over 7 billion people in the world, we’re at risk of running out! IPv4, the version of IP addresses we are all most familiar with, has this limitation, but a newer protocol called IPv6 allows for 2¹²⁸ addresses — an incomprehensibly huge number. But since IPv4 and IPv6 are not compatible with each other, the transition has been slow. And just like with the 4 GB memory limitation of 32-bit computers, some tricks have been developed to get around the limitation, namely Network Address Translation (NAT), which allows all of the devices connected to a local network to share the same “external” IP address while having different “internal” IP addresses within the network. A router is a specialized computer that creates a bridge between these internal and external networks.

An IP address could be represented in many different ways. Since it is really a 32-bit binary number, it could be written as a 32-digit string of 0s and 1s, or as eight hexadecimal digits. But in practice we represent an IP address as a group of four decimal numbers separated by periods. 32 bits is equal to 4 bytes, and each of the four numbers in an IP address is one of those bytes. That’s why each of the numbers in an IP address is in the range of 0 to 255.
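
Python’s ipaddress module makes the “an IP address is just a 32-bit number” idea concrete:

```python
import ipaddress

# The dotted form is just the four bytes of a 32-bit number in decimal.
addr = ipaddress.ip_address("192.168.1.1")
print(int(addr))        # 3232235777
print(hex(int(addr)))   # 0xc0a80101  (c0=192, a8=168, 01=1, 01=1)

# And back again, from the 32-bit number to the dotted form:
print(ipaddress.ip_address(3232235777))   # 192.168.1.1
```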

But in practice, the last number usually only runs from 1 to 254, because certain values have special roles in IP addressing. Something called the subnet mask is used by routers to determine whether a given IP address is within their own network or not. Subnet masks allow the router to ignore a certain number of digits in the IP address. A subnet mask of 255.255.255.0 tells the router that any IP address whose first three numbers are the same as the router’s own IP address is on the local network, and only the last number is used to distinguish the devices on the network. (The all-zeros value identifies the network itself, and 255 is the broadcast address, which means a network like this can support at most 254 devices, including the router itself.)
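
Here’s a sketch of the same idea using Python’s ipaddress module, which writes the 255.255.255.0 mask in its equivalent “/24” notation:

```python
import ipaddress

# A 255.255.255.0 subnet mask is the same thing as a "/24" network.
network = ipaddress.ip_network("192.168.1.0/24")
print(ipaddress.ip_address("192.168.1.42") in network)    # True
print(ipaddress.ip_address("192.168.2.42") in network)    # False
print(network.num_addresses)   # 256 total; 254 usable by devices
```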

Some IP address “blocks” are reserved for use within internal networks. This allows huge numbers of devices to use the same IP addresses — not conflicting with each other because they’re only using them internally — and is the main way we can manage to have far more than 4 billion devices connected to the Internet at once around the world.

The IP blocks reserved for internal network use are:

10.x.x.x
169.254.x.x (the “link-local” block, self-assigned when no other address is available)
172.16.x.x through 172.31.x.x
192.168.x.x
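
The ipaddress module also knows about these reserved blocks:

```python
import ipaddress

print(ipaddress.ip_address("10.1.2.3").is_private)          # True
print(ipaddress.ip_address("192.168.1.1").is_private)       # True
print(ipaddress.ip_address("169.254.10.20").is_link_local)  # True
print(ipaddress.ip_address("8.8.8.8").is_private)           # False
```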

Domain names

One final thought relating to IP addresses, but a bit off topic from the “bits” where we started:

IP addresses are hard to remember.

In the early days of the Internet, someone (Paul Mockapetris) realized this and invented DNS — the Domain Name System.

Domain names are easy to remember, like “google.com”. The underlying system that makes them work is a network of domain name servers located all around the world, which keep a set of “zone files” that store information about which IP address each of these domains corresponds to. Any time you type a URL in your web browser, the first thing that happens is your computer looks up the domain name portion of the URL.

It first checks its own cache of DNS information, and if you haven’t visited the domain recently, it looks it up with the DNS server you’re configured to use. Your DNS server may be the “authority” on some domains, and it also keeps its own cache of domains that you, or anyone else using it, has recently looked up. If the domain name is in its cache, it tells your computer. If not, the DNS server sends a request to the domain’s authoritative server, gets the IP address, stores it in its cache, and sends it to your computer, which also stores it in its own cache.

All of this happens in a tiny fraction of a second. And once it’s done, your computer knows the IP address of the computer it’s trying to talk to, and it sends a request for the web page (for example) to that IP address, and the computer on the other end responds with the requested information.
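
You can perform this same lookup yourself. A minimal sketch using Python’s socket module, which asks whatever DNS server your computer is configured to use:

```python
import socket

# Ask your configured DNS resolver for a domain's current IP address.
# (The result varies depending on where and when you run this.)
print(socket.gethostbyname("google.com"))   # e.g. 142.250.190.78
```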

The Computer Course: An Introduction

This summer, my 11-year-old son is starting an informal internship here at Room 34 Creative Services, where I will be giving him some hands-on experience building websites, along with a series of — hopefully interesting — lectures on the basics of various aspects of computing.

Last Friday was his first day, and also the first lecture. Entitled “Bits”, it was an introduction to the most basic elements of computing — binary switches, or bits — following the mathematical implications of binary numbers through to the specific applications of hex code colors in HTML/CSS and 32-bit IP addresses, along with a supplementary discussion of domain names and how they relate to IPs. (That supplementary discussion being the original intended topic of the whole lecture, but I digressed… or, I guess, regressed.)

I have decided to begin a companion series of blog posts, grouped under the new category The Computer Course, not replicating word for word the lectures themselves (since the lectures are mostly off the cuff), but using the same basic outlines as the lectures, and adapting the content in a way that suits the medium.

Google: anatomy of a (half-assed) web redesign

There are many things Google is good at. Internet search and targeted advertising clearly being the top two. I use and appreciate several of Google’s products, especially Gmail, Google Reader and Chrome. But I only use Gmail as a reliable email provider with great spam filtering; I hate the web interface, and check my mail using the native mail clients on my Mac and iPhone. I use Google Reader solely to manage my subscriptions, whereas I actually read my RSS feeds, on all of my devices, with Reeder. And the only times I fire up Chrome are when I need to use Flash, per John Gruber. In general, I like Google’s products for the power of their underlying technologies, just as I hate them for their miserable user interfaces.

I think there are very few people who would consider design to be one of Google’s strong suits, from their traditionally un-designed home page, to their hideous logo (which, nonetheless, went through several apparently well-, or at least extensively-, considered revisions), to the notorious case where, engineers to the core, they logically weighed the relative merits of 41 shades of blue.

If you actually use any of Google’s websites directly, you’ve surely noticed in the last 24 hours that there has been a redesign. The most distinctive feature is the jarring black bar now at the top of all (well, most) pages. Personally I’d prefer something a little more subtle, but it’s tolerable, and presumably achieves its goal of getting your attention by being the only solid black area on your computer screen.

What really bothers me about this redesign is the lack of internal consistency as you dig deeper. To wit, let’s have a look at the landing pages of Google’s three biggest search tools (as determined by their placement in the black bar): Web, Images and Videos:

The main things I notice about the main Google (Web) search page compared to the previous version are that the logo is slightly smaller (and appears to have been refined in terms of the extent of 1997-era Photoshop effects applied to it, although I think that change happened a few months ago), and that the “Google Search” and “I’m Feeling Lucky” buttons have been redesigned. They have very slightly rounded corners, an extremely subtle off-white gradient, and are set in dark gray Arial bold 11-point (or so) type.

On Google Images, the logo appears to be basically the same (although perhaps a bit more dithered), but it is much higher on the page. The search box itself is darker and has a drop shadow. The “Search Images” button is larger, has sharp corners and a more intense gradient, and is set in black Arial, larger and normal weight. If I’m not mistaken, this is how the buttons on most Google sites looked prior to yesterday’s redesign, so this appears mainly to be a case of Google Images not keeping up with the changes happening elsewhere.

The page is also cluttered up with instructions and a rather arbitrary set of four sample images. I never bothered to read that text or figure out why the images were there until just now as I was writing this article. Being able to perform a visual search by dragging a sample image into the search box is a really cool idea, but anecdotally I would suggest Google has a daunting challenge in educating users about it, if making it the only thing on the page besides the search box itself still doesn’t get the user’s (i.e. my) attention. Maybe their insistence on using undifferentiated plain text for everything (however much it might make Jakob Nielsen proud) is part of the problem.

Google Videos is really the odd man out. A smaller logo, set too far down on the page, and a bright blue search button with no text, just a magnifying glass icon, that would look more at home on a Windows XP start screen than on a Google page. (Astute observers will also note from these screenshots that Google Videos, unlike Google Images and Google Web, displays a glowing focus state on the search box, due to the lack of a :focus { outline: none; } rule in the CSS for that element.)

I realize this blue button is more of the direction Google’s heading and I do like it visually, even if I don’t think the search button needs to be so prominent on a page that contains very little else. But the thing that bothers me is the overall inconsistency between these tools.

Consistency is a big buzzword for me. It is absolutely the most important thing to consider in good UX and UI design. It doesn’t matter how novel your design elements are; if you present them consistently, users will quickly learn how to use them and will gain confidence with your tools. They will also gain expectations that you then have to manage. Those expectations do impose limitations on you in the future, sure, but they also relieve you of the burden of having to reinvent every page.

Consistency demands a good style guide, something that is easy to overlook. And just as important as having the style guide is having the commitment to using it. That’s something even a company as big as Google clearly struggles with.