Internet Explorer and the centered background SVG problem

We are rapidly approaching, I believe, a time when we no longer need to worry about things that don’t work in Internet Explorer.

But we’re not there yet.

I’ve started using SVG images on the web a lot, especially for logos, which I tend to put into a flexible-width container at the top left corner of pages. Which is great, except IE has an annoying habit of resizing the canvas for SVGs to fill the container they’re in. The whole logo appears on the page, scaled properly… but it’s horizontally centered in the container rather than flush against the left edge.

There appear to be many possible solutions, but this one does the trick while remaining 100% fluid/responsive (that is, the logo still scales with its container).

It does involve manually editing the SVG code, but it’s simple and you only need to do it once as part of the image prep process. In the <svg> tag, look for the viewBox attribute. It will most likely consist of four numbers: 0 0 followed by the width and height of the canvas. After this attribute, simply add width="x" height="y", where x and y are the same width and height values from the viewBox attribute.
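For example, with a hypothetical 200×50 logo, the opening tag would go from this:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 50">

…to this:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 50" width="200" height="50">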

It works!

Noted for future reference (by me): How to reset the MySQL root password in Ubuntu 18.04

As a bleeding-edge early adopter, here in June 2018 I am already using Ubuntu 18.04 LTS for new sites I’m setting up for clients. (How daring!)

I ran aground this afternoon with a new server setup, because I couldn’t log into phpMyAdmin as root (or, therefore, as anyone, since I hadn’t set up my own user yet).

All of the old familiar ways I had been trying, and the tutorials I had referred to for at least the past two years, were not working. So then I specifically searched for a solution for Ubuntu 18.04 and found this excellent tutorial.

First off, mysql_secure_installation wasn’t working. That was one of the “old familiar ways” I had already tried thrice. THRICE I TELL YOU!

The key, I think, was these two unexpected lines:

$ sudo mkdir -p /var/run/mysqld
$ sudo chown mysql:mysql /var/run/mysqld

Because that folder didn’t exist and/or didn’t have the proper permissions, some other steps were failing silently or giving error messages that failed to point me in the direction of the actual problem.

The other thing to note is that with the version of MySQL included in Ubuntu 18.04, these are the proper MySQL commands to run:

USE mysql;
UPDATE user SET authentication_string=PASSWORD("linuxconfig.org") WHERE User='root';
UPDATE user SET plugin="mysql_native_password" WHERE User='root';

And it is actually important to run both of those UPDATE commands, because in the tutorial the results displayed show that the first one updated a record for them, while the second didn’t. I had already run the first command (but not the second) in one of my failed updates. So when I ran these, the first one didn’t update any records and the second one did.
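For my own future reference, the whole process looks roughly like this. Consider it a sketch pieced together from that tutorial and my notes; it assumes the MySQL 5.7 packages that ship with Ubuntu 18.04, and "your_new_password" is obviously a placeholder:

$ sudo systemctl stop mysql
$ sudo mkdir -p /var/run/mysqld
$ sudo chown mysql:mysql /var/run/mysqld
$ sudo -u mysql mysqld --skip-grant-tables --skip-networking &
$ mysql -u root
mysql> USE mysql;
mysql> UPDATE user SET authentication_string=PASSWORD("your_new_password") WHERE User='root';
mysql> UPDATE user SET plugin="mysql_native_password" WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;
$ sudo pkill mysqld
$ sudo systemctl start mysql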

How to modify WooCommerce to prevent users from selecting UPS shipping for P.O. Box addresses

Anyone who’s dealt with e-commerce in any capacity probably knows that UPS won’t deliver to P.O. boxes. Well, technically they can’t deliver to P.O. boxes. And apparently they’ll forward packages on to the box owner’s physical address, but they charge a big extra fee to do it. So, you want to avoid it.

Unfortunately, WooCommerce and its UPS Shipping add-on do not account for this, and will accept UPS orders to P.O. box addresses. Not good.

The official WooCommerce developer documentation has an article on how to block P.O. box shipping, but it applies to all shippers. Not what we want.

Also, I’m not sure if the documentation is outdated or what, but their code sample didn’t work for me with the latest version (3.4.3) of WooCommerce, because of the wc_add_notice() function.

I’ve modified the original code to add a check for UPS shipping, and also to use the $errors variable. (I also considered removing the global $woocommerce; line since it seems unnecessary, but I didn’t take the time to test whether or not it’s definitely safe to remove, so I left it in.)

add_action('woocommerce_after_checkout_validation', function($data, $errors) {
  global $woocommerce;
  // Only run the check when the selected shipping method is UPS.
  if (isset($data['shipping_method'][0]) && strpos($data['shipping_method'][0], 'ups') === 0) {
    $address1 = (isset($data['shipping_address_1'])) ? $data['shipping_address_1'] : $data['billing_address_1'];
    $address2 = (isset($data['shipping_address_2'])) ? $data['shipping_address_2'] : $data['billing_address_2'];

    // Strip spaces, periods and commas so "P.O. Box", "PO Box", "p.o.box", etc. all normalize to "pobox".
    $replace = array(' ', '.', ',');
    $address1 = strtolower(str_replace($replace, '', $address1));
    $address2 = strtolower(str_replace($replace, '', $address2));

    if (strstr($address1, 'pobox') || strstr($address2, 'pobox')) {
      $errors->add('shipping', __('Sorry, UPS cannot deliver to P.O. boxes. Please enter a street address or choose another shipping method.', 'woocommerce'));
    }
  }
}, 10, 2);

Important notes:

1. This code may not immediately work for you; I believe the 'ups' string in the conditional line may vary depending on your Shipping Classes settings, so you may need to investigate exactly what values are returned in $data['shipping_method']. Since this code is fired off by an AJAX call, it can be difficult to debug. I was able to crudely debug it by commenting out the conditional, then appending print_r($data) to the error string. (A slightly tidier logging approach is sketched after these notes.)

2. This is using an anonymous function, so it won’t work in PHP versions below 5.3. But you’re not using a PHP version that old, are you? ;)

3. The original version checked address line 1 and the postcode field, rather than address lines 1 and 2. I’ve United States-ified my code because that’s what I needed. If you’re part of the other 95% of the world, you may need to add that back in, with appropriate adjustments to the nested conditional. (I’m not really sure whether this issue is as UPS-specific outside the US, so my modifications may not be relevant.)
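As for the debugging mentioned in note 1: rather than dumping print_r() output into the error message like I did, you could temporarily log the submitted data and watch the PHP error log instead. A minimal sketch (remove it once you’ve found the value you need):

add_action('woocommerce_after_checkout_validation', function($data, $errors) {
  // Temporary debugging only: log the submitted shipping method(s) so you can
  // see exactly what appears in $data['shipping_method'] for your store.
  error_log('Checkout shipping_method: ' . print_r(isset($data['shipping_method']) ? $data['shipping_method'] : null, true));
}, 10, 2);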

Why Capitalism Is Stupid: A Case Study

Note that I didn’t say bad, or evil, but stupid.

Before we go any further, let me state that I have never studied economics, and I’ve only taken one intro-level philosophy class. The topics I’m bringing up here are steeped in both, and I know I’m out of my element.

Capitalism has, as a core principle, a belief that competition drives innovation and growth, which helps a society to thrive; whether that benefit to society is intentional or just a byproduct is debatable. And in practice capitalism is just as susceptible to corruption as communism — both fail as a result of the boundless greed of the powerful. But for the moment let’s not dwell on the big picture… let’s just look at one example of how capitalism can be… well, stupid.

I live in Minneapolis, a large city of 425,000. Along with St. Paul (pop. 310,000) it is the core of a metro area of 3.6 million. Like all large cities, Minneapolis is divided into a number of distinct communities. The community I live in is called Longfellow, and combined with neighboring Nokomis, the immediate area has a population of around 65,000 people.

Unlike many other parts of the city, Longfellow-Nokomis has always resisted the heavy encroachment of large chain businesses. Of course we have a Target, a scattering of McDonald’s and Subway locations, etc. But for the most part, the businesses here are small and local.

In fact, in an area comprising about 13 square miles, home to 15% of the entire city’s population, there is only one Caribou Coffee, and until recently, zero Starbucks (not counting the one inside the Target… which is a capitalism story for another blog post). Of course we have dozens of local independent coffee houses, but only one each of the two big chains.

And they’re right across the street from each other.

How does it benefit the citizens of the community to have a Caribou and a Starbucks within sight of each other, when the vast majority of residents of the area don’t live within walking distance of either one? Who does a decision like this really serve? How does this help society to thrive?

What we’re looking at here is not a failure of capitalism in principle, but an example of how it fails in practice, as power is consolidated in the hands of a few greedy, powerful corporations.

I’m sure this is the point where a (21st century) Republican dutifully says, “But how can a corporation be greedy? Greed is a human emotion, and corporations are businesses, not people.” Oh right, corporations are only people when it comes to exercising their right to free speech. (And political money equals speech.) I’m sure few, if any, of the individuals within these corporations are ruthlessly greedy. But they don’t need to be. The system is built on a principle whose logical consequence is that increasing profits outweighs any other considerations. That could be defined as greed.

One could argue that Caribou and Starbucks have grown to that tipping point in capitalism where they are no longer focused on competition through innovation, but on stifling competition through consolidation of power. Nothing exemplifies capitalism’s absurd failures to me better than a business opening its first location in a large, heavily populated area within feet of its rival.

Sadly, I’m sure these corporations did extensive research and determined that the best location in Longfellow-Nokomis for a major chain coffee house was right at this spot, even if there was already another one right there. And I bet both will do booming business, because… honestly? Most of us just don’t question it.

Now get in the car. I want some coffee.

Getting Google to remove fake hack URLs from its indexes for your site

As a web developer/systems admin, dealing with a hacked site is one of the most annoying parts of the job. Partly that’s on principle… you just shouldn’t have to waste your time on it. But also because it can just be incredibly frustrating to track down and squash every vector of attack.

Google adds another layer of frustration when they start labeling your site with a “This site may be hacked” warning.

A lot of times, this is happening because the hack invented new URLs under your domain that Google indexed, and for various reasons, Google may not remove these pages from its index after it crawls your thoroughly-cleaned site, even though those URLs are no longer there and are not in your sitemap.xml file. This issue may be exacerbated by the way your site handles redirecting users when they request a non-existent URL. Be sure your site is returning a 404 error in those cases… but even a 404 error may not be enough to deter Google from keeping a URL indexed, because the 404 might be temporary.

410 Gone

Enter the 410 Gone HTTP status. It differs from 404 in one key way. 404 says, “What you’re looking for isn’t here.” 410 says, “What you’re looking for isn’t here and never will be again, so stop trying!”

A quick way to find (some of) the pages on your site that Google has indexed is to head on over to Google (uh, yeah, like you need me to provide a link) and just do an empty site search, like this:

site:blog.room34.com

Look for anything that doesn’t belong. And if you find some things, make note of their URLs.

A better way of doing this is using Google Search Console. If you run a website, you really need to set yourself up on Google Search Console. Just go do it now. I’ll wait.

OK, welcome back.

Google Search Console lets you see URLs that it has indexed. It also provides helpful notifications, so if Google finds your site has been hacked, it will let you know, and even provide you with (some of) the affected URLs.

Now, look for patterns in those URLs.

Why look for patterns? To make the next step easier. You’re going to edit your site’s .htaccess file (assuming you’re using Apache, anyway… sorry I’m not 1337 enough for nginx), and set up rewrite rules to return a 410 status for these nasty, nasty URLs. And you don’t want to create a rule for every URL if you can avoid it.

When I had to deal with this recently, the pattern I noticed was that the affected URLs all had a query string, and each query string started with a key that was one of two things: either a 3-digit hexadecimal number, or the string SPID. With that observation in hand, I was able to construct the following code to insert into the .htaccess file:

# Force remove hacked URLs from Google
RewriteCond %{QUERY_STRING} ^([0-9a-f]{3})=
RewriteRule (.*) - [L,R=410]
RewriteCond %{QUERY_STRING} ^SPID=
RewriteRule (.*) - [L,R=410]

Astute observers (such as me, right now, looking back on my own handiwork from two months ago) may notice that these could possibly be combined into one. I think that’s true, but I also seem to recall that regular expressions work a bit differently in this context than I am accustomed to, so I kept it simple by… um… keeping it more complicated.

The first RewriteCond matches any query string that begins with a key consisting of a 3-digit hex number. The second matches any query string that begins with a key of SPID. Either way, the response is a 410 Gone status, and no content.
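For the record, the combined version would presumably look something like this. I haven’t actually tested it, so treat it as a sketch and keep the two-rule version if in doubt:

# Untested combined version of the two rules above
RewriteCond %{QUERY_STRING} ^([0-9a-f]{3}|SPID)=
RewriteRule (.*) - [L,R=410]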

Make that change, then try to cajole Google into recrawling your site. (In my case it took multiple requests over several days before they actually recrawled, even though they’re “supposed” to do it every 48-72 hours.)
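It’s also worth confirming for yourself that the rules really do return a 410 before you start pestering Google. A quick curl check does the trick; the domain and query string here are just made-up examples that match the hex pattern above:

$ curl -s -o /dev/null -w "%{http_code}\n" "https://example.com/?a1b=123"
410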

Good luck!