
Today I Learned

A day-to-day accumulation of knowledge: nuggets we discover that others may find useful. As our mum always used to say, "sharing is caring".

WebAudio: Dezippering removed in Chrome

Whilst archiving a site recently, I noticed the following warning in the Chrome dev tools, which looked pretty urgent:

[Deprecation] GainNode.gain.value setter smoothing is deprecated and will be removed in M64, around January 2018. Please use setTargetAtTime() instead if smoothing is needed. See [https://www.chromestatus.com/features/5287995770929152] for more details.
Animation.initializeWaveform @ animations.js:44
AVUgenTwo.js:41 

I dutifully followed the Chrome Status feature link to read about dezippering. You what?!

Dezippering is actually quite simple: it refers to smoothly transitioning to a new value rather than setting it immediately (similar to the attack phase of an ADSR audio envelope). That means this is not the immediate assignment you might think:

gainNode.gain.value = 0.6;

Chrome used to do this dezippering by default/magic when you set a value. In line with the spec, they removed the behaviour.

To clean up and remove the warning you’ll need to use either setValueAtTime or setTargetAtTime. The latter was useful to me for removing audible clicks now that the magic dezippering/attack phase had been removed:

gainNode.gain.setTargetAtTime(self.gainValue, audioCtx.currentTime, self.dezipSeconds);

oscillator.frequency.setValueAtTime(self.oscillatorFrequency, audioCtx.currentTime); // value in hertz
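Note that the third argument to setTargetAtTime (self.dezipSeconds above) is a time constant, not a duration: per the Web Audio spec the value approaches the target exponentially. A quick sketch of what that means in practice (the helper name is mine, not part of the Web Audio API):

```javascript
// setTargetAtTime() follows v(t) = target + (start - target) * exp(-(t - t0) / timeConstant),
// so the transition never strictly "finishes" - it just gets arbitrarily close.
// This helper computes the fraction of the transition still remaining after t seconds.
function remainingFraction(t, timeConstant) {
  return Math.exp(-t / timeConstant);
}

// After ~3 time constants the value is within 5% of the target,
// so a timeConstant of 0.05 gives an effective "attack" of roughly 0.15s.
console.log(remainingFraction(0.15, 0.05));
```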

The full commit is on GitHub.

Checking for Meltdown and Spectre Vulnerabilities

So here we are, all F00F’ed again, this time with Meltdown and Spectre vulnerabilities. There are some good papers, posts and distro-specific help but I wanted to verify that whatever updates we took (for various EC2 Linux AMIs, Centos7 and Ubuntu workstations) actually worked.

A thread on the Gentoo forums led to the proof-of-concept test Am-I-affected-by-Meltdown. A git clone + make is all you need:

git clone https://github.com/raphaelsc/Am-I-affected-by-Meltdown.git
cd Am-I-affected-by-Meltdown
make
./meltdown-checker

Testing it locally on my 4.14.12 laptop gave the following output:

System not affected (take it with a grain of salt though as false negative may be reported for specific environments; Please consider running it once again).

Note: run multiple times!

I’ve seen the following on unpatched / patched-but-not-restarted EC2 Linux AMIs:

0xffffffff812168f0 -> That's SyS_poll
System affected! Please consider upgrading your kernel to one that is patched with KAISER

and a similar 0xffffffff81202560 -> That's SyS_read on an unpatched Centos 7.

If you are building a custom/bespoke kernel, enable the following flag:

CONFIG_PAGE_TABLE_ISOLATION=y

Otherwise your upstream kernel/firmware/microcode updates should have you covered.

UPDATE from Gil Tene: ensure you’re running with PCID (Process-Context Identifiers) support:

grep pcid /proc/cpuinfo
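On kernels 4.15 and later you can also ask the kernel directly for its mitigation status via sysfs; a quick sketch (the fallback message is mine):

```shell
# Report mitigation status straight from the kernel (sysfs interface, 4.15+).
# Falls back to a message on older kernels where these files do not exist.
if ls /sys/devices/system/cpu/vulnerabilities/* >/dev/null 2>&1; then
  grep . /sys/devices/system/cpu/vulnerabilities/*
else
  echo "no vulnerabilities interface (kernel < 4.15)"
fi
```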

Amazon AMI dropping log messages

We set up an sftp upload location on an Amazon Linux AMI EC2 instance (a workaround for the 5GB S3 limit for uploads with grant options), configuring OpenSSH like so:

    Match Group sftp
    ChrootDirectory %h
    ForceCommand internal-sftp

It didn’t work. Nothing in the logs. The following started to appear after bumping the level in /etc/ssh/sshd_config to LogLevel DEBUG3:

rsyslogd-2177: imuxsock begins to drop messages from pid 5287 due to rate-limiting
rsyslogd-2177: imuxsock lost 32 messages from pid 5287 due to rate-limiting

Eh? It turns out the imuxsock input module rate-limits log messages via:

$IMUXSockRateLimitInterval x
$IMUXSockRateLimitBurst y

Configuring /etc/rsyslog.conf and restarting… changed nothing. A coffee and a browse of the rsyslog website later, I noticed there are 3 stable branches: v8, v7 and v5. What does the AMI use?

$ rsyslogd -v
rsyslogd 5.8.10

Wow, old skool. From rsyslog.com:

Use this documentation with care! It describes the heavily outdated version 5, which was actively developed around 2010 and is considered dead by the rsyslog team for many years now.

Applying the old configuration option:

$SystemLogRateLimitInterval 0

…the errors started appearing in the logs. (Underlying issue? Permissions, naturally).

The Villagers Rejoice

Type Alias vs Union Types

Declaring a record-based type alias in Elm gives you a constructor function and can be used to add context to your code. Type aliases based on literals (or primitive types) do not, and that can be confusing if you try to reference that type alias in another module.

Based on some fundamental reading about Algebraic Data Types in FP and, more specifically, how to use Sum Types (aka Union Types) in Elm, I learnt that refactoring a type from

type alias CellPosition =
    (Int, Int)

type alias CellRange = 
    List CellPosition

to

type CellPosition
    = CellPosition ( Int, Int )

type alias CellRange =
    { start : CellPosition
    , end : CellPosition
    }

allows us to add more type safety and context in our code. Before:

sourceCellPosition : CellPosition
sourceCellPosition =
    ( 0, 0 )

someRange : CellRange
someRange = 
    [ ( 5, 2 ), ( 10, 3 ) ] 

after, our code has more clarity:

sourceCellPosition : CellPosition
sourceCellPosition =
    CellPosition ( 0, 0 )

someRange : CellRange
someRange =
    CellRange
        (CellPosition ( 5, 2 ))
        (CellPosition ( 10, 3 ))

See PR11 for the full refactor.

Gradle build dependency report task

If you have spent any time using Gradle you probably already know of the built-in dependencies task, which is very useful when you want to know exactly which dependencies end up on the classpath of your application. For each configuration in your project it prints out the following:

  • list of all top level dependencies including dependencies on subprojects
  • nested list of all transitive dependencies
  • the requested version of each dependency and the one actually used if there are conflicts for that dependency

I was always missing a similar task for the build script classpath, that is, dependencies declared in the buildscript block, dependencies of plugins used, and so on. It turns out that since Gradle 2.10 there is one that does exactly that: it’s called buildEnvironment.

Deferred decoding in Elm - using Decode.value

I’d seen the Decode.Value type and its decoder Decode.value used in a few examples but never really understood them.

That all changed when I needed to defer/delegate some decoding. My app has a number of data adapters defined in JSON in generic form:

            "adapter": {
                "type_": "METRIC",
                "config": {
                    "sourceCell": [1, 0],
                    "targetCell": [2, 1]
                }
            },

These adapters can be passed specific configuration that only makes sense in their context.

Elm is strongly typed, even at its boundaries (which I love in the wild west of front-end land). This means the types used as decode targets must be specific, which seemed at odds with my requirement of generic config.

After some reading around I realised I needed a two-step approach. The first step would give me a concrete adapter type but, crucially, leave its config untouched. The second would let something more contextual interrogate the untouched JSON config.

My initial decode does something like this. Notice that it decodes the config map using a generic Dict of Decode.Value types (untouched, raw JSON).
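A sketch of what that first pass might look like (the type and field names here are assumed for illustration, not the exact ones from my app):

```elm
import Dict exposing (Dict)
import Json.Decode as Decode exposing (Decoder)


-- The config stays as raw, undecoded JSON values keyed by name.
type alias Adapter =
    { type_ : String
    , config : Dict String Decode.Value
    }


adapterDecoder : Decoder Adapter
adapterDecoder =
    Decode.map2 Adapter
        (Decode.field "type_" Decode.string)
        (Decode.field "config" (Decode.dict Decode.value))
```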

This works because the Config type is defined that way.

I can then decode its config later upon use. Win!

Practical JSON-LD Examples

Structured data documentation is not massively intuitive (especially for non-techies like me), and the examples provided by Google or schema.org don’t always cover what I need. For those of us who aren’t yet able to write JSON-LD as if we’d been “@type”: “expert”, “description”: “writing JSON in our mother’s womb”, good examples are damned useful. So worry not microdata nerds and nerdettes, help is at hand!

https://jsonld.com/ has a bunch of real-world practical examples for everyone to rip off, er, take inspiration from. Combined with Google’s structured data testing tool it’s a great resource for creating useful and valid code to boost your technical SEO.

Getting Liquibase logging under control

The number of logging frameworks in the JVM space is a laughing stock - there are at least three “standard” logging APIs alone (SLF4J, Apache Commons Logging and java.util.logging).

What adds insult to injury is that library authors often invent their own logging frameworks. One library that has a custom logging framework is Liquibase and what I personally find an interesting choice is that by default it logs to stderr.

But fear not - if you are a smart person who likes the liberty to choose the underlying logging framework and you route all your logging through SLF4J, you can easily get Liquibase to do the same. Simply add liquibase-slf4j from Matt Bertolini as a runtime dependency to your project and job done:

dependencies {
    runtime "com.mattbertolini:liquibase-slf4j:2.0.0"
}

How to add an organization to a Maven POM in Gradle

Using the DSL as below will produce a nasty object reference in your POM:

developer {
   name 'Anthony Robinson'
   email 'tony@energizedwork.com'
   organization 'Energized Work'
   organizationUrl 'https://www.energizedwork.com/'
}
which generates:

<developer>
  <name>Anthony Robinson</name>
  <email>tony@energizedwork.com</email>
  <organization>org.apache.maven.model.Organization@244a42dc</organization>
  <organizationUrl>https://www.energizedwork.com/</organizationUrl>
</developer>

Assign the organization (note the =) to get the desired result:

developer {
   name 'Anthony Robinson'
   email 'tony@energizedwork.com'
   organization = 'Energized Work'
   organizationUrl 'https://www.energizedwork.com/'
}
which generates:

<developer>
  <name>Anthony Robinson</name>
  <email>tony@energizedwork.com</email>
  <organization>Energized Work</organization>
  <organizationUrl>https://www.energizedwork.com/</organizationUrl>
</developer>

Stopping Terminal/Console Scroll

Try as I might, I find myself in front of a linux terminal just as much nowadays as 10 years ago (and 10 years before that). I’m still tail-ing logs, running commands, less-ing configuration files or using diagnostic tools like iotop or dstat. I usually use the xfce terminal which is comfortable and robust.

Frequently I spot something interesting fly past but if it scrolls off-screen, or there is some kind of progress indicator (e.g. gradle’s new progress bar <======-------> 50% EXECUTING [1h 31m 29s]) it can be annoying to get back to it - especially if the terminal re-anchors at the bottom on every update.

To make life easier:

  • freeze output using Ctrl+s
  • navigate up and down using Shift+PgUp / Shift+PgDn
  • resume output using Ctrl+q

Thanks as ever to The Great Stack Overflow.

Go forth and scroll appropriately.

MSSQL locks primary key index on full table scan

In our project we had a particularly badly designed associative table whose records contained the id in table A, the id in table B and a record id which was also the primary key.

We also had a transaction with two queries:

  • select rows from the associative table with a given table A id value
  • delete rows from the associative table with a given table A id value

When multiple such transactions were running in parallel we observed deadlocks, and one of the transactions would be killed, even though they interacted with different rows in that table. After deadlock analysis, performed as described in my previous post, it turned out that the first select query was obtaining an update lock on the primary key index, which prevented the delete query from executing because the delete would modify that index.

The solution was to get rid of the primary key column, which was entirely unnecessary, and use a composite primary key consisting of the id in table A and the id in table B instead.

Debugging deadlocks in MSSQL

If you ever experience a deadlock in MSSQL and wish to gain insight into why it occurred, there is a relatively easy way to do that. All you need to do is execute this SQL query (linked from this video) against the DB in which the deadlock happened.

The query will return rows that include, among other columns, the deadlock graph XML with the following information:

  • what queries were being executed as part of the deadlocked transactions when the deadlock happened
  • which resources that were part of the deadlock were locked by the deadlocked transactions

There is also another video which explains how to read the deadlock graph xml file.

Integrity checking script and style resources

Many companies choose to host content such as Javascript and CSS files on third-party servers such as CDNs, primarily to improve performance or conserve bandwidth.

While the majority of CDNs should be well-secured, how do you protect against ones that are not? What about a rogue CDN employee injecting malware into your JavaScript files?

You could perform server-side testing of these resources against known hashes, or use a client-side JavaScript solution, but both of these require work to get up and running.

Now, there’s a really easy way: Subresource Integrity.

This involves adding an integrity attribute to script and link elements to specify the expected hash of the file (currently limited to SHA256, SHA384, and SHA512).

Here’s an example:

<script
    src="https://example.com/example-framework.js"
    integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
></script>

Some servers support delivering extra Content Security Policy headers that tell browsers only to load resource files that have the new attribute in place, and that pass the integrity check:

Content-Security-Policy: require-sri-for script;
Content-Security-Policy: require-sri-for style;

See MDN for more information on this.

Current browser support (November 2017) is patchy. For the integrity attribute:

  • Chrome 45
  • Firefox 43
  • Safari 11
  • Opera 32

For the CSP directive:

  • Firefox 49

Discovering HTML - the OUTPUT tag

The folks over at CSS Tricks have an article about the <output> HTML tag. I’d never heard of it until today - but it’s been around for years now!

Its syntax is similar to the <label> HTML tag in that it takes a for attribute to associate it with an <input> HTML tag.

<label for="myInput">Label</label>
<input type="range" id="myInput" min="0" max="100" />
<output for="myInput"></output>

It seems to be designed to provide semantic markup for the likes of screen readers. You still need to wire everything up with Javascript to get something useful happening when you interact with the input though.

Chrome testing with slow speed network

In Google Chrome developer tools you can adjust the network speed when you are testing page loads.

It is under the Network tab on the far right.

Change from Online to Slow 3G to see how people will experience your site on a mobile device with a slow 3G connection. You can also set your own custom speed parameters if you want to test with other settings.

Window resizing in Elm

So you want to render some UI component based upon the browser window dimensions? You’ll need to bind to a Window.resizes subscription to update your model & view.

Try this out live in Ellie or read below for details.

$ elm-package install elm-lang/window

import Html exposing (Html)
import Window

-- MODEL

type alias Model =
  { height : Int
  , width : Int
  }

-- VIEW

view: Model -> Html Msg
view model =
  let
    str =
      if model.height == 0 && model.width == 0 then
        "Resize the window"
      else
        toString model
  in
    Html.text str

-- UPDATE

type Msg
  = ResizeWindow Int Int

update: Msg -> Model -> (Model, Cmd Msg)
update msg model =
  case msg of
    ResizeWindow h w ->
      ({model | height = h, width = w} , Cmd.none)


-- SUBSCRIPTIONS
subscriptions: Model -> Sub Msg
subscriptions model =
  Window.resizes (\{height, width} -> ResizeWindow height width)

JRuby SystemCallError running Compass under Docker

We have a project that uses the Gradle Compass plugin to convert Stylus to CSS as part of the build process. Locally all was well but on the Docker CI build runner container we were seeing a SystemCallError coming from staleness_checker.rb. It turns out the error was being generated from the following call:

    css_mtime = File.mtime(css_file)

Eh?

JRuby uses jnr-ffi to load native libraries for various functions - including stat-ing files. Cooking up a (somewhat) simpler test we can see the underlying issue:

java -Djruby.native.verbose=true -jar ~/.gradle/caches/modules-2/files-2.1/org.jruby/jruby-complete/9.1.13.0/8903bf42272062e87a7cbc1d98919e0729a9939f/jruby-complete-9.1.13.0.jar  -e "File.stat 'missing file'"
Failed to load native POSIX impl; falling back on Java impl. Stacktrace follows.
java.lang.UnsatisfiedLinkError: Error loading shared library libcrypt.so.1: No such file or directory
    at jnr.ffi.provider.jffi.NativeLibrary.loadNativeLibraries(NativeLibrary.java:87)

Given that jruby-complete bundles jnr-ffi, a simpler solution was to symlink libcrypt so that it could be picked up rather than mess around trying different versions:

ln -s /usr/lib/libcrypto.so.1.0.0 /lib/libcrypt.so.1

Re-running our test gives the correct output:

java -Djruby.native.verbose=true -jar ~/.gradle/caches/modules-2/files-2.1/org.jruby/jruby-complete/9.1.13.0/8903bf42272062e87a7cbc1d98919e0729a9939f/jruby-complete-9.1.13.0.jar  -e "File.stat 'missing file'"
Successfully loaded native POSIX impl.
Errno::ENOENT: No such file or directory - missing file
    stat at org/jruby/RubyFile.java:938
  <main> at -e:1

The build passes and the villagers rejoice.

Javascript and ISO 8601 dates

And now for a scare before bedtime….

We were transferring an ISO 8601 date (not a date-time) across the wire to an Ember single-page application. Given we were using the minimal format (e.g. 1970-01-01, with no timezone) we had assumed it would be deserialized in the local timezone. We were wrong. According to the ever-helpful MDN web docs:

Support for ISO 8601 formats differs in that date-only strings (e.g. “1970-01-01”) are treated as UTC, not local.

Looking at the Ember DS docs on DS.DateTransform we can see from date.js that in our case the value is being constructed using return new Date(serialized);.

The solution was to override this in our project (app/transforms/date.js) and build up the desired local date like so:

      let utcDate = new Date(serialized);
      return new Date(utcDate.getUTCFullYear(), utcDate.getUTCMonth(), utcDate.getUTCDate());
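The pitfall and the fix are easy to demonstrate outside Ember (a sketch of the same logic; variable names are mine):

```javascript
// A date-only ISO 8601 string is parsed as *UTC* midnight, so in any timezone
// west of UTC its local calendar date would read as 31 Dec 1969.
const serialized = '1970-01-01';
const utcDate = new Date(serialized);

// Rebuilding from the UTC components pins the intended calendar date locally,
// regardless of the machine's timezone.
const localDate = new Date(utcDate.getUTCFullYear(), utcDate.getUTCMonth(), utcDate.getUTCDate());

console.log(localDate.getFullYear(), localDate.getMonth() + 1, localDate.getDate()); // → 1970 1 1
```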

Indexed Lists in Elm

Lists in Elm are not indexed, unlike Arrays (lists are linked lists under the hood).

So, say you need to look up a colour for a UI line chart widget based upon the index of the data row you’re rendering:

lineColours : Array String
lineColours =
    Array.fromList ([ "red", "blue", "green", "purple", "orange" ])


getLineColour : Int -> String
getLineColour index =
    Array.get index lineColours |> Maybe.withDefault "black"

You can transform your data list into a list of indexed Tuples that you can map over, using the index to look up a colour:

indexedData =
    Array.toIndexedList (Array.fromList [("JAN", 100), ("FEB", 200)])

renderLines =
    List.map
        (\( index, dataTuple ) ->
            generateLineData dataTuple
                |> renderLine (getLineColour (index))
        )
        indexedData

Full src on GitHub and more about List vs Array.

Docker Storage Engine on CentOS7

We’re using GitLab with a dedicated runner for a project which has numerous build pipelines for various UI, API, data and infrastructure services. We decided to try a dedicated box to see if we could reduce the end-to-end duration by providing guaranteed resources. However we had a number of kernel faults using Docker under load on CentOS7, first using btrfs and then running on ext4.

It turns out that Docker has a preferred storage driver list and we were ending up with devicemapper. You can check which storage driver Docker is using via the docker system info command. To set the storage driver on CentOS7, create /etc/docker/daemon.json (bloody systemd) with the following contents:

{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

Bounce the service with service docker restart and voilà:

$ docker system info 2>&1 | grep -A3 Storage  
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true

So far we’re seeing much better pipeline throughput than devicemapper and no halts (at least not yet).

Remove/Delete Azure Scaleset extensions

If the updatePolicy for a scaleset is set to manual you need to update the scaleset and its instances in separate commands. For example, to completely remove an extension from a scaleset, delete the extension (which changes the scaleset model) then update the instances.

az vmss extension delete --name LinuxDiagnostic --resource-group myRG --vmss-name myVMSS
az vmss update-instances --resource-group myRG --name myVMSS --instance-ids *

When an extension has been removed from a scaleset but the instances haven’t been updated, you may get the following error message from the Azure CLI if you try to add the extension again:

“Operation 'PUT' is not allowed on VM extension 'LinuxDiagnostic' since it is marked for deletion. You can only retry the Delete operation (or wait for an ongoing one to complete).”

Quickly reset Postgres

I’m using the standard postgres Docker container for my tests, and I’ve found it’s quicker to reset the data than to spin up a new container and initialise the schema each time. The following will reset the schema to its state immediately post-initialisation after a test run, without having to drop the db - so any existing connections do not have to be killed.

Initialise the db - liquibase, flyway, db-migrate, whatever. Then:

docker exec <container_id_or_name> \
  pg_dump \
  --username=postgres \
  --format=custom \
  --file=/tmp/base_state \
  <db_name>

After a test:

docker exec <container_id_or_name> \
  psql \
  --username=postgres \
  --dbname=<db_name> \
  --command='drop owned by <db_owner>'
docker exec <container_id_or_name> \
  pg_restore \
  --username=postgres \
  --dbname=<db_name> \
  /tmp/base_state

If you’re on the JVM and using Testcontainers to manage the postgres container then db_name and db_owner will both be test by default and the equivalent is:

def container = new PostgreSQLContainer()
container.start()
// do initialisation - flyway, liquibase etc.
container.execInContainer(
  "pg_dump",
  "--username=postgres",
  "--format=custom",
  "--file=/tmp/base_state", "test"
)
// after each test
container.execInContainer(
  "psql",
  "--username=postgres",
  "--dbname=test",
  "--command=drop owned by test"
)
container.execInContainer(
  "pg_restore",
  "--username=postgres",
  "--dbname=test",
  "/tmp/base_state"
)

Low shared memory in Docker causes random issues

With Docker containers you may run into spurious failures if you don’t increase the shared memory size to something larger than the default 64MB (e.g. docker run --shm-size=1g …). I have burned several hours debugging weird failures on more than one occasion, only to find that increasing the shared memory size was the solution. Concrete examples:

  • 5-10% of a Geb test suite failed unpredictably, mostly the more involved tests (I remember having to set shm-size on both CircleCI and GitLab)
  • running ZoneMinder in a container with USB cameras attached to the host and mapped into the container: only the first camera worked reliably until I increased shm

Terraform External Data to Generate Azure SAS

Sometimes when a Terraform resource is being executed you may need to run a sub-command to provide an input. The resource below creates a VM extension for diagnostics:

resource "azurerm_virtual_machine_extension" "setup_manager_diagnostics" {
  name                       = "LinuxDiagnostic"
  resource_group_name        = "C-137"
  virtual_machine_name       = "Cronenberg"
  publisher                  = "Microsoft.Azure.Diagnostics"
  type                       = "LinuxDiagnostic"
  type_handler_version       = "3.0"
  auto_upgrade_minor_version = "false"
  settings                  = <<EOF
  {
    "StorageAccount": "${azurerm_storage_account.my_storage_account.name}",
    "ladCfg": "{...}"
  }
EOF
  protected_settings = <<EOF
  {
    "storageAccountName": "${azurerm_storage_account.swarm_storage_account.name}",
    "storageAccountSasToken": "I want to generate a SAS here"
  }
EOF
}

The value for storageAccountSasToken can be generated during execution by referencing an External Data source.

data "external" "diagnostics-generate-sas" {
  program = ["/bin/bash", "generate_sas.sh", "${azurerm_storage_account.my_storage_account.name}"]
}
#generate_sas.sh
#!/usr/bin/env bash

set -euo pipefail

STORAGE_ACCOUNT_NAME=$1
STORAGE_ACCOUNT_TOKEN=$(az storage account generate-sas --account-name $STORAGE_ACCOUNT_NAME --expiry 2020-12-31T23:59Z --permissions wlacu --resource-types co --services bt -o tsv)

echo "{\"storageAccountSasToken\": \"$STORAGE_ACCOUNT_TOKEN\"}"

You can then reference the output of this script like any other resource in Terraform:

  protected_settings = <<EOF
  {
    "storageAccountName": "${azurerm_storage_account.swarm_storage_account.name}",
    "storageAccountSasToken": "${data.external.diagnostics-generate-sas.result.storageAccountSasToken}"
  }
EOF

Arrays and loops in Stylus

I recently needed to colourise the background of a chart. The design showed 5 colours - but the number of rows on the chart was dynamic. For this I settled on using the :nth-child CSS pseudo-class (see the MDN definition for :nth-child()) and ended up with something like this:

.progress-bar-chart:nth-child(5n-4) { background-color: red; }
.progress-bar-chart:nth-child(5n-3) { background-color: green; }
.progress-bar-chart:nth-child(5n-2) { background-color: blue; }
.progress-bar-chart:nth-child(5n-1) { background-color: gold; }
.progress-bar-chart:nth-child(5n) { background-color: wheat; }

Stylus (like LESS and SASS) is a dynamic style sheet syntax that compiles to CSS. It’s something we use across a lot of projects at Energized Work and from a front-end perspective is a huge time saver. I wanted to explore what options within Stylus could be used to represent the same output. I settled on the following:

chartColours = (wheat gold blue green red)
.progress-bar-chart
  for num in (4..0)
    &:nth-child(5n-{num})
      background-color: chartColours[(num)]

There are some hard-coded values in the code above that are less than ideal - it would be nice to just have a simple array of colours and have the rest be dynamic (something for another day).

HTTPS-Only in Heroku

So the lovely people at Heroku can now provide automated certificate management using LetsEncrypt, the free antidote to the scam that is the internet SSL certificate authority business.

However, just enabling certificate management doesn’t mean that plain ol’ http:// requests won’t make it through to your app. You’re going to have to do the protocol redirect yourself (in your app, not actually you). Since Heroku provides SSL termination via its mystery router layer, I was at a loss to understand what I was supposed to use to determine whether the user was coming in over http or https.

Thankfully they provide an X-Forwarded-Proto HTTP header that contains the original request protocol (e.g. ‘http’ or ‘https’). Mystery solved.
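A minimal sketch of the redirect as hypothetical Express-style middleware (the function name is mine; not from the Heroku docs):

```javascript
// Redirect plain-HTTP requests to HTTPS based on the X-Forwarded-Proto header
// set by Heroku's router at the SSL termination layer.
function requireHttps(req, res, next) {
  if (req.headers['x-forwarded-proto'] === 'https') {
    return next(); // already secure - carry on
  }
  // Permanent redirect to the same host and path over HTTPS.
  res.redirect(301, 'https://' + req.headers.host + req.url);
}
```

With Express you would register this before your routes, e.g. app.use(requireHttps).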

Many thanks to Jake Trent for this one.

Continent Codes

We’ve been working with geographical information recently. It’s common to send around ISO-3166 2-letter country codes (e.g. ‘GB’, ‘US’, ‘DE’) but did you know there are also 2-letter continent codes?

  • AF Africa
  • AN Antarctica
  • AS Asia
  • OC Australia (Oceania)
  • EU Europe
  • NA North America
  • SA South America
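The list above is small enough to keep as a simple lookup table; a sketch (the helper and fallback value are mine, not from any standard library):

```javascript
// Map the two-letter continent codes listed above to their names.
const CONTINENT_NAMES = {
  AF: 'Africa',
  AN: 'Antarctica',
  AS: 'Asia',
  OC: 'Australia (Oceania)',
  EU: 'Europe',
  NA: 'North America',
  SA: 'South America',
};

function continentName(code) {
  return CONTINENT_NAMES[code.toUpperCase()] || 'Unknown';
}

console.log(continentName('eu')); // → Europe
```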

The great thing about standards is that there are so many to choose from.

~ Anon

Styling placeholder text: some gotchas

Following Jeff’s post, here are some styling gotchas.

With this markup:

<link href="https://fonts.googleapis.com/css?family=Oswald:400" rel="stylesheet">
<input type="text" placeholder="Input placeholder">
<textarea placeholder="Textarea placeholder"></textarea>

and this CSS:

/* WebKit / standards */
:placeholder-shown,
/* Firefox */
::-moz-placeholder,
/* IE */
:-ms-input-placeholder,
/* Edge */
::-ms-input-placeholder {
    font-family: Oswald, sans-serif;
    font-size: 30px;
    color: green;
    text-transform: uppercase;
}

you’d expect styled placeholders in Chrome, Safari, Firefox, IE, and Edge. You’d be mistaken!

You have to separate the styles:

/*WebKit / standards*/
:placeholder-shown {
    font-family: Oswald, sans-serif;
    font-size: 30px;
    color: green;
    text-transform: uppercase;
}

/*Firefox*/
::-moz-placeholder {
    font-family: Oswald, sans-serif;
    font-size: 30px;
    color: green;
    text-transform: uppercase;
}

/*IE*/
:-ms-input-placeholder {
    font-family: Oswald, sans-serif;
    font-size: 30px;
    color: green;
    text-transform: uppercase;
}

/*Edge*/
::-ms-input-placeholder {
    font-family: Oswald, sans-serif;
    font-size: 30px;
    color: green;
    text-transform: uppercase;
}

Sadly, not all of the styles used above work everywhere:

  • Firefox honours everything, but you’ll need to set opacity:1 to get a matching colour
  • IE 10 & 11 honour everything
  • Chrome and Safari (and presumably other WebKit browsers) don’t honour the color style, unless you add a ::placeholder selector with another color rule
  • Edge 14 & 15 don’t honour either the font-family or font-size styles

See CSS Tricks for more.

Chaining Elm decoders

So you know your JSON field is a string but say you want to validate it and turn it into a concrete Elm type. Instead of simply using Decode.string we write our own decoder that uses Decode.andThen.

type Renderer
    = TABLE
    | LINE

baseWidgetDecoder : Decoder (a -> Widget a)
baseWidgetDecoder =
    decode Widget
        |> required "name" Decode.string
        |> required "dataSources" (Decode.list DataSource.decoder)
        |> required "adapter" adapterDecoder
        |> required "renderer" rendererDecoder

rendererDecoder : Decoder Renderer
rendererDecoder =
    Decode.string
        |> Decode.andThen
            (\str ->
                case str of
                    "TABLE" ->
                        Decode.succeed TABLE

                    "LINE" ->
                        Decode.succeed LINE

                    somethingElse ->
                        Decode.fail <| "Unknown renderer: " ++ somethingElse
            )

Json-Decode docs

Looping in Ansible

Looping over a configured set of properties in Ansible can be achieved by using the with_dict loop directive.

my-inventory.yml:

packages: 
  my_first_package: 1.0.1
  my_second_package: 1.0.2

my-task.yml:

---
- name: install packages
  become: yes
  shell: >
    ./install_package.sh
      --name {{ item.key }}
      --version {{ item.value }}
  with_dict: "{{ packages }}"

Azure Monitor Alerts require a unique Name

When using the Azure CLI to create alerts for our VMs and scalesets, the following error was returned: “Can not update target resource id during update”.

for target in ${targets[@]}; do
    az monitor alert create \
        --condition "Percentage CPU > 80 avg 5m" \
        --name "High CPU" \
        --resource-group $RG \
        --target $target
done

This is because the “name” of the alert has to be unique - the error message could be much clearer. Appending the target name to the alert name was a simple workaround.

[update] the “name” needs to be unique within a resource group.

for target in ${targets[@]}; do
    az monitor alert create \
        --condition "Percentage CPU > 80 avg 5m" \
        --name "High CPU | $(basename $target)" \
        --resource-group $RG \
        --target $target
done

Transaction isolation levels in Azure SQL Server

We have recently discovered a nasty bug in our application where we would send back an HTTP response before committing the transaction that spans the whole request. This would manifest itself in subsequent requests not seeing changes made in the DB by previous requests for a short period of time.

When I went ahead to write a test before putting a fix in place, I could not by any means reproduce the issue against a locally running dockerized SQL Server instance. After a while I realised that this was due to a difference in behaviour at the TRANSACTION_READ_COMMITTED isolation level between the local instance and the one in Azure. Locally, this level seems to block any reads from a table modified by a previous, uncommitted transaction, whereas such reads in Azure SQL Server are not blocked and return a view of the data from before the uncommitted transaction (Azure SQL Database enables READ_COMMITTED_SNAPSHOT by default, which is why readers see a snapshot rather than blocking). Bumping the transaction isolation level to TRANSACTION_REPEATABLE_READ when connecting to Azure SQL seems to bring the behaviour in line with that of the dockerized instance under the TRANSACTION_READ_COMMITTED level.
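
In JDBC terms, the fix is a one-liner on the connection. A minimal sketch (the helper class and its name are made up for illustration; wire it into wherever your application obtains its Azure SQL connections):

```java
import java.sql.Connection;
import java.sql.SQLException;

// Hypothetical helper: raise the isolation level on connections to Azure SQL
// so that reads block on rows touched by uncommitted transactions, matching
// the dockerized instance's behaviour under READ_COMMITTED.
public final class AzureSqlConnections {
    private AzureSqlConnections() {}

    public static void configure(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
    }
}
```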

Argument error when ETS table already exists

Not very sporting! It’s pretty obvious here, but in our case the scenario occurred in a way that made it far less obvious that the ETS table already existed :/

iex(2)> :ets.new(:session_msg_ids, [:named_table, :public])
:session_msg_ids
iex(3)> :ets.new(:session_msg_ids, [:named_table, :public])
** (ArgumentError) argument error
    (stdlib) :ets.new(:session_msg_ids, [:named_table, :public])

Shouts to @benwilson512 on the Elixir Slack channel for pointing that one out.

Testing code that calls System.exit()

We’ve got an end-to-end test that verifies that our Dropwizard application successfully starts up by calling its main method. We want to check that our Guice bindings are configured properly and the app will actually start up when deployed. The problem is that Dropwizard calls System.exit(1) when there is an exception while starting up the app. This causes the test JVM to exit and our Gradle build to fail with a cryptic error like:

Process ‘Gradle Test Executor 1’ finished with non-zero exit value 1

Turns out that thanks to the very useful ExpectedSystemExit JUnit rule from the System Rules project there is a quick fix for the problem:

@Rule
public final ExpectedSystemExit exit = ExpectedSystemExit.none();

After adding the above, your test will fail if System.exit() is called, instead of the test JVM being killed.

Traditional scroll position behaviour with Ember

By default Ember preserves the scroll position across page transitions. This is desirable when building single page applications with nested views; however, this behaviour seems strange when building a ‘traditional’ website as it means pages might (and did) load scrolled halfway down 🤔.

Luckily the community comes to the rescue. There is an Ember addon called ember-router-scroll that provides that traditional website feeling that I was after.

Placeholder text for HTML input fields

The idea of using a <label> element to describe the purpose of an HTML input is (hopefully) common practice - yet I experience many situations where this can be enhanced.

The placeholder attribute allows some extra text to be displayed inside the input element while the user hasn’t entered any content (and has been supported for ages):

<input type="text" placeholder="Your email address..." class="field" />

The example above will show the text Your email address... inside the input element until the user enters some content into the input.

Need to style the placeholder text? The pseudo-class syntax varies depending on the browser you are targeting (alas this is not yet standardised). The following shows how you can target the placeholder text (referring to the HTML used above) on many of the “modern” browsers (note the single colon used when targeting Internet Explorer):

<style>
.field::-webkit-input-placeholder { /* Chrome/Safari */
  color: gold;
}
.field::-moz-placeholder { /* Firefox */
  color: gold;
}
.field:-ms-input-placeholder { /* IE */
  color: gold;
}
</style>

Read more on applying CSS to the placeholder attribute over at CSS Tricks

IntelliJ: Running a test case until failure

An industry best practice for testing is to use randomised data in your tests. Occasionally your random data throws up a test that fails nondeterministically on CI. In IntelliJ, once you have run a test, you can Edit Configurations… on it and change Repeat from Once to Until Failure. You might also have to limit your memory settings in IntelliJ to replicate the problem, as your computer is likely to be a lot faster than the CI’s.

Validating Maps in JSON Schema

So I had a map in a JSON payload and I wanted to write a JSON schema to validate it because, y’know, keeping different things working together. I discovered the patternProperties validation keyword which can match properties based on a regex (ok, ok, so shoot me).

"properties": {
    "myLovelyMap": {
      "type": "object",
      "patternProperties": {
        "^my-leet-map-key$": {
          "enum": [ 
             "ALLOWED_VALUE_1",
             "ALLOWED_VALUE_2"
           ]
        }
      },
      "additionalProperties": false
    }
  }
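
To make the semantics concrete, here is a rough sketch (not a real validator library, just an illustration; validateMap is a made-up name) of how patternProperties combined with "additionalProperties": false behaves on an object:

```javascript
// Illustration of "patternProperties" + "additionalProperties": false:
// every key must match at least one pattern, and its value must appear in
// the enum attached to a matching pattern.
function validateMap(obj, patternProps) {
  return Object.entries(obj).every(([key, value]) => {
    const matching = Object.entries(patternProps)
      .filter(([pattern]) => new RegExp(pattern).test(key))
      .map(([, allowedValues]) => allowedValues);
    if (matching.length === 0) return false; // no pattern matched the key
    return matching.some((allowed) => allowed.includes(value));
  });
}

const patterns = { "^my-leet-map-key$": ["ALLOWED_VALUE_1", "ALLOWED_VALUE_2"] };
validateMap({ "my-leet-map-key": "ALLOWED_VALUE_1" }, patterns); // → true
validateMap({ "some-other-key": "ALLOWED_VALUE_1" }, patterns);  // → false
```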

Number HTML inputs and maxlength

You can use the maxlength attribute in a normal HTML input to restrict the number of characters that a user can type into the element. If you decide to switch the input to be of type number (maybe to ensure a numeric keyboard shows on mobile devices), then the maxlength attribute is ignored completely.

<input type="text" maxlength="10"/>

restricts the input to 10 characters.

<input type="number" maxlength="10"/>

completely ignores the 10 character restriction.
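
One workaround is to enforce the cap yourself from an input listener. A sketch (clampLength and the selector are made up for illustration):

```javascript
// Hypothetical workaround: since maxlength is ignored on type="number",
// trim the value ourselves whenever it changes.
function clampLength(value, max) {
  return String(value).slice(0, max);
}

// In the browser, the wiring might look roughly like this:
// const input = document.querySelector('#quantity');
// input.addEventListener('input', () => {
//   input.value = clampLength(input.value, 10);
// });
```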

The Match Operator

So the = operator in Elixir is not the assignment operator, but rather the match operator. It’s pretty funky. You can use it to ‘destructure’ information really easily, e.g.

iex> {a, b, c} = {:hello, "world", 42}
{:hello, "world", 42}

and match on specific values, e.g.

iex> {:ok, result} = {:ok, 13}
{:ok, 13}

Very cool. You should check it out some.