Stephen A. Fuqua (saf)

a Bahá'í, software engineer, and nature lover in Austin, Texas, USA

On July 13, 2022, a new statue was placed in the U.S. Capitol: Mary McLeod Bethune. Reading the news, I knew that I had heard this name - yet knew nothing about her. Who was this woman, the first African American to be so honored in the National Statuary Hall?

Born into a large family on her parents’ farm in 1875 (she was the fifteenth child), she was taught early to look to the Bible for guidance and comfort, despite the family’s illiteracy. With help from a benefactress, she enrolled in school at the age of ten and eventually went on to collegiate study. Oft quoted as saying, “[t]he whole world opened up to me when I learned to read,” she went on to live an exceptional life of courage and action on behalf of all people, most particularly her fellow African Americans and especially women of color.

Mary McLeod Bethune

By 1904 she was in Daytona Beach, Florida, where she established a school and a hospital. That school eventually developed into today’s Bethune-Cookman University. Her mission reached beyond local concerns, as reflected in her voluminous writings published in papers and journals across the country, and in her civic engagement. She served in executive capacities with the National Urban League, the NAACP, and the National Council of Negro Women. An adviser to multiple U.S. Presidents, she was apparently the only woman of color present at the founding of the United Nations in 1945.

What drove this powerful woman? What gave her the strength to face down the Klan, to push for equality of rights and dignity within the Church halls and the halls of power, and to declare to her fellow Black people in the U.S. that “we, too, are Americans,” encouraging them to stand “shoulder to shoulder with all other groups of Americans, in defending the ideals of this country”? (1) In her own words:

“Love, not hate has been the fountain of my fullness. In the streams of love that spring up within me, I have built my relationships with all mankind. When hate has been projected toward me, I have known that the person who extended it lacked spiritual understanding. I have had great pity and compassion for them. Our Heavenly Father pitieth each one of us when we fail to understand. Jesus said of those who crucified Him,

‘Father, forgive them, For they know not what they do.’

Because I have not given hate in return for hate, and because of my fellow-feeling for those who do not understand, I have been able to overcome hatred and gain the confidence and affection of people. Faith and love have been the most glorious and victorious defense in this “warfare” of life, and it has been my privilege to use them, and make them substantial advocates of my cause, as I press toward my goals whether they be spiritual or material ones.” (2)

Faith and love refilled her as she continually emptied herself, just as Shoghi Effendi guided the Bahá‘ís to do when he wrote, “We must be like the fountain or spring that is continually emptying itself of all that it has and is continually being refilled from an invisible source. To be continually giving out for the good of our fellows undeterred by fear of poverty and reliant on the unfailing bounty of the Source of all wealth and all good—this is the secret of right living.” (3)

Her statue bears the inscription, “Invest in the human soul, who knows, it may be a diamond in the rough.” It is so remarkably like Bahá‘u’lláh’s pronouncement, “Regard man as a mine rich in gems of inestimable value. Education can, alone, cause it to reveal its treasures, and enable mankind to benefit therefrom.” (4) Her own drive to be educated revealed her gems, it is clear, and enabled humanity to benefit therefrom.

Bethune was a devout Christian, a seminarian whose words and deeds testified to a belief in the vitality and importance of a Christ-centered life. While I have no inkling of her feelings about the Bahá‘í Faith, she was certainly aware of it (see advertisement below). To my way of thinking, her life serves as a wonderful example of what it means to live a “true Bahá‘í life”: pairing worship and service, championing the cause of the oneness of humanity, contributing to the prevalent discourses of society and to the social and economic development of the community, raising up the voices of women.

Advertisement from 1929


Citations

  1. Bethune, Mary McLeod. “We, Too, Are Americans!”, Pittsburgh Courier, January 17, 1941. p. 8.
  2. Bethune, Mary McLeod. “Mary McLeod Bethune”, American Spiritual Autobiographies: Fifteen Self-Portraits, edited by Louis Finkelstein. (New York, NY: Harper & Brothers, 1948). pp. 182-190.
  3. Rabbani, Shoghi Effendi. Directives from the Guardian. (India/Hawaii, 1973 edition). p. 87.
  4. Bahá‘u’lláh. Gleanings From the Writings of Bahá‘u’lláh. (US Bahá‘í Publishing Trust, 1990 pocket-size edition). p. 346.

Photo by Addison Scurlock. Courtesy of Smithsonian Institution, National Museum of American History, Archives Center. Accessed courtesy of Flickr

Newspaper advertisement: The Brooklyn Daily Eagle, Brooklyn, New York. 22 Oct 1929. p. 33.

Bibliography

Other works consulted:

  • Bethune, Mary McLeod. “Stepping Aside . . . at Seventy-four”, Women United, October 1949, pp. 14-15.
  • Bethune, Mary McLeod. “God Leads the Way, Mary”, Christian Century, Vol. 69, 23 July 1952. pp. 851-52.
  • Richards, Emma. “Historic first as Mary McLeod Bethune statue installed at the U.S. Capitol”, University of Florida News. July 13, 2022. https://news.ufl.edu/2022/07/mary-mcleod-bethune/, accessed July 16, 2022.
  • Michals, Debra. “Mary McLeod Bethune”, National Women’s History Museum. 2015. www.womenshistory.org/education-resources/biographies/mary-mcleod-bethune, accessed July 16, 2022.

The Ed-Fi Tech Congress in Phoenix, in April 2018, was a sink-or-swim moment for me, as I had just started working for the Ed-Fi Alliance. Among the first people I met was a representative from one of the big technology companies. The conversation quickly turned to the question of how to deal with data when the vendor would not send it directly into the Ed-Fi ODS/API. He asked me, “Why not just put it in a data lake?” To which I had no reply. Nearly four years later, at last I can give a reasonable reply.

Continue reading on www.ed-fi.org

Diagram of extract from Ed-Fi API to Data Lake

Prompted by a class I’m taking, I decided to try running Python from Windows Subsystem for Linux (WSL; actually, WSL2 to be specific). Installing Python in Ubuntu on Windows was relatively easy, though I did run into a couple of little problems with running poetry. Bigger challenge: running graphical user interfaces (GUIs) from WSL. Here are some quick notes from my experience.

Screenshot showing that I’m running Windows 10, with a small GUI window opened from both PowerShell and from Bash using the same Python script.

First Things: Installing Python in Ubuntu

Assuming you are already running Ubuntu in WSL, then the following commands will help install Python (all run from your Ubuntu/bash prompt, of course):

sudo apt update
sudo apt -y upgrade
sudo apt install python3 python3-pip

This will make the python3 command available in your path. I’m a fan of using Poetry instead of Pip for dependency management. It can be installed in the normal Poetry way.

I have a thing about typing python instead of python3, so I created an alias in Bash: alias python=python3. However, Poetry does not execute commands through Bash, so the command failed with an interesting error message [Errno 2] No such file or directory: b'/usr/share/PowerShell/python'. Wonder why it looked in a PowerShell directory?

Not surprisingly, there are others who like to type one character less:

sudo apt install python-is-python3

Now the python command works as desired, from Bash and from Poetry.

Enabling a Graphical User Interface

Executing a Python-based GUI app from WSL seems… a bit odd… but let’s run with it, shall we? It is a class requirement, after all. We will need the tk toolkit. If I understand correctly, it is included with Python 3.9+, but I have 3.8. Most likely I could find a way to upgrade to 3.9, but I don’t have a compelling reason yet, and the following command will install the tk support:

sudo apt install python3-tk

Next: how does WSL open a GUI window in Windows 10? You need an X-Windows compatible server for that. There are several proprietary and open source options available. I chose to go with the open source VcXsrv, which I installed in Windows (not WSL) via Chocolatey: choco install vcxsrv.

Once installed, you need to run it via the XLaunch command, which will be available in the Windows start menu. This Stack Overflow post has good suggestions for launching it correctly; I had to read through the first few answers to get the steps right. The application prompts you for configuration. Key values to use:

  • First dialog: multiple windows, display number 0
  • Second dialog: Start client
  • Third: optional clipboard, native OpenGL yes (sounds good anyway), and disable access control (unless you really want to go about configuring a user). For the OpenGL support, you will need to set an environment variable in Bash before trying to launch an application: export LIBGL_ALWAYS_INDIRECT=1.

The answers mention opening the Windows Defender firewall to VcXsrv. The way they do this in the Stack Overflow post might be dangerous, especially in combination with disabling access control. A potentially safer* way is to simply allow WSL2’s network interface to access your local server. That means you are not opening your firewall to the Internet. This can be done with the following command, run from PowerShell in administrative mode:

New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -InterfaceAlias "vEthernet (WSL)" -Action Allow

* I have not been in the business of writing firewall rules since the early 2000’s, so while I think this is correct, I might be mistaken. Please think through your security posture carefully before following this path.

Finally, back at the Bash prompt, you need to set the DISPLAY environment variable so that the X-Windows commands will be redirected to Windows. This variable needs to address the Windows host by server name or IP address. Typically one might think of using “localhost”. However, WSL2 runs in an isolated network inside of Windows, so “localhost” refers to the WSL2 instance itself, not to Windows. Instead you must use the Windows host’s IP address on that virtual network. Conveniently, WSL2 writes that address into /etc/resolv.conf as the nameserver, and the following command reads it into the DISPLAY environment variable. The zero at the end assumes that VcXsrv was configured to run on display 0:

export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0
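For the curious, here is what that awk one-liner is doing, sketched in Python against a sample resolv.conf (the IP address shown is made up for illustration):

```python
def nameserver_display(resolv_conf_text: str, display_num: int = 0) -> str:
    """Mimic: awk '/nameserver / {print $2; exit}' /etc/resolv.conf, plus ':0'."""
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        # First matching line: the second field is the Windows host's IP address
        if len(parts) >= 2 and parts[0] == "nameserver":
            return f"{parts[1]}:{display_num}"
    raise ValueError("no nameserver entry found")

sample = """# This file was automatically generated by WSL
nameserver 172.22.96.1
"""
print(nameserver_display(sample))  # -> 172.22.96.1:0
```

In other words: find the first nameserver line, take its second field, and append the display number.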

Now poetry run python -m tkinter should launch a little demonstration.

And for a more interesting demonstration, generating the windows shown in the image above:

from platform import uname
from tkinter import Tk, ttk

root = Tk()
frm = ttk.Frame(root, padding=10)
frm.grid()

ttk.Label(frm, text=f"This window is running from {uname().system}").grid(column=0, row=0)
ttk.Button(frm, text="Quit", command=root.destroy).grid(column=0, row=1)
root.mainloop()

Setting Environment Variables on Startup

Two environment variables were created in this process. It would be tedious to come back to this post and recopy them every time a new Ubuntu/Bash shell is opened. Linux has a simple way of dealing with this: the .profile file contains instructions that run every time you log in at a command prompt. There is also a .bashrc file, which runs whenever you start a new Bash session (there are other shells that you could switch to, though Bash is the most popular). Edit either one.

You will need to use a text editor such as nano, vim, or code (if you don’t have it, typing code will automatically start the install of Visual Studio Code). All are excellent editors. Those who are new to Linux will probably feel more comfortable starting with Visual Studio Code; I use it all the time, though I also use the command line frequently when I only need to edit one file. Knowing how to use nano or vim is a wonderful skill to develop: of the two, nano is easier to learn, while vim is more powerful. Whichever editor you choose, open the file like so (substituting your preferred editor for code): code ~/.profile. The ~ instructs the operating system to look for the file in your home directory.

Once you figure out which editor to use, just add the following two lines at the bottom of the .profile file:

export LIBGL_ALWAYS_INDIRECT=1
export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0

Save that. Once saved, you can immediately invoke it, without starting a new window, with this command: source ~/.profile.

Screenshot of today's InterfaithNews.Net

“Reacting to religious fanaticism and the challenges of advancing and sustaining a more equitable civilization, a global interfaith movement has sprung from the grassroots of religion and spirituality. InterfaithNews.Net (INN) seeks to chronicle this movement by focusing primarily on positive interfaith and religious news, events, and resources.”

That was the mission of a little newsletter and website that Joel Beversluis and I started, with support from the North American Interfaith Network (NAIN) and the United Religions Initiative (URI), in 2002. Would that I could remember where he and I first met; perhaps at the URI North America summit of 2001 in Salt Lake City. Regardless, our time of collaboration was all too short.

By winter of 2002, he dropped out of contact; I was only to learn why when I heard of his passing in March of 2003: cancer. He left behind a grieving family, friends, and countless acquaintances. Those of us who knew him from interfaith work were inspired by his dedication to publishing and distributing books and articles that promoted inter-religious co-existence; the Sourcebook of the World’s Religions remains a magnum opus with its broad coverage of religions primarily from the voices of practitioners.

What could I do but carry on? Though I did not have his media resources or contact network, for four more years I tried to continue this legacy. From time to time I would have editorial help from a few other collaborators, but I did not know how to nurture those relationships and was all too used to going it alone. So I would mine the URI and NAIN mailing lists, Worldwide Faith News, Religion News Service, and other sources for whatever material I could find that was interfaith / inter-religious and (generally) represented cooperation and dialogue, rather than animosity and division.

By the summer of 2006 my responsibilities in the local Bahá'í community had increased and my time for interfaith work consequently diminished. Thus the October 2006 electronic newsletter was the last. There was one more article in 2007, and then… a regrettable mistake: I cancelled the Internet hosting and lost my backups. Like so many other web sites over the years, it was lost to the world.

Fast forward: some muse breathed into me a desire to look through the site again. And I remembered: the Wayback Machine, which is an attempt to capture the whole of the World Wide Web. Here I could find, in imperfect form and detail, old copies of the site. As a digital anthropologist, I have cleaned and restored all that I could. No new life has been breathed into it; it is merely a memorialization, a museum-ification, of what had once been an important part of my avocational life.

At last, in time for 2022, the memorial is complete: https://www.safnet.com/inn


Original Design

Second Design

The third design looked much like the current memorial design at the top of the page.

Goal: setup PowerShell Core and .NET for development in Ubuntu running in Windows subsystem for Linux (WSL). And a few other tools.

Motivation: porting PowerShell scripts for .NET development on Linux, thus enabling more programmers to develop on a certain codebase and enabling use of Linux-based containers for continuous integration and testing.

1. Install Ubuntu and PowerShell (Core) 7

Read Install PowerShell 7 On WSL and Ubuntu, which nicely covers not only PowerShell but WSL as well. Be sure to download the powershell-?.?.?-linux-x64.tar.gz file for a typical Windows machine.

TIP: the author shows use of a pre-release rather than the stable version 7 release. Head over to the GitHub repo’s release page to find the latest relevant release.

If you want to verify the SHA256 hash after download, then run the following in your Bash prompt:

openssl sha256 <file>
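If you would rather check the hash without openssl, Python’s hashlib does the same job. A quick sketch (the file name in the comment is a placeholder, matching the release naming pattern above):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of a file, reading in chunks to spare memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the checksum published on the release page, e.g.:
# print(sha256_of("powershell-x.y.z-linux-x64.tar.gz"))
```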

2. Install .NET

First, make sure you know which Ubuntu distribution version you have with this command:

lsb_release -a

Now read Install the .NET SDK or the .NET Runtime on Ubuntu. Make sure you follow both sets of instructions for your version: the instructions that have you download a .deb file, and the instructions for installing the SDK.

The examples in this article show installation of .NET 6.0. You can easily change the commands to 5.0 or 3.1 as needed.

3. Git

See Get started using Git on Windows Subsystem for Linux

4. Bonus: Install Custom Certificate

Most users will not have this situation… needing to install a custom certificate in WSL, e.g. for a corporate network.

Assuming you have the certificate file, you’ll need to know which kind of file you have. Not sure? See What are the differences between .pem, .cer and .der?

Now install it, with help from How do I install a root certificate?.

5. Try It Out

I have previously cloned Ed-Fi AdminApp into c:\source\edfi\AdminApp on my machine. Instead of re-cloning it into the Linux filesystem, I’ll use the Windows version (caution: you could run into line feed problems this way; I have my Windows installation permanently set to Linux-style LF line feeds).

cd /mnt/c/source/edfi/AdminApp
git fetch origin
git checkout origin/main
pwsh ./build.ps1

And… the build failed, but that is hardly surprising, given that no work has been done to support Linux in this particular build script. Tools are in place to start getting this fixed up.

Screenshot of terminal window

Author Neal Stephenson, in his essay “In the Beginning… Was the Command Line,” memorably compares our graphical user interfaces to Disney theme parks: “It seems as if a hell of a lot might be being glossed over, as if Disney World might be putting one over on us, and possibly getting away with all kinds of buried assumptions and muddled thinking. And this is precisely the same as what is lost in the transition from the command line interface to the GUI” (p. 52).

With new programmers whose experience has been entirely mediated through an IDE like Visual Studio or Eclipse, I have sometimes wondered whether they suffer from those “buried assumptions” and “muddled thinking” for lack of understanding of the basic command line operations that underlie the automation provided in the IDE. I still recall being that young developer: I had started with nothing but the command line, then realized that Visual Studio had crippled my ability to build and test .NET Framework solutions on my own (setting up an automated build process in CruiseControl helped cure me of that).

Screenshot of CLI-based hacking in The Matrix

Many developers eventually learn the command line options, and the level of comfort probably varies greatly depending on the language. This article is dedicated to those who are trying to understand the basics across a number of different languages. It is also dedicated to IT workers who are approaching DevOps from the Ops perspective, which is to say with less familiarity with the developer’s basic toolkit.


TIP: an IDE is simply a GUI that “integrates” the source code text editor with menus for various commands and various panels to help you see many different types of additional project information all on one screen.

The Command Line Interface

To be clear, this article is about typing commands rather than clicking on them. It is the difference between pulling up a menu in the IDE:

Screenshot of Visual Studio showing the build solution command

and knowing how to do this with just the keyboard in your favorite shell:

Screenshot of dotnet build

ASIDE: what do I mean by shell? That’s just the name of the command line interpreter in an operating system. Windows has cmd.exe (based on the venerable MS-DOS) and PowerShell. Linux and Unix systems have a proliferation of shells, most famously Bash.

Why would you want to use the shell when there is an easier way by clicking in the IDE?

  1. Perhaps counter-intuitively, it can actually feel more productive to keep the hands on the keyboard and type instead of moving back and forth between keyboard and mouse. There are probably studies that prove, and maybe even some that disprove, this assertion.
  2. Developing hands-on experience with the command line operations can lead to more control and deeper insights compared to using the IDE or GUI. Imagine the difference between learning to drive by hand and learning “to drive” by just telling your car where to go and what to do. What if the automation fails and you need to take over?
  3. Speaking of automation: some tools will help you fully automate a process just by recording your work as you click around. These might be fine. But again, I find that there is more control when you can write out the automation process at a low level. You get more precision and it is easier to diagnose problems.
  4. Occasionally we will find ourselves in a situation where a GUI is unavailable. This did not happen very often for people on Windows or MacOSX for the past several decades, but the emergence of Docker for development work has really helped bring the non-graphical world back to the foreground even for programmers working on Windows.
  5. It’s what the cool kids are doing.

On that last point: honestly, I learned Linux back in the ’90s because I thought it was cool. That might be a terrible reason. But it is honest. Thankfully I didn’t have the same impression of smoking!

So the shell is a command line interface. And when we build specialized programs that are run from the shell, we often call them “command line interfaces” (or CLI for short) as distinguished from “graphical user interfaces”. In the screenshots above, we see the dotnet CLI compared to the Visual Studio GUI.

Common Software Build Operations

Build or Compile

Programming languages can be divided into those that are interpreted and those that are compiled. Interpreted code, often called a script, is written in plain text and executed by an interpreter that translates the text into machine instructions on the fly. Compiled code must be translated from plain text into machine instructions by a compiler before it can be executed. This tends to give compiled code an advantage in performance, as the machine instructions are better optimized. But this comes at the cost of having to wait for the compilation process to complete before you can test the code, whereas interpreted code can be tested as soon as it has been written, with no intermediate step. Another difference is that compiled code requires instructions on how to combine files, usually provided through a special configuration file.

Both paradigms are good. And both have command line interfaces that control many aspects of the programming experience. For the purpose of this article, the primary difference between them is the compile or build command that is not used for interpreted languages. In one sense, the CLI for compiled code essentially exists for the specific purpose of compiling that code so that it becomes executable, whereas CLI’s for interpreted code are there for the purpose of execution. Everything else they do is just convenience.

Interpreted example: Python

Source code:

    print("hello world")

Project file: not applicable

Compile command: not applicable

Run command:

    python main.py

Compiled example: Visual C++

Source code:

    #include <iostream>
    int main() { std::cout << "Hello World!\n"; }

Project file (example.vcxproj):

    <Project DefaultTargets="Build" ToolsVersion="16.0"
      xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <ItemGroup>
        <ProjectConfiguration Include="Debug|Win32">
            <Configuration>Debug</Configuration>
            <Platform>Win32</Platform>
        </ProjectConfiguration>
        <ProjectConfiguration Include="Release|Win32">
            <Configuration>Release</Configuration>
            <Platform>Win32</Platform>
        </ProjectConfiguration>
      </ItemGroup>
      <Import Project="$(VCTargetsPath)\Microsoft.Cpp.default.props" />
      <PropertyGroup>
        <ConfigurationType>Application</ConfigurationType>
        <PlatformToolset>v142</PlatformToolset>
      </PropertyGroup>
      <Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
      <ItemGroup>
        <ClCompile Include="main.cpp" />
      </ItemGroup>
      <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
    </Project>

Compile command:

    msbuild example.vcxproj

Run command:

    .\debug\example.exe

Here are some sample build commands using various tools for compiled languages:

> # Java - simplest example
> javac myfile.java

> # Java and related languages - using Maven
> mvn compile

> # .NET Core / .NET 5+
> dotnet build

> # C and C++, old school
> make

TIPS: Every shell has a prompt indicating that it is ready for you to type input after the prompt character; > and $ are two common prompt characters. Thus when retyping the command, you would type “make” instead of typing the literal text “> make”. The # symbol is commonly used to indicate that this is a comment, and the command line interpreter will ignore that line.

Package Management

Modern software often uses purpose-built components developed by other people, a little like buying tomato sauce and pasta at the store instead of making them from scratch. To minimize the size of a software product’s source code, those components, which are also called “dependencies”, are not usually distributed with the source code. Instead the source code has some system for describing the required components, and both CLI and GUI tools have support for reading the catalog of components and downloading them over the Internet. These components are often called packages and the process of downloading them is called restoring or installing (as in “restoring the package files that were not distributed with the source code”).

Sample commands:

$ # .NET Framework 2 through 4.8
$ nuget restore

$ # .NET Core and .NET 5+
$ dotnet restore

$ # Node.js
$ npm install

$ # Python
$ pip install -r requirements.txt

The package definitions themselves are a sort of source code, and the “packaging” is usually a specialized form of zip file. A little like a compiled program, the package file needs to be assembled from constituent parts and bundled into a zip file. This process is usually called packaging. Then the package can be shared to a central database so that others can discover it; this is called publishing or pushing the package.
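To make the “specialized form of zip file” point concrete, here is a small sketch that assembles a toy package archive with Python’s standard zipfile module (the package name and file layout are invented for illustration, not any real packaging format):

```python
import io
import zipfile

# Bundle a couple of "package" files into an in-memory zip archive
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("mypackage/__init__.py", "print('hello from mypackage')\n")
    zf.writestr("mypackage/METADATA", "Name: mypackage\nVersion: 0.1.0\n")

# Real tools (pip, npm, nuget) read archives like this back out again
with zipfile.ZipFile(io.BytesIO(buffer.getvalue())) as zf:
    print(zf.namelist())  # -> ['mypackage/__init__.py', 'mypackage/METADATA']
```

A real wheel, npm tarball, or NuGet package adds format-specific metadata on top of this same basic idea: files plus a manifest, compressed into one archive.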

The following table lists out some of the common dependency management tools and a description of the file containing the dependency list for some of the most popular programming languages. Note that some languages / frameworks have multiple options.

| Language or Framework | Management Tool | File |
| --- | --- | --- |
| .NET Framework (C#, F#, VB) 1 through 4.8 | NuGet | packages.config * |
| .NET Core / .NET 5+ (C#, F#, VB) | NuGet | *.csproj |
| Java, Groovy, Kotlin, Scala | Maven | pom.xml |
| Java, Groovy, Kotlin, Scala | Gradle | build.gradle or build.gradle.kts |
| Python | pip | requirements.txt * |
| Python | Poetry | pyproject.toml |
| Node.js (JavaScript, TypeScript) | npm | package.json |
| Node.js (JavaScript, TypeScript) | Yarn | package.json |
| Ruby | RubyGems | *.gemspec |

… and I’ve left out more for Ruby, PHP, and other languages for brevity.

* In most of these cases, the dependency list is integrated directly into the main project file; the rows marked with an asterisk use a standalone file instead.

Testing and Other Concerns

Most programming languages have packages that allow the developer to build automated tests directly into the source code. Normally when you run the software you don’t want to run the tests. So execution of the tests is another command that can be run through a CLI or an IDE.

Software quality is not just about bugs, which are (we hope) detected by automated tests; there are also automated ways to evaluate coding style and quality. These processes are yet more bits of software, and they typically have a CLI. They include “linters”, “type checkers”, and more.

Many of these tools are standalone executable CLI’s. Here are some example commands for various tasks and languages:

> # Run NUnit style automated tests in .NET Framework code
> nunit3-console.exe someProjectName.dll

> # In a .NET Core / .NET 5+ project, you can run tests with
> dotnet test

> # Python has a concept called a "virtual environment". If you are
> # "in the virtual environment" you can run:
> pytest

> # Or if you use the Poetry tool, it will prepare the virtual environment
> # on your behalf. Longer command, but it does simplify things over all.
> poetry run pytest

> # And here's a Python lint program that checks the quality of the code:
> poetry run flake8

Project Files

Earlier I mentioned project files. These are always used with compiled code, and with some interpreted languages as well, in order to help manage and create packages. These project files provide information including:

  • Name of the software
  • Version number
  • The project’s dependencies
  • Compiler options
  • Configuration for how to bundle the application into a package

Many of these project files allow you to build additional commands, sometimes very sophisticated ones. A simple example is the set of scripts in a Node.js package.json file:

{
  "scripts": {
    "prebuild": "rimraf dist",
    "build": "nest build && copyfiles src/**/*.graphql dist",
    "format": "prettier --write \"src/**/*.ts\" \"test/**/*.ts\"",
    "start": "yarn build && nest start"
  }
}

Typing npm run start (or yarn start, if you have Yarn installed) will cause two commands to run: yarn build and nest start. The first of these invokes the build script, so yarn start is implicitly running nest build && copyfiles src/**/*.graphql dist before running nest start (npm and Yarn also run the prebuild script automatically before build). Each of these is a command line operation, and the scripts here simplify the process of using them. Yes, we have a bit of abstraction, but it is all right there in front of us in plain text and therefore relatively easy to dig in and understand the details.

Project files can become rather complex, and some project files are rarely edited by hand. This is particularly true of the msbuild / dotnet project files for Microsoft’s .NET Framework and .NET Core. For the purpose of this article, it is enough to know that project files exist and sometimes they include scripts or “targets” that can be run from the command line.

Command Line Arguments and Options

We’ve already seen arguments in several examples above. Here are some more:

| Command | Number of Arguments | Argument List |
| --- | --- | --- |
| tsc index.ts | 1 | “index.ts” |
| npm run start | 2 | “run”, “start” |
| dotnet nuget push -k abcd12345 | 4 | “nuget”, “push”, “-k”, “abcd12345” |

The last example introduces something new: command line options. An argument that begins with -, --, or sometimes / signals that an “optional argument” is being provided. Note that the first argument is usually a verb, like “start”, “run”, or “compile”, and we can refer to that verb as the command. That last example was also specialized in that the word “nuget” appears before the verb “push”; this is an interesting hybrid command where the dotnet CLI tool is being used to run nuget commands.

In this case, the -k could also be written in a longer form as --api-key. Having both a short and a long form of optional argument is very common. The string that follows -k, “abcd12345”, is the option’s value.
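You can see the short/long option pattern from the tool builder’s side with Python’s argparse module; the tool name and key below are invented for illustration:

```python
import argparse

# A toy CLI with a positional "verb" argument and one option that has
# both a short (-k) and a long (--api-key) form
parser = argparse.ArgumentParser(prog="mytool")
parser.add_argument("verb", help="the command, e.g. 'push'")
parser.add_argument("-k", "--api-key", help="credential for the package server")

args = parser.parse_args(["push", "-k", "abcd12345"])
print(args.verb, args.api_key)  # -> push abcd12345
```

Passing --api-key abcd12345 instead of -k abcd12345 produces exactly the same result; the two spellings are interchangeable names for one option.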

Some CLIs have a bewildering array of commands and options. This is where you start to see the value of the GUI / IDE: at some point it is simpler to click a few times than to remember how to type out a long command. Maven (> mvn), for example, has so many commands that I can’t find a single list containing them all. The DotNet Core tool (> dotnet) also has many commands, each with its own options, but these at least are centrally documented.

To find more documentation, you can usually do a web search like “mvn cli”. Or, most tools have help available through a help command or an option:

$ sometool help
$ sometool -h
$ sometool --help
$ sometool /h

Just try those with the tool you are trying to learn about and see what happens.

Wrap-Up

Armed with this surface-level knowledge, a new programmer or IT operations staff person will hopefully have enough background to understand the basic operational practices for developing high-quality software. And in understanding those basics from the perspective of the command line, the tasks and challenges of continuous integration will, perhaps, feel a bit less daunting. Or at the least, you’ll know a little more about what you don’t know, which is a good step toward learning.

Addendum

The first image in this article is a screengrab from the film The Matrix. That came out when I was actively working as a Linux system administrator, and I was thrilled to recognize that Trinity was exploiting a real-world security vulnerability that I had, a few months before, fixed by upgrading the operating system kernel on several servers.

“Infrastructure as Code”, or IaC if you prefer TLAs, is the practice of configuring infrastructure components in text files instead of clicking around in a user interface. Last year I wrote a few detailed articles on IaC with TeamCity (1, 2, 3). Today I want to take a step back and briefly address the topic more broadly, particularly with respect to continuous integration (CI) and delivery (CD): the process of automating software compilation, testing, quality checks, packaging, deployment, and more.

Continuous Integration and Delivery Tools

To level set, this article is about improving the overall developer and organizational experience of building (integration) and deploying (delivery) software on platforms such as:

  • Your local workstation
  • Ansible
  • Azure DevOps
  • CircleCI
  • CruiseControl
  • GitHub Actions
  • GitLab
  • GoCD
  • Jenkins
  • Octopus Deploy
  • TeamCity
  • TravisCI

Personally, I have some experience with perhaps half of these. While I believe the techniques discussed are widely applicable, I do not know the details in all cases. Please look carefully at your toolkit to understand its advantages and limitations.

Philosophy

Why?

Many tools provide useful GUIs that allow you to, more or less quickly, set up a CI/CD process by pointing, clicking, and typing in a few small attributes such as a project name. Until you get used to it, writing code instead of clicking might actually take longer. So why do it?

  • Repeatability - what happens when you need to transfer the instructions to another computer/server? Re-apply a text file vs. click around all over again.
  • Source control:
    • Keep build configuration along with the application source code.
    • Easily revert to a prior state.
    • Sharing is caring.
  • Peer review - much easier to review a text file (especially changes!) than look around in a GUI.
  • Run locally - might be nice to run the automated process locally before committing source code changes.
  • Testing - or the flip side, might be nice to test the automation process locally before putting it into the server.
  • Documentation - treat the code as documentation.

Programming Style

The code in an IaC project might not be executable (imperative); it may instead be declarative configuration that describes the desired state and lets the tool figure out how to achieve it. Examples of each:

  • Imperative: Bash, Python, PowerShell, Kotlin (a bit of a hybrid), etc.
  • Declarative: JSON, YAML, XML, INI, HCL, proprietary formats, etc.

Which style, and which type of file (Bash vs. PowerShell, JSON vs. XML), will largely depend on the application you are interacting with and your general objectives. Often you won’t get to choose between them. Many tasks can rely on declarative configuration, especially YAML, but that is not well suited for tasks like setting up a remote service through API calls, which might require scripting in an imperative language.
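To make the distinction concrete, here is a hypothetical sketch: the same three build steps expressed once as declarative data (what should happen) and once as imperative code (how it happens). The step names are invented for illustration.

```python
# Declarative: a data structure describing the desired steps; some tool
# elsewhere is responsible for knowing how to execute them.
pipeline_config = {
    "steps": ["restore", "compile", "test"]
}

# Imperative: the script itself spells out how each step runs, and in
# what order.
def run_pipeline(config):
    log = []
    handlers = {
        "restore": lambda: log.append("restoring packages"),
        "compile": lambda: log.append("compiling solution"),
        "test": lambda: log.append("running tests"),
    }
    for step in config["steps"]:
        handlers[step]()
    return log

print(run_pipeline(pipeline_config))
# ['restoring packages', 'compiling solution', 'running tests']
```

A declarative tool is, in effect, shipping the `run_pipeline` half for you; you only supply the data.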

Universalizing

Every platform has its own approach. Following the simplest path, you can often get up-and-running with a build configuration in the tool very quickly — but that effort will not help you if you need to change tools or if you want to run the same commands on your development workstation.

How do you avoid vendor lock-in? “Universalize” — create a process that can be transported to any tool with ease. This likely means writing imperative scripts.

Image of NUnit configuration with caption "how do i run this locally?"

The screenshot above is from TeamCity, showing a build runner step for running NUnit on a project. A developer who does not know how to run NUnit at the command line will not be able to learn from this example. Furthermore, the configuration process in another tool may look completely different. Instead of using the “NUnit runner” in TeamCity, we can write a script and put it in the source code repository. Since NUnit is a .NET unit testing tool, and most .NET development is done on Windows systems, PowerShell is often a good choice for this sort of script. Configuring TeamCity (or Jenkins, etc.) to run that script should be trivial and easy to maintain.
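As a sketch of that idea (in Python rather than PowerShell, with an invented project path), a small test script checked into the repository might look like this. TeamCity, Jenkins, or a developer’s shell would all invoke it the same way:

```python
import subprocess

def build_test_command(project_path: str) -> list:
    # `dotnet test` discovers and runs NUnit tests through the NUnit test
    # adapter, so the same command works locally and on any CI server.
    return ["dotnet", "test", project_path, "--no-restore", "--nologo"]

def run_tests(project_path: str) -> int:
    # Invoked by the CI tool or a developer; returns the dotnet exit code
    # so the build can fail when tests fail.
    result = subprocess.run(build_test_command(project_path))
    return result.returncode

# A CI step (or a developer) would simply run this script, e.g.:
#   python eng/run-tests.py
print(build_test_command("./src/MyProject.Tests"))
```

The CI configuration then shrinks to a single "run this script" step, which is trivial to port between tools.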

Examples of IaC Tools and Processes

While this article is about continuous integration and delivery, it is worth noting the many different types of tools that support an IaC mindset. Here is a partial list of tools, with the configuration language in parenthesis.

  • IIS: web.config, applicationHost.config (XML)
  • Containers: Dockerfiles, Docker Compose, Kubernetes (YAML, often calling imperative scripts)
  • VM Images: Packer (JSON or HCL)
  • Configuration: Ansible (YAML), AWS CloudFormation (JSON), Terraform (JSON or HCL), Puppet, Salt, Chef, and so many more.
  • Network Settings: firewalls, port configurations, proxy servers, etc. (wide variety of tools and styles)

Generally, these tools use declarative configuration scripts that are composed by hand rather than through a user interface, although there are notable exceptions (such as IIS’s inetmgr GUI).

At some point, vendor lock-in does happen: there are no tools (that I know of) for defining a job in a universal language that applies to all of the relevant platforms. Terraform might come the closest. There are also some tools that can define continuous integration processes generically and output scripts for configuring several different platforms. For better or worse, I tend to be leery of getting too far away from the application’s native configuration code, for fear that I’ll miss out on important nuances.

Real World Examples of Continuous Integration Scripts

PowerShell and .NET

Command line examples using Ed-Fi ODS AdminApp’s build.ps1 script:

$ ./build.ps1 build -BuildConfiguration release -Version "2.0.0" -BuildCounter 45
$ ./build.ps1 unittest
$ ./build.ps1 integrationtest
$ ./build.ps1 package -Version "2.0.0" -BuildCounter 45
$ ./build.ps1 push -NuGetApiKey $env:nuget_key

Any of those commands can easily be run in any build automation tool. What are these commands doing? The first command is for the build operation, and it calls function Invoke-Build:

function Invoke-Build {
    Write-Host "Building Version $Version" -ForegroundColor Cyan

    Invoke-Step { InitializeNuGet }
    Invoke-Step { Clean }
    Invoke-Step { Restore }
    Invoke-Step { AssemblyInfo }
    Invoke-Step { Compile }
}

Side-note: Invoke-Step, seen here, and Invoke-Execute, seen below, are custom functions that (a) create a domain-specific language for writing a build script, and (b) set up command timing and logging to the console for each operation.
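Those helpers aren’t shown here, but the idea is easy to reproduce. A rough Python analogue of Invoke-Step might time each operation and echo its name to the console (this is my own sketch, not the Ed-Fi implementation):

```python
import time

def invoke_step(step):
    """Run a build step, logging its name and elapsed time: a rough
    analogue of the PowerShell Invoke-Step helper."""
    start = time.monotonic()
    print(f"[step] {step.__name__} starting")
    try:
        return step()
    finally:
        elapsed = time.monotonic() - start
        print(f"[step] {step.__name__} finished in {elapsed:.2f}s")

def clean():
    # Stand-in for a real build operation.
    return "cleaned"

invoke_step(clean)
```

Wrapping every step this way gives consistent console output on the CI server for free.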

This function in turn calls a series of other functions. If you are a .NET developer, you’ll probably recognize these steps quite readily. Let’s peek into the last function call:

function Compile {
    Invoke-Execute {
        dotnet --info
        dotnet build $solutionRoot -c $Configuration --nologo --no-restore

        $outputPath = "$solutionRoot/EdFi.Ods.AdminApp.Web/publish"
        $project = "$solutionRoot/EdFi.Ods.AdminApp.Web/"
        dotnet publish $project -c $Configuration /p:EnvironmentName=Production -o $outputPath --no-build --nologo
    }
}

Now we see the key operations for compilation. In this specific case, the development team actually wanted to run two commands, and even before running them they wanted to capture log output showing the version of dotnet in use. Any developer can easily run the build script to execute the same sequence of actions, without having to remember the detailed command options. And any tool should be able to run a PowerShell script with ease.

Python

Command line examples using LMS-Toolkit’s build.py:

$ python ./build.py install schoology-extractor
$ python ./build.py test schoology-extractor
$ python ./build.py coverage schoology-extractor
$ python ./build.py coverage:html schoology-extractor

As the project in question (LMS Toolkit) is a set of Python scripts, and because we wanted a scripting language that is well supported on both Windows and Linux, we decided to use Python instead of a shell script.

There is a helper function for instructing the Python interpreter to run a shell command:

import os
import subprocess
import sys
from typing import List

def _run_command(command: List[str], exit_immediately: bool = True):

    print('\033[95m' + " ".join(command) + '\033[0m')

    # Some system configurations on Windows-based CI servers have trouble
    # finding poetry, others do not. Explicitly calling "cmd /c" seems to help,
    # though unsure why.

    if (os.name == "nt"):
        # All versions of Windows are "nt"
        command = ["cmd", "/c", *command]

    script_dir = os.path.dirname(os.path.realpath(sys.argv[0]))

    package_name = sys.argv[2]

    package_dir = os.path.join(script_dir, "..", "src", package_name)
    if not os.path.exists(package_dir):
        package_dir = os.path.join(script_dir, "..", "utils", package_name)

        if not os.path.exists(package_dir):
            raise RuntimeError(f"Cannot find package {package_name}")

    result = subprocess.run(command, cwd=package_dir)

    if exit_immediately:
        exit(result.returncode)

    if result.returncode != 0:
        exit(result.returncode)

And then we have the individual build operations, such as running unit tests with a code coverage report:

def _run_coverage():
    _run_command([
        "poetry",
        "run",
        "coverage",
        "run",
        "-m",
        "pytest",
        "tests",
    ], exit_immediately=False)
    _run_command([
        "poetry",
        "run",
        "coverage",
        "report",
    ], exit_immediately=False)

Reading this is a little strange at first, because the Python subprocess.run function is expecting an array of commands rather than a single string. Hence the command poetry run coverage report becomes the array ["poetry", "run", "coverage", "report"]. But here’s the thing: once you write the script, anyone can run it repeatedly, on any system that has the necessary tools installed, without having to learn and remember the detailed syntax of the commands that are being executed under the hood.
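If maintaining those arrays by hand feels clumsy, Python’s standard shlex module can split a command string the same way a POSIX shell would (Windows cmd quoting rules differ slightly):

```python
import shlex

# shlex.split applies shell-style tokenization, so quoted arguments
# containing spaces stay together as a single list element.
print(shlex.split("poetry run coverage report"))
# ['poetry', 'run', 'coverage', 'report']

print(shlex.split('git commit -m "initial commit"'))
# ['git', 'commit', '-m', 'initial commit']
```

Either way, subprocess.run receives the argument list it expects.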

TypeScript

The JavaScript / TypeScript world provides npm, which is a great facility for running build operations.

Using Ed-Fi Project Buzz, you can run commands like the following:

$ npm install
$ npm run build
$ npm run test
$ npm run test:ci

The npm run XYZ commands are invoking scripts defined in the package.json file:

{
    "build": "nest build && copyfiles src/**/*.graphql dist",
    "test": "jest",
    "test:cov": "jest --coverage",
    "test:debug": "node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand",
    "test:ci": "SET CI=true && SET TEAMCITY_VERSION=1 && yarn test --testResultsProcessor=jest-teamcity-reporter --reporters=jest-junit"
}

Look at that debug command! Imagine having to type that in manually instead of just running npm run test:debug. Yuck!

Real World Examples of Tool Automation Scripts

That is, examples of scripts for automating the software that will run the integration and/or delivery process.

Octopus Deploy Operations

I can distinctly recall seeing advertisements for Octopus Deploy that castigated the use of YAML. While they have long supported JSON import and export of configurations, those JSON files were not very portable: they could only interoperate with the same Octopus version that created them.

Octopus has been coming around to treating the deployment process as code; it appears they’re embracing the philosophy extolled in this article. The referenced article doesn’t give examples of how to work with Octopus itself; instead it just tells you to use the .NET SDK, which is what we’ve done in the example below. Also of note: as of May 2021, the roadmap shows that Git integration is under development. This feature would, if I understand correctly, enable changes made in the Octopus Deploy UI to be saved directly into Git source control. That’s a great step! I do not see any indication of what language will be used, or whether changes can be scripted and then picked up by Octopus Deploy instead of vice versa.

In the Ed-Fi ODS/API application there’s a PowerShell script that imperatively creates channels and releases, and deploys releases, on Octopus Deploy — all without having to click around in the user interface. The following example imports the module, runs a command to install the Octopus command line client (typically a one-time operation), and then creates a new deployment channel:

$ Import-Module octopus-deploy-management.psm1
$ Install-OctopusClient
$ $parms = @{
     ServerBaseUrl="https://..."
     ApiKey="API-............"
     Timeout=601
     Project="Ed-Fi ODS Shared Instance (SQL Server)"
     Channel="testing"
  }
$ Invoke-OctoCreateChannel @parms

And here’s the body of the Invoke-OctoCreateChannel function, which is running the .NET SDK command line tool:

$params = @(
    "--project", $Project,
    "--channel", $Channel,
    "--update-existing",
    "--server", $ServerBaseUrl,
    "--apiKey", $ApiKey,
    "--timeout", $Timeout
)

Write-Host -ForegroundColor Magenta "& dotnet-octo create-channel $params"
&$ToolsPath/dotnet-octo create-channel $params

TeamCity

TeamCity build configurations can be automated with either XML or Kotlin. The latter is my preferred approach, and I’ve talked about it in three prior blog posts:

  1. Getting Started with Infrastructure as Code in TeamCity
  2. Splitting TeamCity Kotlin Into Multiple Files
  3. Template Inheritance with TeamCity Kotlin

GitHub Actions

GitHub Actions is intrinsically YAML-driven. The following example from the Ed-Fi LMS Toolkit demonstrates use of the Python script described above. For brevity’s sake I’ve removed steps that prepare the container by setting up the right version of Python and performing some other optimization activities.

# SPDX-License-Identifier: Apache-2.0
# Licensed to the Ed-Fi Alliance under one or more agreements.
# The Ed-Fi Alliance licenses this file to you under the Apache License, Version 2.0.
# See the LICENSE and NOTICES files in the project root for more information.

name: Canvas Extractor - Publish
on:
  workflow_dispatch

jobs:
  publish-canvas-extractor:
    name: Run unit tests and publish
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout code
        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f

        ...

      - name: Install Python Dependencies
        run: python ./eng/build.py install canvas-extractor

        ...

      - name: Run Tests
        run: python ./eng/build.py test canvas-extractor

      - name: Publish
        env:
          TWINE_USERNAME: $
          TWINE_PASSWORD: $
        run: python ./eng/build.py publish canvas-extractor

Conclusion

Taking the approach of Infrastructure-as-Code is all about shifting from a point-and-click mindset to a programming mindset, with benefits such as source control, peer review, and repeatability. Most continuous integration and delivery tools will support this paradigm. Many tools offer specialized commands that hide some of the complexity of running a process. While these can get you up-and-running quickly, over-utilization of such commands can lead to a tightly-coupled system, making it painful to move to another system. Scripted execution of integration and delivery steps (“universalizing”) can lead to more loosely-coupled systems while also enabling developers to run the same commands locally as would run on the CI/CD server.

References

Useful references for learning more about Infrastructure-as-code:

More generally, the use of IaC represents a “DevOps mindset”: developers thinking more about operations, and operations acting more like developers. To the benefit of both. Good DevOps references include:

License

All code samples shown here are from projects managed by the Ed-Fi Alliance, and are used under the terms of the Apache License, version 2.0.

‘Ed-Fi is open’: thus the Ed-Fi Alliance announced its transition from a proprietary license to the open source Apache License, version 2.0, in April, 2020 (FAQ). Moving to an open source license is a clear commitment to transparency: anyone can see the source code, and the user community knows that their right to use that code can never be revoked. But this change is about more than just words: as the list of contributions below demonstrates, embracing open source is also about participation.

In this second year of #edfiopensource we are asking ourselves – and the community – what comes next? What can we do, together, to unlock further innovation and deliver more tools that make use of student data in new, practical, and transformative ways?

Continue reading on www.ed-fi.org

Elephant and dog

It looks like a beautiful morning in Austin, Texas, from the comfort of my feeder-facing position on the couch. Later in the afternoon I will get out and enjoy it on my afternoon walk with All Things Considered. As I write these lines a bully has been at work: a Yellow-Rumped Warbler (Myrtle) has been chasing the other birds away. Thankfully this greedy marauder was absent for most of the morning, as I read portions of Dr. J. Drew Lanham’s The Home Place, Memoirs of a Colored Man’s Love Affair with Nature.

Lanham, who also penned the harrowing-yet-humorous 9 Rules for the Black Birdwatcher, shares a compelling and beautifully written story of family and place — at least, those are the key themes of the first third of the book that I’ve read thus far. Appropriate to this day of reflection and remembrance for one of our great American heroes, Dr. Martin Luther King, Jr., it is a story of the forces and people who shaped this scientist, a Black man from the South who learned to love nature from first-hand experiences of playing, watching, listening, chopping, and hoeing on the edge of the South Carolina piedmont.

Understanding that one man’s experience, views, and insights can never encapsulate those of an entire amorphous people, it is nevertheless critical that we all spend time getting to better know and understand the forces that shape our myriad cultures and the people who emerge from them. As we become more familiar with “others,” “they” become “we” and “we” become self-aware. Becoming self-aware, we recognize the truth of Dr. King’s famous saying:

“We are caught in an inescapable network of mutuality, tied in a single garment of destiny. Whatever affects one directly, affects all indirectly.”

Being aware of our mutuality, believing in it deeply, we can make better choices about how to live well with and for everyone on this planet, both those alive today and those yet to be born.


A passage of beautiful prose from pages 1–2 of The Home Place, to give you a taste of what is in store. After describing his ethno-racial heritage — primarily African American with an admixture of European, American Indian, Asian, “and Neanderthal tossed in” — he remarks,

“But that’s only a part of the whole: There is also the red of miry clay, plowed up and planted to pass a legacy forward. There is the brown of spring floods rushing over a Savannah River shoal. There is the gold of ripening tobacco drying in the heat of summer’s last breath. There are endless rows of cotton’s cloudy white. My plumage is a kaleidoscopic rainbow of an eternal hope and the deepest blue of despair and darkness. All of these hues are me; I am, in the deepest sense, colored.”


Birds seen at the “backyard” feeder this morning while reading. Photos are a few weeks old but all of these species were observed today. © Tania Homayoun, some rights Creative Commons:
by-nc-nd

Black-Crested Titmouse

Carolina Wren

Hermit Thrush

Orange-crowned Warbler

Ruby-crowned Kinglet

Yellow-rumped Warbler


Also seen: Northern Cardinal, American Robin, Bewick’s Wren.