Stephen A. Fuqua (saf)

a Bahá'í, software engineer, and nature lover in Austin, Texas, USA

Goal: set up PowerShell Core and .NET for development in Ubuntu running in Windows Subsystem for Linux (WSL). And a few other tools.

Motivation: porting PowerShell scripts for .NET development on Linux, thus enabling more programmers to develop on a certain codebase and enabling use of Linux-based containers for continuous integration and testing.

1. Install Ubuntu and PowerShell (Core) 7

Read Install PowerShell 7 On WSL and Ubuntu, which nicely covers not only PowerShell but WSL as well. Be sure to download the powershell-?.?.?-linux-x64.tar.gz file for a typical Windows machine.

TIP: the author shows installation of a pre-release rather than the final version 7 release. Head over to the GitHub repo’s release page to find the latest relevant release.

If you want to verify the SHA256 hash after download, then run the following in your Bash prompt:

openssl sha256 <file>

2. Install .NET

First, make sure you know which Ubuntu distribution version you have with this command:

lsb_release -a

Now read Install the .NET SDK or the .NET Runtime on Ubuntu. Make sure you follow both sets of instructions for your version: the instructions that have you download a .deb file, and the instructions for installing the SDK.

The examples in this article show installation of .NET 6.0. You can easily change the commands to 5.0 or 3.1 as needed.

3. Git

See Get started using Git on Windows Subsystem for Linux

4. Bonus: Install Custom Certificate

Most users will not have this situation… needing to install a custom certificate in WSL, e.g. for a corporate network.

Assuming you have the certificate file, you’ll need to know which kind of file you have. Not sure? See What are the differences between .pem, .cer and .der?

Now install it, with help from How do I install a root certificate?.

5. Try It Out

I have previously cloned Ed-Fi AdminApp into c:\source\edfi\AdminApp on my machine. Instead of re-cloning it into the Linux filesystem, I’ll use the Windows version (caution: you could run into line feed problems this way; I have my Windows installation permanently set to Linux-style LF line feeds).

cd /mnt/c/source/edfi/AdminApp
git fetch origin
git checkout origin/main
pwsh ./build.ps1

And… the build failed, but that is hardly surprising, given that no work has been done to support Linux in this particular build script. The tools are in place to start getting it fixed up.

Screenshot of terminal window

Author Neal Stephenson, in his essay “In the Beginning… Was the Command Line,” memorably compares our graphical user interfaces to Disney theme parks: “It seems as if a hell of a lot might be being glossed over, as if Disney World might be putting one over on us, and possibly getting away with all kinds of buried assumptions and muddled thinking. And this is precisely the same as what is lost in the transition from the command line interface to the GUI.” (p. 52)

With new programmers whose experience has been entirely mediated through an IDE like Visual Studio or Eclipse, I have sometimes wondered whether they absorb those “buried assumptions” and suffer from that “muddled thinking” because they never learned the basic command line operations that underlie the automation provided in the IDE. I still recall when I was that young developer: I had started with nothing but the command line, then realized that Visual Studio had crippled my ability to build and test .NET Framework solutions on my own (setting up an automated build process in CruiseControl helped cure me of that).

Screenshot of CLI-based hacking in The Matrix

Many developers eventually learn the command line options, and the level of comfort probably varies greatly depending on the language. This article is dedicated to those who are trying to understand the basics across a number of different languages. It is also dedicated to IT workers who are approaching DevOps from the Ops perspective, which is to say with less familiarity with the developer’s basic toolkit.

TIP: an IDE is simply a GUI that “integrates” the source code text editor with menus for various commands and various panels to help you see many different types of additional project information all on one screen.

The Command Line Interface

To be clear, this article is about typing commands rather than clicking on them. It is the difference between pulling up a menu in the IDE:

Screenshot of Visual Studio showing the build solution command

and knowing how to do this with just the keyboard in your favorite shell:

Screenshot of dotnet build

ASIDE: what do I mean by shell? That’s just the name of the command line interpreter in an operating system. Windows has cmd.exe (based on the venerable MS-DOS) and PowerShell. Linux and Unix systems have a proliferation of shells, most famously Bash.

Why would you want to use the shell when there is an easier way by clicking in the IDE?

  1. Perhaps counter-intuitively, it can actually feel more productive to keep the hands on the keyboard and type instead of moving back and forth between keyboard and mouse. There are probably studies that prove, and maybe even some that disprove, this assertion.
  2. Developing hands-on experience with the command line operations can lead to more control and deeper insights compared to using the IDE or GUI. Imagine the difference between learning to drive by hand and learning “to drive” by just telling your car where to go and what to do. What if the automation fails and you need to take over?
  3. Speaking of automation: some tools will help you fully automate a process just by recording your work as you click around. These might be fine. But again, I find that there is more control when you can write out the automation process at a low level. You get more precision and it is easier to diagnose problems.
  4. Occasionally we will find ourselves in a situation where a GUI is unavailable. This did not happen very often for people on Windows or macOS for the past several decades, but the emergence of Docker for development work has really helped bring the non-graphical world back to the foreground, even for programmers working on Windows.
  5. It’s what the cool kids are doing.

On that last point: honestly, I learned Linux back in the ’90s because I thought it was cool. That might be a terrible reason. But it is honest. Thankfully I didn’t have the same impression of smoking!

So the shell is a command line interface. And when we build specialized programs that are run from the shell, we often call them “command line interfaces” (or CLI for short) as distinguished from “graphical user interfaces”. In the screenshots above, we see the dotnet CLI compared to the Visual Studio GUI.

Common Software Build Operations

Build or Compile

Programming languages can be divided into those that are interpreted and those that are compiled. Interpreted code, often called a script, is written in plain text and executed by an interpreter that translates the text into machine instructions on the fly. Compiled code must be translated from plain text into machine instructions by a compiler before it can be executed. This tends to give compiled code an advantage in performance, as the machine instructions are better optimized. But it comes at the cost of having to wait for the compilation process to complete before you can test the code, whereas interpreted code can be tested as soon as it has been written, with no intermediate step. Another difference is that compiled code requires instructions on how to combine files, usually provided through a special configuration file.

Both paradigms are good. And both have command line interfaces that control many aspects of the programming experience. For the purpose of this article, the primary difference between them is the compile or build command that is not used for interpreted languages. In one sense, the CLI for compiled code essentially exists for the specific purpose of compiling that code so that it becomes executable, whereas CLI’s for interpreted code are there for the purpose of execution. Everything else they do is just convenience.

Interpreted example: Python

  • Source code:

    print("hello world")

  • Project file: not applicable
  • Compile command: not applicable
  • Run command: python

Compiled example: Visual C++

  • Source code:

    #include <iostream>
    int main() { std::cout << "Hello World!\n"; }

  • Project file (abbreviated example.vcxproj):

    <Project DefaultTargets="Build" ToolsVersion="16.0">
      <ItemGroup>
        <ProjectConfiguration Include="Debug|Win32" />
        <ProjectConfiguration Include="Release|Win32" />
      </ItemGroup>
      <Import Project="$(VCTargetsPath)\Microsoft.Cpp.default.props" />
      <Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
      <ItemGroup>
        <Compile Include="main.cpp" />
      </ItemGroup>
      <Import Project="$(VCTargetsPath)\Microsoft.Cpp.Targets" />
    </Project>

  • Compile command: msbuild example.vcxproj
  • Run command: example.exe

Here are some sample build commands using various tools for compiled languages:

> # Java - simplest example
> javac

> # Java and related languages - using Maven
> mvn compile

> # DotNet Core / .NET 5+
> dotnet build

> # C and C++, old school
> make

TIPS: Every shell has a prompt character indicating that it is ready for you to type input; > and $ are two common prompt characters. Thus when retyping the command, you would type “make” rather than the literal text “> make”. The # symbol commonly indicates a comment, and the command line interpreter will ignore that line.

Package Management

Modern software often uses purpose-built components developed by other people, a little like buying tomato sauce and pasta at the store instead of making them from scratch. To minimize the size of a software product’s source code, those components, which are also called “dependencies”, are not usually distributed with the source code. Instead the source code has some system for describing the required components, and both CLI and GUI tools have support for reading the catalog of components and downloading them over the Internet. These components are often called packages and the process of downloading them is called restoring or installing (as in “restoring the package files that were not distributed with the source code”).

Sample commands:

$ # .NET Framework 2 through 4.8
$ nuget restore

$ # DotNet Core and .NET Framework 5+
$ dotnet restore

$ # Node.js
$ npm install

$ # Python
$ pip install -r requirements.txt

The package definitions themselves are a sort of source code, and the “packaging” is usually a specialized form of zip file. A little like a compiled program, the package file needs to be assembled from constituent parts and bundled into a zip file. This process is usually called packaging. Then the package can be shared to a central database so that others can discover it; this is called publishing or pushing the package.

The following table lists out some of the common dependency management tools, and the file containing the dependency list, for some of the most popular programming languages. Note that some languages / frameworks have multiple options.

| Language or Framework | Management Tool | File |
| --- | --- | --- |
| .NET Framework 1 through 4.8 (C#, F#, VB) | NuGet | packages.config * |
| DotNet Core / .NET 5+ (C#, F#, VB) | NuGet | *.csproj |
| Java, Groovy, Kotlin, Scala | Maven | pom.xml |
| Java, Groovy, Kotlin, Scala | Gradle | build.gradle, build.gradle.kts, etc. |
| Python | pip | requirements.txt * |
| Python | Poetry | pyproject.toml |
| Node.js (JavaScript, TypeScript) | NPM | package.json |
| Node.js (JavaScript, TypeScript) | Yarn | package.json |
| Ruby | RubyGems | *.gemspec |

… and I’ve left out more for Ruby, PHP, and other languages for brevity.

* In most of these cases, the dependency list is integrated directly into the main project file; an asterisk marks the cases where it lives in a separate, dedicated file.

Testing and Other Concerns

Most programming languages have packages that allow the developer to build automated tests directly into the source code. Normally when you run the software you don’t want to run the tests. So execution of the tests is another command that can be run through a CLI or an IDE.

Software is prone not just to bugs, which are (we hope) detected by automated tests; there are also automated ways to evaluate coding style and quality. These processes are yet more bits of software, and they typically have a CLI. They include “linters”, “type checkers”, and more.

Many of these tools are standalone executable CLI’s. Here are some example commands for various tasks and languages:

> # Run NUnit style automated tests in .NET Framework code
> nunit3-console.exe someProjectName.dll

> # In a DotNet Core / .NET 5+ project, you can run tests with
> dotnet test

> # Python has a concept called a "virtual environment". If you are
> # "in the virtual environment" you can run:
> pytest

> # Or if you use the Poetry tool, it will prepare the virtual environment
> # on your behalf. A longer command, but it simplifies things overall.
> poetry run pytest

> # And here's a Python lint program that checks the quality of the code:
> poetry run flake8
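For readers who have not seen one, here is the shape of the thing those test commands discover and run: a function in the source code whose name marks it as a test (a minimal, made-up example in pytest style):

```python
def add(a: int, b: int) -> int:
    """The production code under test."""
    return a + b

# Test runners discover tests by convention; pytest looks for
# functions whose names start with "test_".
def test_add():
    assert add(2, 3) == 5
```

Running `pytest` at the command line would find and execute `test_add` automatically.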

Project Files

Earlier I mentioned project files. These are always used with compiled code, and with some interpreted languages as well, to help manage and create packages. These project files provide information including:

  • Name of the software
  • Version number
  • The project’s dependencies
  • Compiler options
  • Configuration for how to bundle the application into a package

Many of these project files allow you to build additional commands, sometimes very sophisticated ones. A simple example is the set of scripts in a Node.js package.json file:

  "scripts": {
    "prebuild": "rimraf dist",
    "build": "nest build && copyfiles src/**/*.graphql dist",
    "format": "prettier --write \"src/**/*.ts\" \"test/**/*.ts\"",
    "start": "yarn build && nest start"
  }

Typing either npm run start or yarn start (if you have yarn installed) will cause two commands to run: yarn build and nest start. The first of these refers back to the build script, so yarn start is implicitly running nest build && copyfiles src/**/*.graphql dist before running nest start. Each of these is a command line operation, and the scripts here simplify the process of using them. Yes, we have a bit of abstraction, but it is all right there in front of us in plain text, and therefore relatively easy to dig in and understand the details.

Project files can become rather complex, and some project files are rarely edited by hand. This is particularly true of the msbuild / dotnet project files for Microsoft’s .NET Framework / DotNet Core. For the purpose of this article, it is enough to know that project files exist and sometimes they include scripts or “targets” that can be run from the command line.

Command Line Arguments and Options

We’ve already seen arguments in several examples above. Here are some more:

| Command | Number of Arguments | Argument List |
| --- | --- | --- |
| tsc index.ts | 1 | “index.ts” |
| npm run start | 2 | “run”, “start” |
| dotnet nuget push -k abcd12345 | 4 | “nuget”, “push”, “-k”, “abcd12345” |

The last example introduces something new: command line options. An argument that begins with -, --, or sometimes / signals that an “optional argument” is being provided. Note that the first argument is usually a verb, like “start”, “run”, or “compile”, and we can refer to that verb as the command. That last example is also specialized in that the word “nuget” appears before the verb “push”; this is an interesting hybrid command where the dotnet CLI tool is being used to run nuget commands.

In this case, the -k could also be written in a longer form as --api-key. Having both a short and a long form of optional argument is very common. The string that follows -k, “abcd12345”, is the option’s value.
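Most CLI frameworks register the short and long spellings together as a single option. A minimal sketch with Python’s argparse (the tool name and key value are invented):

```python
import argparse

parser = argparse.ArgumentParser(prog="sometool")
# One option with a short form (-k) and a long form (--api-key)
parser.add_argument("-k", "--api-key", dest="api_key")

short_form = parser.parse_args(["-k", "abcd12345"])
long_form = parser.parse_args(["--api-key", "abcd12345"])
# Both spellings populate the same value
```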

Some CLI’s have a bewildering array of commands and options. This is where you start to see the value of the GUI / IDE: at some point it is simpler to just click a few times than to remember how to type out a long command. Maven (> mvn), for example, has so many commands that I can’t find a single list containing them all. The DotNet Core tool (> dotnet) also has a lot of commands, each of which has its own options, but these at least are centrally documented.

To find more documentation, you can usually do a web search like “mvn cli”. Or, most tools have help available through a help command or an option:

$ sometool help
$ sometool -h
$ sometool --help
$ sometool /h

Just try those with the tool you are trying to learn about and see what happens.


Armed with this surface level knowledge, a new programmer or IT operations staff person will hopefully have enough background information to understand the basic operational practices for developing high quality software. And in understanding those basics from the perspective of the command line, the tasks and challenges of continuous integration will, perhaps, feel a bit less daunting. Or at the least, you’ll know a little more about what you don’t know, which is a good step toward learning.


The first image in this article is a screengrab from the film The Matrix. That came out when I was actively working as a Linux system administrator, and I was thrilled to recognize that Trinity was exploiting a real-world security vulnerability that I had, a few months before, fixed by upgrading the operating system kernel on several servers.

“Infrastructure as Code”, or IaC if you prefer TLAs, is the practice of configuring infrastructure components in text files instead of clicking around in a user interface. Last year I wrote a few detailed articles on IaC with TeamCity (1, 2, 3). Today I want to take a step back and briefly address the topic more broadly, particularly with respect to continuous integration (CI) and delivery (CD): the process of automating software compilation, testing, quality checks, packaging, deployment, and more.

Continuous Integration and Delivery Tools

To level set, this article is about improving the overall developer and organizational experience of building (integration) and deploying (delivery) software on platforms such as:

  • Your local workstation
  • Ansible
  • Azure DevOps
  • CircleCI
  • CruiseControl
  • GitHub Actions
  • GitLab
  • GoCD
  • Jenkins
  • Octopus Deploy
  • TeamCity
  • TravisCI

Personally, I have some experience with perhaps half of these. While I believe the techniques discussed are widely applicable, I do not know the details in all cases. Please look carefully at your toolkit to understand its advantages and limitations.



Many tools provide useful GUIs that allow you to, more-or-less quickly, set up a CI/CD process by pointing, clicking, and typing in a few small attributes like the project name. Until you get used to it, writing code instead of clicking might actually take longer. So why do it?

  • Repeatability - what happens when you need to transfer the instructions to another computer/server? Re-apply a text file vs. click around all over again.
  • Source control:
    • Keep build configuration along with the application source code.
    • Easily revert to a prior state.
    • Sharing is caring.
  • Peer review - much easier to review a text file (especially changes!) than look around in a GUI.
  • Run locally - might be nice to run the automated process locally before committing source code changes.
  • Testing - or the flip side, might be nice to test the automation process locally before putting it into the server.
  • Documentation - treat the code as documentation.

Programming Style

The code in an IaC project might not be executable (imperative); it may instead be declarative configuration that describes the desired state and lets the tool figure out how to achieve it. Examples of each:

  • Imperative: Bash, Python, PowerShell, Kotlin (a bit of a hybrid), etc.
  • Declarative: JSON, YAML, XML, ini, HCL, proprietary formats, etc.

Which style, and which type of file (Bash vs. PowerShell, JSON vs. XML), will largely depend on the application you are interacting with and your general objectives. Often you won’t get to choose between them. Many tasks can rely on declarative configuration, especially using YAML. But that is not well suited for tasks like setting up a remote service through API calls, which might require scripting in an imperative language.
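To make the distinction concrete, a toy sketch: a declarative description of desired state, and an imperative function that works out the steps to reach it (all names are invented for illustration):

```python
import json

# Declarative: state WHAT you want; no instructions on how to get there
desired = json.loads('{"service": "web", "port": 8080, "replicas": 2}')

# Imperative: spell out HOW to achieve the desired state, step by step
def plan(state: dict) -> list:
    return [
        f"start {state['service']} #{i} on port {state['port']}"
        for i in range(state["replicas"])
    ]
```

A tool like Terraform plays the role of `plan` here: it reads the declarative file and computes the imperative actions on your behalf.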


Every platform has its own approach. Following the simplest path, you can often get up-and-running with a build configuration in the tool very quickly — but that effort will not help you if you need to change tools or if you want to run the same commands on your development workstation.

How do you avoid vendor lock-in? “Universalize” — create a process that can be transported to any tool with ease. This likely means writing imperative scripts.

Image of NUnit configuration with caption "how do i run this locally?"

The screenshot above is from TeamCity, showing a build runner step for running NUnit on a project. A developer who does not know how to run NUnit at the command line will not be able to learn from this example. Furthermore, the configuration process in another tool may look completely different. Instead of using the “NUnit runner” in TeamCity, we can write a script and put it in the source code repository. Since NUnit is a .NET unit testing tool, and most .NET development is done on Windows systems, PowerShell is often a good choice for this sort of script. Configuring TeamCity (or Jenkins, etc.) to run that script should be trivial and easy to maintain.

Examples of IaC Tools and Processes

While this article is about continuous integration and delivery, it is worth noting the many different types of tools that support an IaC mindset. Here is a partial list of tools, with the configuration language in parentheses.

  • IIS: web.config, applicationHost.config (XML)
  • Containers: Dockerfiles, Docker Compose, Kubernetes (YAML, often calling imperative scripts)
  • VM Images: Packer (JSON or HCL)
  • Configuration: Ansible (YAML), AWS CloudFormation (JSON), Terraform (JSON or HCL), Puppet, Salt, Chef, and so many more.
  • Network Settings: firewalls, port configurations, proxy servers, etc. (wide variety of tools and styles)

Generally, these tools use declarative configuration scripts that are composed by hand rather than through a user interface, although there are notable exceptions (such as IIS’s inetmgr GUI).

At some point, vendor lock-in does happen: there are no tools (that I know of) for defining a job in a universal language that applies to all of the relevant platforms. Terraform might come the closest. There are also some tools that can define continuous integration processes generically and output scripts for configuring several different platforms. For better or worse, I tend to be leery of getting too far away from the application’s native configuration code, for fear that I’ll miss out on important nuances.

Real World Examples of Continuous Integration Scripts

PowerShell and .NET

Command line examples using Ed-Fi ODS AdminApp’s build.ps1 script:

$ ./build.ps1 build -BuildConfiguration release -Version "2.0.0" -BuildCounter 45
$ ./build.ps1 unittest
$ ./build.ps1 integrationtest
$ ./build.ps1 package -Version "2.0.0" -BuildCounter 45
$ ./build.ps1 push -NuGetApiKey $env:nuget_key

Any of those commands can easily be run in any build automation tool. What are these commands doing? The first command is for the build operation, and it calls function Invoke-Build:

function Invoke-Build {
    Write-Host "Building Version $Version" -ForegroundColor Cyan

    Invoke-Step { InitializeNuGet }
    Invoke-Step { Clean }
    Invoke-Step { Restore }
    Invoke-Step { AssemblyInfo }
    Invoke-Step { Compile }
}
Side-note: Invoke-Step, seen here, and Invoke-Execute, seen below, are custom functions that (a) create a domain-specific language for writing a build script, and (b) set up command timing and logging to the console for each operation.
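For readers who do not write PowerShell, here is a rough Python analog of the idea behind Invoke-Step: a wrapper that names, times, and logs each build operation. This is a sketch of the concept, not the project’s actual implementation:

```python
import time

def invoke_step(step) -> float:
    """Run one build operation, logging its name and how long it took."""
    start = time.monotonic()
    print(f"[{step.__name__}] starting")
    step()
    elapsed = time.monotonic() - start
    print(f"[{step.__name__}] finished in {elapsed:.2f}s")
    return elapsed

def clean():
    """Placeholder for a real build operation, e.g. deleting old artifacts."""
    pass

invoke_step(clean)
```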

This function in turn is calling a series of other functions. If you are a .NET developer, you’ll probably recognize these steps quite readily. Let’s peek into the last function call:

function Compile {
    Invoke-Execute {
        dotnet --info
        dotnet build $solutionRoot -c $Configuration --nologo --no-restore

        $outputPath = "$solutionRoot/EdFi.Ods.AdminApp.Web/publish"
        $project = "$solutionRoot/EdFi.Ods.AdminApp.Web/"
        dotnet publish $project -c $Configuration /p:EnvironmentName=Production -o $outputPath --no-build --nologo
    }
}
Now we see the key operations for compilation. In this specific case, the development team actually wanted to run two commands, and even before running them they wanted to capture log output showing the version of dotnet in use. Any developer can easily run the build script to execute the same sequence of actions, without having to remember the detailed command options. And any tool should be able to run a PowerShell script with ease.


Python

Command line examples using the LMS Toolkit’s build script:

$ python ./ install schoology-extractor
$ python ./ test schoology-extractor
$ python ./ coverage schoology-extractor
$ python ./ coverage:html schoology-extractor

As the project in question (LMS Toolkit) is a set of Python scripts, and because we wanted to use a scripting language that is well supported in both Windows and Linux, we decided to use Python instead of a shell script.

There is a helper function for instructing the Python interpreter to run a shell command:

from typing import List
import os
import subprocess
import sys


def _run_command(command: List[str], exit_immediately: bool = True):

    print('\033[95m' + " ".join(command) + '\033[0m')

    # Some system configurations on Windows-based CI servers have trouble
    # finding poetry, others do not. Explicitly calling "cmd /c" seems to help,
    # though unsure why.
    if == "nt":
        # All versions of Windows are "nt"
        command = ["cmd", "/c", *command]

    script_dir = os.path.dirname(os.path.realpath(sys.argv[0]))

    package_name = sys.argv[2]

    package_dir = os.path.join(script_dir, "..", "src", package_name)
    if not os.path.exists(package_dir):
        package_dir = os.path.join(script_dir, "..", "utils", package_name)

        if not os.path.exists(package_dir):
            raise RuntimeError(f"Cannot find package {package_name}")

    result =, cwd=package_dir)

    if exit_immediately:
        # Stop right away, reporting the command's own exit code
        sys.exit(result.returncode)

    if result.returncode != 0:
        # Caller asked to keep going after each command, but a failure
        # still needs to abort the script
        sys.exit(result.returncode)

And then we have the individual build operations, such as running unit tests with a code coverage report:

def _run_coverage():
    # Run the unit tests under coverage analysis, then print the report
    _run_command([
        "poetry", "run", "coverage", "run", "-m", "pytest",
    ], exit_immediately=False)
    _run_command([
        "poetry", "run", "coverage", "report",
    ], exit_immediately=False)

Reading this is a little strange at first, because the Python function expects the command as an array of strings rather than a single string. Hence the command poetry run coverage report becomes the array ["poetry", "run", "coverage", "report"]. But here’s the thing: once you write the script, anyone can run it repeatedly, on any system that has the necessary tools installed, without having to learn and remember the detailed syntax of the commands that are being executed under the hood.
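If you ever need to go the other way, from a single command string to that array form, Python’s standard library will split it using shell-style quoting rules:

```python
import shlex

# "poetry run coverage report" becomes ["poetry", "run", "coverage", "report"]
args = shlex.split("poetry run coverage report")

# Quoted arguments stay intact, which a naive str.split() would break:
tricky = shlex.split('git commit -m "fix the build"')
```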


Node.js

The JavaScript / TypeScript world provides npm, which is a great facility for running build operations.

Using Ed-Fi Project Buzz, you can run commands like the following:

$ npm install
$ npm run build
$ npm run test
$ npm run test:ci

The npm run XYZ commands are invoking scripts defined in the package.json file:

    "build": "nest build && copyfiles src/**/*.graphql dist",
    "test": "jest",
    "test:cov": "jest --coverage",
    "test:debug": "node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand",
    "test:ci": "SET CI=true && SET TEAMCITY_VERSION=1 && yarn test --testResultsProcessor=jest-teamcity-reporter --reporters=jest-junit",

Look at that debug command! Imagine having to type that in manually instead of just running npm run test:debug. Yuck!

Real World Examples of Tool Automation Scripts

That is, examples of scripts for automating the software that will run the integration and/or delivery process.

Octopus Deploy Operations

I can distinctly recall seeing advertisements for Octopus Deploy that castigated the use of YAML. While they have long supported JSON import and export of configurations, those JSON files were not very portable: they could only interoperate with the same Octopus version that created them.

Octopus has been coming around to deployment process as code, and appears to be embracing the philosophy extolled in this article. The referenced article doesn’t give examples of how to work with Octopus itself; instead it just tells you to use the .NET SDK, which is what we’ve done in the example below. Also of note: as of May 2021, the roadmap shows that Git integration is under development. This feature would, if I understand correctly, enable changes in the Octopus Deploy UI to be saved directly into Git source control. That’s a great step! I do not see any indication of what language will be used, or whether changes can be scripted and then picked up by Octopus Deploy instead of vice versa.

In the Ed-Fi ODS/API application there’s a PowerShell script that imperatively creates channels and releases, and deploys releases, on Octopus Deploy — all without having to click around in the user interface. The following example imports the module, runs a command to install the Octopus command line client (typically a one-time operation), and then creates a new deployment channel:

$ Import-Module octopus-deploy-management.psm1
$ Install-OctopusClient
$ $parms = @{
     Project="Ed-Fi ODS Shared Instance (SQL Server)"
  }
$ Invoke-OctoCreateChannel @parms

And here’s the body of the Invoke-OctoCreateChannel function, which is running the .NET SDK command line tool:

$params = @(
    "--project", $Project,
    "--channel", $Channel,
    "--server", $ServerBaseUrl,
    "--apiKey", $ApiKey,
    "--timeout", $Timeout
)

Write-Host -ForegroundColor Magenta "& dotnet-octo create-channel $params"
&$ToolsPath/dotnet-octo create-channel $params


TeamCity

TeamCity build configurations can be automated with either XML or Kotlin. The latter is my preferred approach, and I’ve talked about it in three prior blog posts:

  1. Getting Started with Infrastructure as Code in TeamCity
  2. Splitting TeamCity Kotlin Into Multiple Files
  3. Template Inheritance with TeamCity Kotlin

GitHub Actions

Intrinsically YAML-driven, the following example from the Ed-Fi LMS Toolkit demonstrates the use of the Python script that is described above. For brevity’s sake I’ve removed steps that prepare the container by setting up the right version of Python and performing some other optimization activities.

# SPDX-License-Identifier: Apache-2.0
# Licensed to the Ed-Fi Alliance under one or more agreements.
# The Ed-Fi Alliance licenses this file to you under the Apache License, Version 2.0.
# See the LICENSE and NOTICES files in the project root for more information.

name: Canvas Extractor - Publish

# (workflow trigger omitted for brevity)
jobs:
  publish:
    name: Run unit tests and publish
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout code
        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f

      - name: Install Python Dependencies
        run: python ./eng/ install canvas-extractor

      - name: Run Tests
        run: python ./eng/ test canvas-extractor

      - name: Publish
        env:
          TWINE_USERNAME: ${{ secrets.TWINE_USERNAME }}
          TWINE_PASSWORD: ${{ secrets.TWINE_PASSWORD }}
        run: python ./eng/ publish canvas-extractor


Taking the approach of Infrastructure-as-Code is all about shifting from a point-and-click mindset to a programming mindset, with benefits such as source control, peer review, and repeatability. Most continuous integration and delivery tools will support this paradigm. Many tools offer specialized commands that hide some of the complexity of running a process. While these can get you up-and-running quickly, over-utilization of such commands can lead to a tightly-coupled system, making it painful to move to another system. Scripted execution of integration and delivery steps (“universalizing”) can lead to more loosely-coupled systems while also enabling developers to run the same commands locally as would run on the CI/CD server.


Useful references for learning more about Infrastructure-as-code:

More generally, the use of IaC represents a “DevOps mindset”: developers thinking more about operations, and operations acting more like developers. To the benefit of both. Good DevOps references include:


All code samples shown here are from projects managed by the Ed-Fi Alliance and are used under the terms of the Apache License, version 2.0.

‘Ed-Fi is open’: thus the Ed-Fi Alliance announced its transition from a proprietary license to the open source Apache License, version 2.0, in April, 2020 (FAQ). Moving to an open source license is a clear commitment to transparency: anyone can see the source code, and the user community knows that their right to use that code can never be revoked. But this change is about more than just words: as the list of contributions below demonstrates, embracing open source is also about participation.

In this second year of #edfiopensource we are asking ourselves – and the community – what comes next? What can we do, together, to unlock further innovation and deliver more tools that make use of student data in new, practical, and transformative ways?


Elephant and dog

It looks like a beautiful morning in Austin, Texas, from the comfort of my feeder-facing position on the couch. Later in the afternoon I will get out and enjoy it on my afternoon walk with All Things Considered. As I write these lines a bully has been at work: a Yellow-Rumped Warbler (Myrtle) has been chasing the other birds away. Thankfully this greedy marauder was absent for most of the morning, as I read portions of Dr. J. Drew Lanham’s The Home Place, Memoirs of a Colored Man’s Love Affair with Nature.

Lanham, who also penned the harrowing-yet-humorous 9 Rules for the Black Birdwatcher, shares a compelling and beautifully-written story of family and place — at least, those are the key themes of the first third of the book that I’ve read thus far. Appropriate to this day of reflection and remembrance for one of our great American heroes, Dr. Martin Luther King, Jr, it is a story of forces and people who shaped this scientist, a Black man from the South who learned to love nature from the first-hand experiences of playing, watching, listening, chopping, and hoeing on the edge of the South Carolina piedmont.

Understanding that one man’s experience, views, and insights can never encapsulate those of an entire amorphous people, it is nevertheless critical that we all spend time getting to better know and understand the forces that shape our myriad cultures and the people who emerge from them. As we become more familiar with “others,” “they” become “we” and “we” become self-aware. Becoming self-aware, we recognize the truth of Dr. King’s famous saying:

“We are caught in an inescapable network of mutuality, tied in a single garment of destiny. Whatever affects one directly, affects all indirectly.”

Being aware of our mutuality, believing in it deeply, we can make better choices about how to live well with and for everyone on this planet, both those alive today and those yet to be born.

A passage of beautiful prose from pages 1-2 of The Home Place, to give you a taste of what is in store. After describing his ethno-racial heritage — primarily African American with an admixture of European, American Indian, Asian, “and Neanderthal tossed in” — he remarks,

“But that’s only a part of the whole: There is also the red of miry clay, plowed up and planted to pass a legacy forward. There is the brown of spring floods rushing over a Savannah River shoal. There is the gold of ripening tobacco drying in the heat of summer’s last breath. There are endless rows of cotton’s cloudy white. My plumage is a kaleidoscopic rainbow of an eternal hope and the deepest blue of despair and darkness. All of these hues are me; I am, in the deepest sense, colored.”

Birds seen at the “backyard” feeder this morning while reading. The photos are a few weeks old, but all of these species were observed today. © Tania Homayoun, some rights reserved under Creative Commons:

Black-Crested Titmouse
Black-Crested Titmouse

Carolina Wren
Carolina Wren

Hermit Thrush
Hermit Thrush

Orange-crowned Warbler
Orange-crowned Warbler

Ruby-crowned Kinglet
Ruby-crowned Kinglet

Yellow-rumped Warbler
Yellow-rumped Warbler

Also seen: Northern Cardinal, American Robin, Bewick’s Wren.

Are algorithms doomed to be racist and harmful, or is there a legitimate role for them in a just and equitable society?

Algorithms have been causing disproportionate harm to low- and middle-income individuals, especially people of color, since long before this current age of machine learning and artificial intelligence. Two cases in point: neighborhood redlining and credit scores. While residential redlining was a deliberately racist anti-black practice [1], FICO-based credit scoring does not appear to have been created from a racist motive. By amplifying and codifying existing inequities, however, the credit score can easily become another tool for racial oppression [2].

Still, with appropriate measures in place, and a bit of pragmatic optimism, perhaps we can find ways to achieve the scalability/impartiality goals of algorithms while upholding true equity and justice.

equality, equity, justice graphic
Justice: changing conditions, removing the barriers. Could not find the original source to credit, so I drew my own version of this thought-provoking graphic. I leave the sport being played behind the fence up to your imagination.

Fresh out of college I served as an AmeriCorps*VISTA at a non-profit dedicated to supporting small business development for women and minorities. There I began learning about the detrimental effects, deliberate and insidious, of so many modern policies around finance and housing. Later when I became a full-time employee, I was given a mission: come up with a rubric - an algorithm - for pre-qualifying loan applicants. The organization only had so much money to lend, and to remain solvent it would need to ensure that most loans would be repaid in full. Could we come up with a scoring mechanism that would help focus our attention on the most promising opportunities, bring a measure of objectivity and accountability, and yet remain true to our mission?

The organization was founded and, at that time, run by Jeannette Peten, an African American woman with a background in business finance and a passion for helping small businesses to succeed. Where credit scores attempt to judge credit worthiness through a complex calculation based on repayment histories, she asked me to take a broader approach that was dubbed the Four C’s of lending: Cash Flow, Character, Credit, and Collateral. Thus: what manner of calculation, utilizing these four concepts, would yield a useful prediction of a potential borrower’s capacity and capability to thrive and repay the loan?

Roughly following a knowledge engineering [3] approach, we brainstormed simple scoring systems for each of the C’s, with Character counting disproportionately relative to the others. To avoid snap judgment and bias, Character especially had to be treated through careful inference rather than subjective opinion, and thus was drawn from multiple sources including public records, references, site visits, business training and experience, and more.
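To make the shape of such a rubric concrete, here is a minimal sketch in Kotlin. Every number, weight, and name here is an illustrative assumption of mine, not the organization's actual formula; the only details taken from the story are that Character counted disproportionately and that community-impact bonus points could lift a borderline applicant.

```kotlin
// Illustrative "Four C's" rubric (assumed numbers): each criterion is
// scored 0-10 by a human reviewer, Character is double-weighted, and
// bonus points reward high community impact.
data class Applicant(
    val cashFlow: Int,   // 0-10
    val character: Int,  // 0-10, inferred from records, references, site visits
    val credit: Int,     // 0-10
    val collateral: Int, // 0-10
    val bonus: Int = 0   // community-impact bonus points
)

fun score(a: Applicant): Int =
    a.cashFlow + 2 * a.character + a.credit + a.collateral + a.bonus

fun prequalifies(a: Applicant, threshold: Int = 30): Boolean =
    score(a) >= threshold

fun main() {
    val borderline = Applicant(cashFlow = 5, character = 8, credit = 4, collateral = 5, bonus = 3)
    println(score(borderline))        // 33
    println(prequalifies(borderline)) // true
}
```

Adjusting such a rubric then amounts to twiddling the weights and threshold and re-scoring known borrowers.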

Then I applied the scores to existing borrowers for validation: would the successful borrowers have made the grade? No? Tweak the system and try again. And again. And when a handful of great businesses in the target demographic were still on the borderline, my mentor identified additional “bonus points” that could be given for high community impact. I do not recall any formal measurement of model fitness / goodness beyond the simple question: does this model include more of our pool of successful loan applicants than all other models? Admittedly this was an eyeball test, not a rigorous and statistically valid experiment.

create model, test, tweak, validate, evaluate

Machine learning is the automated process of creating models, testing them against a sample, and seeing which yields the best predictions. Then (if you are doing it right) cross-validating [4] the result against a held-out sample to make sure the model did not over-fit the training data. In a simplistic fashion, I was following the historical antecedent of machine learning: perhaps we can call it Human Learning (HL). As a Human Learning exercise, I was able to twiddle the knobs on the formula, adjusting in a manner easily explained and easily defended to another human being. Additionally, as an engineer whose goal was justice, rather than blind equality, it was a simple matter to ensure that the training set represented a broad array of borrowers who fell into the target demographic.
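As a generic illustration of the hold-out idea (not code from the lending project), the bookkeeping behind k-fold cross-validation fits in a few lines of Kotlin:

```kotlin
// Generic k-fold cross-validation split: every element is held out
// exactly once, while the remaining elements form the training set.
fun <T> kFoldSplits(data: List<T>, k: Int): List<Pair<List<T>, List<T>>> =
    (0 until k).map { fold ->
        val holdOut = data.filterIndexed { i, _ -> i % k == fold }
        val train = data.filterIndexed { i, _ -> i % k != fold }
        Pair(train, holdOut)
    }

fun main() {
    val splits = kFoldSplits((1..10).toList(), 5)
    println(splits.size)      // 5
    println(splits[0].second) // [1, 6]
}
```

A model is fit on each training set and evaluated on the corresponding held-out set, so over-fitting to any one sample shows up as poor held-out performance.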

In the end, the resulting algorithm did not make the lending decisions for the organization, and it required human assessment to pull together the various criteria and assign individual scores. What it did accomplish was to help us winnow through the large number of applicants, identifying those who would receive greater scrutiny and human attention.

Nearly twenty years ago, we had neither the foresight nor the resources to perform a long-range study evaluating the true effectiveness. Nevertheless, it taught this software engineer to work harder: don’t use the easy formula, make sure the baseline data are meaningful and valid for the problem, listen to domain experts, and most of all treat equity and justice as key features to be baked in rather than bolted on.

blurred image of the scoring spreadsheet

Algorithms are increasingly machine-automated and increasingly impacting our lives, all too often for the worse. The MIT Technology Review summarizes the current state of affairs thus:

“Algorithms now decide which children enter foster care, which patients receive medical care, which families get access to stable housing. Those of us with means can pass our lives unaware of any of this. But for low-income individuals, the rapid growth and adoption of automated decision-making systems has created a hidden web of interlocking traps.”[5]

On our current path, “color-blind” machine learning will continue tightening these nets that entrap those “without means.” But it does not have to be that way. With forethought, care, and a bit of the human touch, I believe we can work our way out of this mess to the benefit of all people. But it’s gonna take a lot of work.


  1. The History of Redlining (ThoughtCo), Redlining in America: How a history of housing discrimination endures (Thomson Reuters Foundation). Potential modern version: Redlined by Algorithm (Dissent Magazine), Modern-day redlining: How banks block people of color from homeownership (Chicago Tribune).
  2. Credit scores in America perpetuate racial injustice. Here’s how (The Guardian). Counterpoint from FICO: Do credit scores have a disparate impact on racial minorities?. Insurance is another arena where “color-blind” algorithms can cause real harm: Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination (Wired).
  3. Knowledge Engineering (ScienceDirect).
  4. What is Cross Validation in Machine learning? Types of Cross Validation (Great Learning Blog)
  5. The coming war on the hidden algorithms that trap people in poverty (MIT Technology Review)

Advances in the availability and breadth of data over the past few decades have enabled the rapid and unregulated deployment of statistical algorithms that aim to predict and thereby influence the course of human behavior. Most are designed to promote the corporate bottom line, not the welfare of the people. Those that aim to promote the common good run the danger of straying into authoritarian suppression of freedoms. Regardless of intention, these algorithms often reinforce existing social inequities or present a double-edged sword, with potential for positive use weighed against potential for misuse.

Coded Bias film poster

The films Coded Bias (now in virtual theaters) and The Social Dilemma (Netflix) probe these issues in detail through powerful documentary filmmaking and storytelling. Where The Social Dilemma focuses on the dangers of corporate and extremist manipulation through social media, Coded Bias reveals the biases inherent more broadly in “artificial intelligence” (AI) / machine learning (ML) systems. If you must choose just one, I would watch Coded Bias, both for its incisive reveal of injustices large and small and for its inspiring depiction of those working to bring these injustices to light.

Several books explore these topics in depth; indeed, some of the authors are among those featured in these films. While I have yet to read the first three, they seem well regarded and worth mentioning:

In Race After Technology (2019), Ruha Benjamin pulls the strands of algorithmic injustice together in a broader critique of technology’s impact on race, describing what she calls the New Jim Code: “The employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era.” (p10)

The New Jim Code thesis is a powerful critique of technology that simultaneously fails to see people of color (facial recognition, motion detection) and pins them in a spotlight of law enforcement surveillance and tracking. By explicit extension it is also a critique of the societies that tolerate, sponsor, and exploit such technologies, even while it acknowledges that many of the problems emerge from negligence rather than intention. From the outset, Benjamin gives us a useful prescription for working our way out of this mess, exhorting us to “move slower and empower people.” (p16).

After detailing many manifestations of technological inequality and injustice, she urges technologists like me to “optimize for justice and equity” as we “come to terms with the fact that all data are necessarily partial and potentially biased” (p126). The book concludes with further explorations on how justice might (and might not) be achieved while re-imagining technology.

Benjamin’s book was also my introduction to the Algorithmic Justice League, an advocacy organization that “combine(s) art and research to illuminate the implications and harms of AI.” The AJL is featured prominently in Coded Bias, and their website provides many resources for exploring this topic in more detail.

These works send a clear message that data and the algorithms that exploit them are causing and will continue to cause harm unless reined in and reconceptualized. This is the techno-bureaucratic side of #BlackLivesMatter, calling for #InclusiveTech and #ResponsibleAI. As with so many other issues of equity and justice, we all need to stand up, take notice, and act.

  1. Think twice before choosing to use AI/ML. Then get an outside review.
  2. Vet your data sets carefully and clearly describe their origin and use for posterity and review.
  3. Ask yourself: what are the potential impacts of the technology I am developing on historically oppressed groups?
  4. Cross-validate your assumptions, data, and results with a broad and representative audience.
  5. Keep listening. Keep learning.

Slack - choosing skin tone
Something positive: choosing an emoji skin tone

Last month my manager asked me about changing our naming convention for the primary “source of truth” in source code management: from “master” to… well, anything but “master.” I admit to initial hesitancy. I needed to think about it. After all, it seems like the name derives from the multimedia concept of a “master copy.” It’s not like the terribly-named “master-slave” software and hardware patterns. Or is it?

From 1996 to 2001 I spent nearly countless hours in two buildings on the University of Texas campus: RLM Hall and an un-named annex to the Engineering Science Building. Soon neither will exist: the one renamed, the other demolished. Reflecting on this I feel a small sense of empathy, but no sympathy, for others whose cherished institutions are being renamed. It was well past time to change the one’s name, and the other had outlived its usefulness.

This business of identifying that which needs to change, and then quickly acting on it, has gathered incredible momentum at last in 2020, as the people of the United States grapple with the double pandemic of a ruthless virus and endemic racism. Collectively we have barely moved the needle on either front: but there is movement.

Symbols must be re-evaluated and removed when they are found wanting, whether they are statues or names. Robert Lee Moore Hall honored a man who operated at the pinnacle of his profession and yet was, apparently, an outright segregationist. Not that any of us knew that. As an undergraduate and graduate student in Physics at UT, 52% of my classes were held in that building. I studied in its library and in the undergraduate physics lounge. I split my time, working part- and full-time, between RLM and that unnamed building, in the High Energy Physics Lab. I remember more misery than joy there, but mostly extreme stress. There is no love lost for that frankly odious brick hulk or its even more odious name, yet there is a feeling of losing something personal with the change of name that was finally accepted by the University a month ago.

And that just goes to show the power of a name, of a symbol. All the more reason to change it. Time for an attitude adjustment.

The name has been found wanting and it must go. Just like that other little building, whose utility in housing twin three-story particle accelerators had long run out. It made way for a new building, better serving the needs of the students. And the Physics, Math, and Astronomy Building now takes its place on campus as, I hope, a more welcoming place for diverse groups of students, faculty, and staff to continue advancing the boundaries of science.

And that’s exactly what we need in software development: a welcoming place. Detaching from the name “RLM” was quite easy. But I had to think through the source code problem for a minute or two, rather than just rely strictly on GitHub’s judgment. My conclusion: if it bothers someone, then do something about it. And then I found one person who acknowledged: yes, it is disturbing being a Black programmer and confronting this loaded word on a regular basis (sadly I didn’t hang onto the URL and can’t find the blog post right now). OK, time to change.

I started with the code repository backing this blog. Took me all of… perhaps a minute to create a main branch from the old master, change the default branch to main, and delete master. If I had been working with a code base associated with a continuous integration environment it might have been a few more minutes, but even then it is so easy, as I have already found with the first few conversions at work. So much easier than having to print new business cards and letterhead for all the faculty in the Physics, Math, and Astronomy Building, assuming they still use such things.

A simple attitude adjustment is all it took: no sympathy for that which is lost, for the way we’ve always done things. Instead, a quick and painless removal of a useless reminder of a cruel past.

Steps taken to change this blog’s source code repository:

  1. Create the main branch
    Screenshot showing creation of main branch
  2. Switch the default from master to main
    Screenshot showing change of default branch
  3. Change the branch used by GitHub Pages
    Screenshot showing change to the GitHub Pages branch
  4. Finally, delete the old branch
    Screenshot showing deletion of old branch

This summer, one of the development teams at the Ed-Fi Alliance has been hard at work building Project Buzz: “When school shutdowns went into effect across the country as a result of COVID-19, much of the information teachers need to support students in the new online-school model had become fragmented across multiple surveys and the Student Information System.” (Fix-It-Fridays Delivers Project Buzz, A Mobile App to Help Teachers Prepare for Back-to-School).

As project architect, my role has been one of support for the development team, guiding technology toolkit choices and supporting downstream build and deployment operations. The team agreed to develop the applications in TypeScript on both the front- and back-ends. My next challenge: rapidly create TeamCity build configurations for all components using Kotlin code.


At this time, there are four components to the software stack: database, API, GUI, and ETL. The project is available under the Apache License, version 2, on GitHub. The build configurations for these four are generally very similar, although there are some key differences. This gave me a great opportunity to explore the power of creating abstract base classes in TeamCity for sharing baseline settings among several templates and build configurations.


  1. Minimize duplication
  2. Drive configurations through scripts that also operate at the command line, so that developers can easily execute the same steps as TeamCity.
  3. The above item implies use of script tasks. When those scripts emit an error message, that message should trigger the entire build to fail.
  4. All build configurations should check for sufficient disk space before running.
  5. All build configurations should use the same Swabra settings.
  6. All build configurations will need access to the VCS root, and the Kotlin files will be in the same repository as the rest of the source code.
  7. All projects will need build steps for pull requests and for the default branch.
    • Pull requests should run build and test activities
    • Default branch should run build, test, and package activities, and then trigger deployment.
  8. Both branch and pull request triggers should operate only when the given component is modified. For example, a pull request for the database project should not trigger the build configurations for the API, GUI, or ETL components.
  9. Pull requests should publish information back to GitHub so that the reviewer will know the status of the build operation.


Class diagram


The most general settings are applied in class BuildBaseClass, covering requirements 3, 4, 5, 6, and the commonalities in the two branches of requirement 7.

Structure of BuildBaseClass

Note that only the required imports are present. The class is opened for inheritance via the open keyword in its signature.

package _self.templates

import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.freeDiskSpace
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.swabra
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.powerShell

open class BuildBaseClass : Template({
    // contents are split up and discussed below
})

Requirement 3: Fail on Error Message

It took me a surprisingly long time to discover this. PowerShell build steps in TeamCity behave a little differently than one might expect. You can set them to format StdErr as an error message, and it is natural to assume that an error message will cause the build to fail. Not true. That setting helps but, as will be seen below, is not sufficient on its own.

open class BuildBaseClass : Template({
    // ...

    option("shouldFailBuildOnAnyErrorMessage", "true")

    // ...
})

Requirements 4 and 5: Free Disk Space and Swabra

Apply two build features: check for minimum available disk space, and use the Swabra build cleaner.

open class BuildBaseClass : Template({
    // ...

    features {
        freeDiskSpace {
            id = ""
            requiredSpace = "%build.feature.freeDiskSpace%"
            failBuild = true
        }
        // Default setting is to clean before next build
        swabra {
        }
    }

    // ...
})

Requirement 6: VCS Root

Use the special VCS root object, DslContext.settingsRoot. Checkout rules are applied via parameter so that each component’s build type will be able to specify a rule for checking out only that component’s directory, thus preventing triggering on updates to other components.

open class BuildBaseClass : Template({
    // ...

    vcs {
        root(DslContext.settingsRoot, "%vcs.checkout.rules%")
    }

    // ...
})

Requirement 7: Shared Build Steps

The database project, which deploys tables into a PostgreSQL database, does not have any tests. Therefore this base class contains only the following build steps, without a testing step:

  1. Install and Use Correct Version of Node.js
  2. Install Packages
  3. Build

That first step supports TeamCity agents that need to use different versions of Node.js for different projects, using nvm for Windows. The second executes yarn install and the third executes yarn build. Because the TeamCity build agents are on Windows, all steps are executed using PowerShell.

open class BuildBaseClass : Template({
    // ...

    steps {
        powerShell {
            name = "Install and Use Correct Version of Node.js"
            formatStderrAsError = true
            scriptMode = script {
                content = """
                    nvm install %node.version%
                    nvm use %node.version%
                    Start-Sleep -Seconds 1
                """.trimIndent()
            }
        }
        powerShell {
            name = "Install Packages"
            workingDir = ""
            formatStderrAsError = true
            scriptMode = script {
                content = """
                    yarn install
                """.trimIndent()
            }
        }
        powerShell {
            name = "Build"
            workingDir = ""
            formatStderrAsError = true
            scriptMode = script {
                content = """
                    yarn build
                """.trimIndent()
            }
        }
    }

    // ...
})


Structure of BuildOnlyPullRequestTemplate

Once again, the structure below contains only the required imports for this class. Carefully note the brace style: in the abstract class, the “contents” were all inside braces passed as an argument to the Template constructor. In this concrete class, the “contents” live inside an init block in the class body rather than being passed to the base-class constructor. You can learn more about this in the Kotlin: Classes and Inheritance documentation.

This class inherits directly from BuildBaseClass and does not need to apply any additional build steps.

package _self.templates

import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.commitStatusPublisher
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.PullRequests
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.pullRequests
import jetbrains.buildServer.configs.kotlin.v2019_2.triggers.VcsTrigger

object BuildOnlyPullRequestTemplate : BuildBaseClass() {
    init {
        name = "Build Only Pull Request Node.js Template"
        id = RelativeId("BuildOnlyPullRequestTemplate")

        // Remainder of the contents are split up and discussed below
    }
}

Requirement 8: Pull Request Triggering

Here I am attempting to use the Pull Request build feature. I have had trouble getting it to work as advertised. This configuration needs further tweaking to ensure that only repository members’ pull requests automatically trigger a build (I do not want random people submitting random code in a pull request, which might execute malicious statements on my TeamCity agent). I need to try changing that branch filter to +:pull/*.

object BuildOnlyPullRequestTemplate : BuildBaseClass() {
    init {

        // ...

        triggers {
            vcs {
                id = "vcsTrigger"
                quietPeriodMode = VcsTrigger.QuietPeriodMode.USE_CUSTOM
                quietPeriod = 120
                // This allows triggering on "anything" and then removes
                // triggering on the default branch and in feature branches,
                // thus leaving only the pull requests.
                branchFilter = """
                    +:*
                    -:<default>
                    -:refs/heads/*
                """.trimIndent()
            }
        }

        features {
            pullRequests {
                vcsRootExtId = "${DslContext.settingsRoot.id}"
                provider = github {
                    authType = token {
                        token = "%github.accessToken%"
                    }
                    filterTargetBranch = "+:<default>"
                    filterAuthorRole = PullRequests.GitHubRoleFilter.MEMBER_OR_COLLABORATOR
                }
            }
        }

        // ...
    }
}


Requirement 9: Publishing Build Status

This uses the Commit Status Publisher. Note that the authType is personalToken here, whereas it was just token above. I have no idea why this is different ¯\_(ツ)_/¯.

object BuildOnlyPullRequestTemplate : BuildBaseClass() {
    init {

        // ...

        features {
            commitStatusPublisher {
                publisher = github {
                    githubUrl = ""
                    authType = personalToken {
                        token = "%github.accessToken%"
                    }
                }
            }
        }

        // ...
    }
}



Unlike the class described above, this one needs to run automated tests. Unfortunately, it demonstrates my (current) inability to avoid some degree of duplication. Perhaps in a future iteration I’ll rethink the inheritance tree and find a solution. For now, it duplicates features shown above, with the only difference being the base class: it inherits from BuildAndTestBaseClass, shown next, instead of BuildBaseClass.


This simple class inherits from BuildBaseClass and adds two steps: run tests using the yarn test:ci command and run quality inspections using command yarn lint:ci.

package _self.templates

import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.freeDiskSpace
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.swabra
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.powerShell

open class BuildAndTestBaseClass : BuildBaseClass() {
    init {
        steps {
            powerShell {
                name = "Test"
                workingDir = ""
                formatStderrAsError = true
                scriptMode = script {
                    content = """
                        yarn test:ci
                    """.trimIndent()
                }
            }
            powerShell {
                name = "Style Check"
                workingDir = ""
                formatStderrAsError = true
                scriptMode = script {
                    content = """
                        yarn lint:ci
                    """.trimIndent()
                }
            }
        }
    }
}


Based on BuildAndTestBaseClass, this class adds a build step for packaging, an artifact rule, and a trigger. Although these are TypeScript packages, the build process uses NuGet packaging in order to take advantage of other tools (NuGet package feed, Octopus Deploy). The packaging step is orchestrated with a PowerShell script. The configuration can be used for any branch, but it is only triggered by the default branch.

package _self.templates

import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.freeDiskSpace
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.powerShell
import jetbrains.buildServer.configs.kotlin.v2019_2.triggers.VcsTrigger
import jetbrains.buildServer.configs.kotlin.v2019_2.triggers.vcs

object BuildAndTestTemplate : BuildAndTestBaseClass() {
    init {
        name = "Build and Test Node.js Template"
        id = RelativeId("BuildAndTestTemplate")

        artifactRules = "*.nupkg"

        steps {
            // Additional packaging step to augment the template build
            powerShell {
                name = "Package"
                workingDir = ""
                formatStderrAsError = true
                scriptMode = script {
                    content = """
                        .\build-package.ps1 -BuildCounter %build.counter%
                    """.trimIndent()
                }
            }
        }

        triggers {
            vcs {
                id = "vcsTrigger"
                quietPeriodMode = VcsTrigger.QuietPeriodMode.USE_CUSTOM
                quietPeriod = 120
                branchFilter = "+:<default>"
            }
        }
    }
}

Component-Specific Projects

Bringing this all together, each component is a stand-alone project containing at least two build types: Branch and Pull Request. These respectively utilize the appropriate template. The parameters are defined on the sub-project, making the build types extremely small:


package api.buildTypes

import jetbrains.buildServer.configs.kotlin.v2019_2.*

object BranchAPIBuild : BuildType ({
    name = "Branch Build and Test"
})


package api.buildTypes

import jetbrains.buildServer.configs.kotlin.v2019_2.*

object PullRequestAPIBuild : BuildType ({
    name = "Pull Request Build and Test"
})
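The build type bodies above are abbreviated; each one presumably attaches its template. In the TeamCity Kotlin DSL that attachment looks roughly like the following sketch. The `templates()` call is standard DSL, but the specific template reference shown here is an assumption based on the names used earlier:

```kotlin
package api.buildTypes

import jetbrains.buildServer.configs.kotlin.v2019_2.*

object BranchAPIBuild : BuildType ({
    name = "Branch Build and Test"
    // Assumed: attach the shared merged-branch template defined in
    // _self.templates above
    templates(_self.templates.BuildAndTestTemplate)
})
```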

API Project

Of the parameters shown below, only the project directory and vcs.checkout.rules will be familiar from the text above. The Octopus parameters are used in an additional Octopus Deploy build configuration, which is not material to the current demonstration.

package api

import jetbrains.buildServer.configs.kotlin.v2019_2.*

object APIProject : Project({
    name = "API"
    description = "Buzz API"


    params {
        param("", "./EdFi.Buzz.Api")
        param("octopus.release.version", "<placeholder value>")
        param("octopus.release.project", "Buzz API")
        param("", "Projects-111")
        param("vcs.checkout.rules", """
            +:.teamcity => .teamcity
        """)
    }
})


TeamCity templates have been developed in Kotlin that greatly reduce code duplication and ensure that certain important features are used by all build configurations. Unfortunately, they did not completely eliminate duplication. Through class inheritance, merged-branch and pull request build configurations are able to share common settings; however, parallel templates with some duplication were still required.
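The inheritance pattern can be sketched in plain Kotlin. The classes below are simplified stand-ins, not the actual TeamCity DSL types: shared settings live in the base class's init block, and Kotlin runs that block before the subclass's own init block, so each template only appends what differs.

```kotlin
// Simplified stand-ins for the TeamCity DSL types, to show the
// inheritance pattern only; the real templates extend classes from
// jetbrains.buildServer.configs.kotlin instead.
open class FakeBuildType {
    var name: String = ""
    val steps = mutableListOf<String>()
}

// Shared build steps are registered once, in the base class's init block.
open class BuildAndTestBase : FakeBuildType() {
    init {
        steps.add("Install")
        steps.add("Test")
        steps.add("Style Check")
    }
}

// A concrete template inherits the shared steps; its own init block
// runs after the base init, adding only the packaging extra.
class BuildAndTestPackageTemplate : BuildAndTestBase() {
    init {
        name = "Build and Test Node.js Template"
        steps.add("Package")
    }
}

fun main() {
    val template = BuildAndTestPackageTemplate()
    // The subclass sees the shared steps followed by its own addition.
    println(template.steps)  // [Install, Test, Style Check, Package]
}
```

This mirrors how the merged-branch and pull request templates share common settings while each adding their own steps.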

In the future, perhaps I’ll explore handling this through an alternative approach using feature wrappers instead of, or in addition to, templates. My initial impression of these wrapper functions is that they obscure a build type’s behavior: in the examples above, a Template class reveals its base class, signaling immediately that there is more to the Template. In the feature wrapper approach, one only discovers the additional functionality when reading the project file. It will be interesting one day to see whether the two approaches can be combined, moving the wrapper inside the template or base class instead of applying it externally.
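For contrast, the feature-wrapper approach can be sketched the same way, again with hypothetical stand-in types rather than the real TeamCity API: an ordinary function takes a build type, decorates it, and returns it, so the decoration is applied externally where the build type is registered.

```kotlin
// Hypothetical stand-in; the real wrappers operate on the TeamCity
// DSL's BuildType.
class SimpleBuild(var name: String = "") {
    val features = mutableListOf<String>()
}

// The wrapper adds functionality from the outside and returns the same
// object, so it can wrap a build type at its registration site.
fun withFreeDiskSpaceCheck(build: SimpleBuild): SimpleBuild {
    build.features.add("freeDiskSpace")
    return build
}

fun main() {
    // In a project file this reads as withFreeDiskSpaceCheck(SomeBuild):
    // the extra behavior is only visible at this call site, not on the
    // build type itself, which is the indirection noted above.
    val build = withFreeDiskSpaceCheck(SimpleBuild("Pull Request Build and Test"))
    println(build.features)  // [freeDiskSpace]
}
```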


All code samples above are Copyright © 2020, Ed-Fi Alliance, LLC and contributors. These samples are re-used under the terms of the Apache License, Version 2.0.

Previous Articles on TeamCity and Kotlin

While the Ed-Fi Alliance has made investments to improve the installation processes for its tools, installation is still a time-consuming task: it is easy to get wrong, you must have the right runtime libraries, and it is problematic to run multiple versions on the same server.

What if end-users could quickly start up and switch between ODS/API versions, testing out vendor integrations and new APIs with little development cost and no need to manage runtime dependencies? Docker containers can do that for us.


Potential Docker Architecture