
Exploring CDK for Terraform for .NET

· 4 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Overview

Both AWS CDK and Terraform aim to solve a similar problem: alleviating infrastructure management challenges with code. CDK supports several general-purpose languages, including C#, Python, and TypeScript, while Terraform uses its own configuration language, HCL. While CDK can only create AWS resources, Terraform supports virtually every cloud provider, granting the ability to write code that deploys to multiple public clouds at once. Last year, HashiCorp and AWS announced a project called CDK for Terraform, aiming to grant the best of both worlds (i.e., support for general-purpose languages, multi-cloud deployments, etc.).

Nuances and Limitations

In addition to the programming language features of AWS CDK, there's a construct library with three levels:

  • L1 (level one) constructs are representations of CloudFormation resources
  • L2 (level two) constructs provide defaults and boilerplate to simplify the code
  • Patterns are the highest level and wrap many resources configured to work together in a single construct (e.g., a Lambda-backed RESTful API)

While CDK for Terraform utilizes the AWS construct programming model, it does not share the same construct library as CDK. It's important to distinguish that CDK for Terraform stacks only support Terraform providers.

GitHub Repository

You can find a complete working example here.

Installing the Tools and Scaffolding the .NET Solution

The following command line tools are required for getting started:

  • Terraform (0.12+)
  • Node.js (12.16+)
  • AWS CLI (specifically the credentials)

First, install the cdktf CLI:

npm install -g cdktf-cli
cdktf --version
# 0.3

After that, create the .NET project using the cdktf CLI:

mkdir resources
cd resources
# the --local flag refers to local Terraform state management
cdktf init --template=csharp --local

This action creates several files, including a cdktf.json file. Inside this configuration file, specify the AWS provider.

resources/cdktf.json
{
  "language": "csharp",
  "app": "dotnet run -p MyTerraformStack.csproj",
  "terraformProviders": ["aws@~> 2.0"],
  "terraformModules": [],
  "context": {
    "excludeStackIdFromLogicalIds": "true",
    "allowSepCharsInLogicalIds": "true"
  }
}

After adding the provider configuration, generate the provider objects using the following command:

cdktf get

The generated objects are stored in the newly created .gen/ folder. Add a reference to the generated aws.csproj project in the CDK project's .csproj file:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="HashiCorp.Cdktf" Version="0.3.0" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include=".gen\aws\aws.csproj" />
  </ItemGroup>

</Project>

Lastly, initialize the AwsProvider object in the Main.cs file.

resources/Main.cs
using System;
using Constructs;
using HashiCorp.Cdktf;
// AWS provider objects generated by the cdktf get command
using aws;

namespace MyCompany.MyApp
{
    class MyApp : TerraformStack
    {
        public MyApp(Construct scope, string id) : base(scope, id)
        {
            // initialize the AWS provider
            // located in the .gen/ folder
            new AwsProvider(this, "aws", new AwsProviderConfig {
                Region = "us-east-1"
            });
        }

        public static void Main(string[] args)
        {
            App app = new App();
            new MyApp(app, "resources");
            app.Synth();
            Console.WriteLine("App synth complete");
        }
    }
}

Adding Resources

As noted above, the resources will be created using the Terraform AWS provider. There are corresponding C# classes for each of the AWS resources specified by the provider. While writing code, the AWS provider documentation in conjunction with your IDE's autocomplete functionality is a powerful way to navigate the available resources. For this example, the code looks up the latest AMI for Ubuntu 20.04 and uses it to create an EC2 Instance. Below the AwsProvider constructor method call in the MyApp constructor method, add a data source and instance like so:

resources/Main.cs
// initialize the AWS provider
// located in the .gen/ folder
new AwsProvider(this, "aws", new AwsProviderConfig {
    Region = "us-east-1"
});

// https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami
DataAwsAmi dataAwsAmi = new DataAwsAmi(this, "aws_ami_ubuntu", new DataAwsAmiConfig()
{
    MostRecent = true,
    Filter = new []
    {
        new DataAwsAmiFilter()
        {
            Name = "name",
            Values = new [] { "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*" }
        },
        new DataAwsAmiFilter()
        {
            Name = "virtualization-type",
            Values = new [] { "hvm" }
        },
    },
    Owners = new [] { "099720109477" }
});

// https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
Instance ec2Instance = new Instance(this, "aws_ec2_instance", new InstanceConfig()
{
    Ami = dataAwsAmi.ImageId,
    InstanceType = "t3.micro",
});

Note how this behaves the same as the corresponding Terraform HCL while adding the power of a general-purpose programming language.

Deploying and Managing State

Once finished adding the data source and resource, the project can be built and deployed assuming that the AWS credentials are available (i.e., aws configure has been run).

dotnet build
cdktf deploy
# when ready
cdktf destroy

Since the --local flag was used during initialization, the state resides locally in terraform.resources.tfstate.

.NET 5 Docker Lambda Function with API Gateway and Self-Mutating Pipeline Using CDK

· 9 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Deciding on Which Technology to Use

While infrastructure as code (IaC) has existed within the AWS ecosystem since 2011, adoption has exploded recently due to the ability to manage large amounts of infrastructure at scale and to standardize design across an organization. There are almost too many options: CloudFormation (CFN), CDK, and Terraform for IaC, plus Serverless Application Model (SAM) and Serverless Framework for development. This article from A Cloud Guru quickly sums up the pros and cons of each option. I chose this particular stack for a few key reasons:

  • CDK allows the infrastructure and the CI/CD pipeline to be described as C# instead of YAML, JSON, or HCL
  • CDK provides more robust logic than CloudFormation's intrinsic functions and better modularity while still being a native AWS offering
  • Docker ensures that the Lambda functions run consistently across local development, builds, and production environments and simplifies dependency management
  • CDK Pipelines offer a higher-level construct with much less configuration than raw CodePipeline and streamline the management of multiple environments

GitHub Repository

You can find a complete working example here.

Initializing the Project

Ensure that .NET 5 and the latest version of CDK are installed. To create a solution skeleton, run these commands in the root directory:

# note that CDK uses this directory name as the solution name
mkdir LambdaApiSolution
cd LambdaApiSolution
cdk init app --language=csharp
# creates a CFN stack called CDKToolkit with an S3 bucket for staging purposes and configures IAM permissions for CI/CD
cdk bootstrap --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess
cdk deploy

In order to use CDK Pipelines later on, a specific flag needs to be added to cdk.json:

LambdaApiSolution/cdk.json
{
  "app": "dotnet run -p src/LambdaApiSolution/LambdaApiSolution.csproj",
  "context": {
    ..
    "@aws-cdk/core:newStyleStackSynthesis": "true",
    ..
  }
}

At the time of writing, the generated CDK template uses .NET Core 3.1. Inside of the .csproj file, change the TargetFramework tag to net5.0.

LambdaApiSolution/src/LambdaApiSolution/LambdaApiSolution.csproj
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>
</Project>

From the /LambdaApiSolution directory, run these commands to create the serverless skeleton:

# install the latest version of the .NET Lambda templates
dotnet new -i Amazon.Lambda.Templates
cd src/
# create the function
dotnet new lambda.image.EmptyFunction --name LambdaApiSolution.DockerFunction
# add the projects to the solution file
dotnet sln add LambdaApiSolution.DockerFunction/src/LambdaApiSolution.DockerFunction/LambdaApiSolution.DockerFunction.csproj
dotnet sln add LambdaApiSolution.DockerFunction/test/LambdaApiSolution.DockerFunction.Tests/LambdaApiSolution.DockerFunction.Tests.csproj
# build the solution and run the sample unit test to verify that everything is wired up correctly
dotnet test LambdaApiSolution.sln

Creating the Lambda Infrastructure and Build

First, add the Lambda CDK NuGet package to the CDK project.

<PackageReference Include="Amazon.CDK.AWS.Lambda" Version="1.90.0" />

Then, create the Docker image and Lambda function using CDK constructs in LambdaApiSolutionStack.cs:

LambdaApiSolution/src/LambdaApiSolution/LambdaApiSolutionStack.cs
public class LambdaApiSolutionStack : Stack
{
    internal LambdaApiSolutionStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
    {
        // this path is relative to the directory where CDK commands are run
        // the directory must contain a Dockerfile
        DockerImageCode dockerImageCode = DockerImageCode.FromImageAsset("src/LambdaApiSolution.DockerFunction/src/LambdaApiSolution.DockerFunction");
        DockerImageFunction dockerImageFunction = new DockerImageFunction(this, "LambdaFunction", new DockerImageFunctionProps()
        {
            Code = dockerImageCode,
            Description = ".NET 5 Docker Lambda function"
        });
    }
}

Lastly, update the Dockerfile in the Lambda function project like so to build the C# code:

LambdaApiSolution/src/LambdaApiSolution.DockerFunction/src/LambdaApiSolution.DockerFunction/Dockerfile
FROM public.ecr.aws/lambda/dotnet:5.0
FROM mcr.microsoft.com/dotnet/sdk:5.0 as build-image

ARG FUNCTION_DIR="/build"
ARG CONFIGURATION="release"
ENV PATH="/root/.dotnet/tools:${PATH}"

RUN apt-get update && apt-get -y install zip

RUN mkdir $FUNCTION_DIR
WORKDIR $FUNCTION_DIR
COPY Function.cs LambdaApiSolution.DockerFunction.csproj aws-lambda-tools-defaults.json $FUNCTION_DIR/
RUN dotnet tool install -g Amazon.Lambda.Tools

RUN mkdir -p build_artifacts
RUN if [ "$CONFIGURATION" = "debug" ]; then dotnet lambda package --configuration Debug --package-type zip; else dotnet lambda package --configuration Release --package-type zip; fi
RUN if [ "$CONFIGURATION" = "debug" ]; then cp -r /build/bin/Debug/net5.0/publish/* /build/build_artifacts; else cp -r /build/bin/Release/net5.0/publish/* /build/build_artifacts; fi

FROM public.ecr.aws/lambda/dotnet:5.0

COPY --from=build-image /build/build_artifacts/ /var/task/
CMD ["LambdaApiSolution.DockerFunction::LambdaApiSolution.DockerFunction.Function::FunctionHandler"]

At this point, you can now deploy the changes with the cdk deploy command. The Lambda function can be tested via the AWS Console. The easiest way to do so is to navigate to the CloudFormation stack, click on the function resource, and then create a test event with the string "hello" as the input. Note that this should not be a JSON object because the event handler's parameter currently accepts a single string.
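
For reference, this is a sketch of what the generated Function.cs looks like before any changes; the namespace, serializer registration, and Casing property names are assumed from the lambda.image.EmptyFunction template defaults rather than shown in this post:

LambdaApiSolution/src/LambdaApiSolution.DockerFunction/src/LambdaApiSolution.DockerFunction/Function.cs
using Amazon.Lambda.Core;

// registers the JSON serializer that converts the Lambda payload into the handler's parameter types
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace LambdaApiSolution.DockerFunction
{
    public class Function
    {
        // accepts a plain string, which is why the console test event is "hello" rather than a JSON object
        public Casing FunctionHandler(string input, ILambdaContext context)
        {
            return new Casing(input.ToLower(), input.ToUpper());
        }
    }

    public record Casing(string Lower, string Upper);
}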

Integrating API Gateway

Add the following packages to the CDK project:

<PackageReference Include="Amazon.CDK.AWS.APIGatewayv2" Version="1.90.0" />
<PackageReference Include="Amazon.CDK.AWS.APIGatewayv2.Integrations" Version="1.90.0" />

Next, you can add the API Gateway resources to the stack immediately after the DockerImageFunction in LambdaApiSolutionStack.cs:

LambdaApiSolution/src/LambdaApiSolution/LambdaApiSolutionStack.cs
HttpApi httpApi = new HttpApi(this, "APIGatewayForLambda", new HttpApiProps()
{
    ApiName = "APIGatewayForLambda",
    CreateDefaultStage = true,
    CorsPreflight = new CorsPreflightOptions()
    {
        AllowMethods = new [] { HttpMethod.GET },
        AllowOrigins = new [] { "*" },
        MaxAge = Duration.Days(10)
    }
});

Then, create a Lambda proxy integration and a route for the function:

LambdaApiSolution/src/LambdaApiSolution/LambdaApiSolutionStack.cs
LambdaProxyIntegration lambdaProxyIntegration = new LambdaProxyIntegration(new LambdaProxyIntegrationProps()
{
    Handler = dockerImageFunction,
    PayloadFormatVersion = PayloadFormatVersion.VERSION_2_0
});
httpApi.AddRoutes(new AddRoutesOptions()
{
    Path = "/casing",
    Integration = lambdaProxyIntegration,
    Methods = new [] { HttpMethod.POST }
});

I used /casing for the path since the sample Lambda function returns an upper and lower case version of the input string. Finally, it's helpful to display the endpoint URL using a CFN output for testing.

LambdaApiSolution/src/LambdaApiSolution/LambdaApiSolutionStack.cs
// adding entropy to prevent a name collision
string guid = Guid.NewGuid().ToString();
CfnOutput apiUrl = new CfnOutput(this, "APIGatewayURLOutput", new CfnOutputProps()
{
    ExportName = $"APIGatewayEndpointURL-{guid}",
    Value = httpApi.ApiEndpoint
});

With these changes to the resources, the Lambda function can be invoked by a POST request. The handler method parameters in Function.cs need to be updated for the request body to be passed in.

LambdaApiSolution/src/LambdaApiSolution.DockerFunction/src/LambdaApiSolution.DockerFunction/Function.cs
// replace the string parameter with a proxy request parameter
public Casing FunctionHandler(APIGatewayProxyRequest apiGatewayProxyRequest, ILambdaContext context)
{
// update the input to use the proxy
string input = apiGatewayProxyRequest.Body;
return new Casing(input.ToLower(), input.ToUpper());
}

After successfully deploying the changes, the function can be tested in two ways. The first way is through an HTTP client like Postman. Add a string to the body parameter of the POST request. This action tests the full integration with API Gateway as well as the Lambda function. To test via the Lambda Console, update the test event from before to match the APIGatewayProxyRequest parameter:

{
  "body": "hello"
}
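
The handler can also be exercised from the generated test project instead of the console. This is a hedged sketch: it assumes the function project references Amazon.Lambda.APIGatewayEvents (needed for the APIGatewayProxyRequest parameter), that the test project references xUnit and Amazon.Lambda.TestUtilities as the template scaffolds it, and that the Casing record exposes Lower and Upper properties:

LambdaApiSolution/src/LambdaApiSolution.DockerFunction/test/LambdaApiSolution.DockerFunction.Tests/FunctionTest.cs
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.TestUtilities;
using Xunit;

namespace LambdaApiSolution.DockerFunction.Tests
{
    public class FunctionTest
    {
        [Fact]
        public void ShouldReturnBothCasingsForTheRequestBody()
        {
            // arrange: build a proxy request the same way API Gateway would
            var function = new Function();
            var request = new APIGatewayProxyRequest { Body = "hello" };

            // act: invoke the handler with a test Lambda context
            var casing = function.FunctionHandler(request, new TestLambdaContext());

            // assert: both casings of the input are returned
            Assert.Equal("hello", casing.Lower);
            Assert.Equal("HELLO", casing.Upper);
        }
    }
}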

Adding CI/CD Using CDK Pipelines

For this example, the source code resides in GitHub as opposed to CodeCommit. To grant the CI/CD pipeline access to the repository, a personal access token with repo permissions must be created in GitHub and stored in Secrets Manager as a plaintext secret. Note that for this codebase, I've named my secret GitHub-Token.

GitHub Personal Access Token

Next, add the following packages to the CDK project:

<PackageReference Include="Amazon.CDK.AWS.CodeBuild" Version="1.90.0" />
<PackageReference Include="Amazon.CDK.AWS.CodeDeploy" Version="1.90.0" />
<PackageReference Include="Amazon.CDK.AWS.CodePipeline" Version="1.90.0" />
<PackageReference Include="Amazon.CDK.AWS.CodePipeline.Actions" Version="1.90.0" />
<PackageReference Include="Amazon.CDK.Pipelines" Version="1.90.0" />

With these dependencies loaded, create a class called PipelineStack.cs. The following code creates a self-mutating CDK Pipeline, adds a GitHub source action to fetch the code using the token from Secrets Manager, and synthesizes the solution's CDK:

LambdaApiSolution/src/LambdaApiSolution/PipelineStack.cs
using Amazon.CDK;
using Amazon.CDK.AWS.CodeBuild;
using Amazon.CDK.AWS.CodePipeline;
using Amazon.CDK.AWS.CodePipeline.Actions;
using Amazon.CDK.Pipelines;

namespace LambdaApiSolution
{
    public class PipelineStack : Stack
    {
        internal PipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            Artifact_ sourceArtifact = new Artifact_();
            Artifact_ cloudAssemblyArtifact = new Artifact_();
            CdkPipeline pipeline = new CdkPipeline(this, "LambdaApiSolutionPipeline", new CdkPipelineProps()
            {
                CloudAssemblyArtifact = cloudAssemblyArtifact,
                PipelineName = "LambdaApiSolutionPipeline",
                SourceAction = new GitHubSourceAction(new GitHubSourceActionProps()
                {
                    ActionName = "GitHubSource",
                    Output = sourceArtifact,
                    OauthToken = SecretValue.SecretsManager(Constants.GitHubTokenSecretsManagerId),
                    // these values are in Constants.cs instead of being hardcoded
                    Owner = Constants.Owner,
                    Repo = Constants.RepositoryName,
                    Branch = Constants.Branch,
                    Trigger = GitHubTrigger.POLL
                }),
                SynthAction = new SimpleSynthAction(new SimpleSynthActionProps()
                {
                    Environment = new BuildEnvironment
                    {
                        // required for .NET 5
                        // https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html
                        BuildImage = LinuxBuildImage.STANDARD_5_0
                    },
                    SourceArtifact = sourceArtifact,
                    CloudAssemblyArtifact = cloudAssemblyArtifact,
                    // navigates to the proper subdirectory to simplify other commands
                    Subdirectory = "LambdaApiSolution",
                    InstallCommands = new [] { "npm install -g aws-cdk" },
                    BuildCommands = new [] { "dotnet build src/LambdaApiSolution.sln" },
                    SynthCommand = "cdk synth"
                })
            });
        }
    }
}
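
The pipeline references a Constants class rather than hardcoding repository details. That file isn't shown in this post, but a minimal sketch could look like the following; the owner, repository, and branch values are placeholders to replace with your own, while the secret name matches the GitHub-Token secret created earlier:

LambdaApiSolution/src/LambdaApiSolution/Constants.cs
namespace LambdaApiSolution
{
    public static class Constants
    {
        // matches the Secrets Manager secret created earlier
        public const string GitHubTokenSecretsManagerId = "GitHub-Token";
        // placeholder values: substitute your GitHub owner, repository, and branch
        public const string Owner = "your-github-username";
        public const string RepositoryName = "your-repository-name";
        public const string Branch = "main";
    }
}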

Remove the following line from Program.cs since the pipeline will deploy the API from now on:

new LambdaApiSolutionStack(app, "LambdaApiSolutionStack");
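
After removing that line, Program.cs only needs to instantiate the pipeline stack. A minimal sketch, assuming the stack ID matches the cdk deploy command below, might look like this:

LambdaApiSolution/src/LambdaApiSolution/Program.cs
using Amazon.CDK;

namespace LambdaApiSolution
{
    sealed class Program
    {
        public static void Main(string[] args)
        {
            App app = new App();
            // the pipeline stack deploys the API stacks through its stages from now on
            new PipelineStack(app, "LambdaApiSolutionPipelineStack");
            app.Synth();
        }
    }
}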

Delete the previous stack, commit the latest changes to the source code so that they'll be available when the pipeline fetches the repo, and finally deploy the pipeline:

cdk destroy
git add .
git commit -m "Adding source code to GitHub repository"
git push origin main
cdk deploy LambdaApiSolutionPipelineStack

Creating Multiple Environments

From now on, the pipeline will manage changes instead of manual cdk deploy commands. By merely pushing changes to the main branch, the pipeline will update itself and the other resources. The last feature in this example is adding development, test, and production environments. Rather than creating more stacks, we can leverage stages instead. Each environment gets its own stage that deploys a separate copy of the stack and can include actions like manual approvals or integration testing. First, a stage must be defined in code. For this example, a stage only contains the API stack.

LambdaApiSolution/src/LambdaApiSolution/Program.cs
using Amazon.CDK;
using Construct = Constructs.Construct;

namespace LambdaApiSolution
{
    public class SolutionStage : Stage
    {
        public SolutionStage(Construct scope, string id, IStageProps props = null) : base(scope, id, props)
        {
            LambdaApiSolutionStack lambdaApiSolutionStack = new LambdaApiSolutionStack(this, "Solution");
        }
    }
}

To implement the stages, navigate back to PipelineStack.cs and append the following code after the pipeline declaration:

LambdaApiSolution/src/LambdaApiSolution/PipelineStack.cs
CdkStage developmentStage = pipeline.AddApplicationStage(new SolutionStage(this, "Development"));
CdkStage testStage = pipeline.AddApplicationStage(new SolutionStage(this, "Test"));
testStage.AddManualApprovalAction(new AddManualApprovalOptions()
{
    ActionName = "PromoteToProduction"
});
CdkStage productionStage = pipeline.AddApplicationStage(new SolutionStage(this, "Production"));

Next Steps

The Lambda function, API Gateway, and multi-environment CI/CD pipeline are now in place. More Lambda functions can be added as separate C# projects. More stacks can be created and added to SolutionStage.cs.
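
For example, any additional stack instantiated inside SolutionStage is deployed to every environment automatically. A hedged sketch follows; DatabaseStack is a hypothetical stack name rather than something defined in this post:

LambdaApiSolution/src/LambdaApiSolution/Program.cs
public class SolutionStage : Stage
{
    public SolutionStage(Construct scope, string id, IStageProps props = null) : base(scope, id, props)
    {
        // existing API stack
        LambdaApiSolutionStack lambdaApiSolutionStack = new LambdaApiSolutionStack(this, "Solution");
        // hypothetical additional stack that every environment would receive
        // DatabaseStack databaseStack = new DatabaseStack(this, "Database");
    }
}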

AWS re:Invent 2020

· 10 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Contents

  • Container Support for Lambda
  • Introducing AWS Proton
  • EC2 macOS Instances
  • First-Class .NET 5 Support for Lambda
  • CloudShell

Container Support for Lambda

AWS Lambda supports Docker images up to 10GB in size. They've also provided base images for Lambda runtimes in the new public ECR. For reference, the base Node.js 12 image is ~450MB. The Serverless Application Model (SAM) CLI has already been updated for container support. Instead of specifying a --runtime, engineers can now use the --base-image flag.

sam --version 
# 1.13.2
sam init --base-image amazon/nodejs12.x-base

This creates a Dockerfile for the function.

FROM public.ecr.aws/lambda/nodejs:12
COPY app.js package.json ./
RUN npm install
CMD ["app.lambdaHandler"]

The deploy command also includes container registry support via ECR. With a quick --guided deployment, I produced the following samconfig.toml:

version = 0.1
[default]
[default.deploy]
[default.deploy.parameters]
stack_name = "sam-app-container-support"
s3_bucket = "aws-sam-cli-managed-default-samclisourcebucket-ENTROPY"
s3_prefix = "sam-app-container-support"
region = "us-east-1"
confirm_changeset = true
capabilities = "CAPABILITY_IAM"
image_repository = "ACCOUNT_NUMBER.dkr.ecr.us-east-1.amazonaws.com/IMAGE_REPOSITORY"

All of this made it seamless to deploy a container-based Lambda function with the same ease as zip-based ones. I haven't had the opportunity to do performance testing yet, but per /u/julianwood from the Lambda team, it should be equivalent.

Performance is on par with zip functions. We don't use Fargate. This is pure Lambda. We optimize the image when the function is created and cache the layers, so the startup time is pretty much the same as zip functions.

A fully-functional example can be found in this GitHub repository.

Introducing AWS Proton

AWS Proton is the first fully managed application deployment service for container and serverless applications. Platform engineering teams can use Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates.

During the announcement video, I wasn’t sure what the relationship between Proton and existing DevOps tools like CloudFormation and CodePipeline would be or even who the target audience is. To answer these questions, it makes sense to describe the problem that AWS is aiming to solve.

Per the Containers from the Couch stream, AWS understands that not all companies are able to staff a single team with all of the requisite expertise (i.e., software engineers, DevOps engineers, security specialists, etc.). To mitigate this, companies often create leveraged teams that provide a specific set of services to other groups (i.e., a centralized platform team that serves multiple development teams). Leveraged teams have their own set of problems, including becoming resource bottlenecks, lacking adequate knowledge-sharing mechanisms, and struggling to define and enforce standards.

Proton aims to bridge this gap by offering tooling to standardize environments and services in templates across an organization. It also supports versioning so that environments and services are appropriately maintained. The expectation is that centralized platform teams can support these templates instead of individual solutions with heavily nuanced CloudFormation templates and DevOps pipelines. In Proton, environments are defined as sets of shared resources that individual services are deployed into. At this time, it’s not possible to deploy services without environments. The configurations for environments and services are intended to be utilized throughout the organization (although cross-account sharing isn’t available yet). Changes to templates are published as major and minor versions that are applied to individual instances. Unfortunately, auto-updates are not yet available. Schemas are used within these templates to define inputs for consumers.

I haven't been able to find much documentation on how to create templates other than this GitHub repository. The Lambda example there gives insight into the general structure from the /environment and /service directories. Both template types consist of schemas, manifests, infrastructure, and pipelines.

As mentioned above, schemas are used to capture inputs. In the sample from GitHub, the only shared environment resource is a DynamoDB table, and the time to live specification is parameterized.

/schema.yaml
schema:
  format:
    openapi: "3.0.0"
  environment_input_type: "EnvironmentInput"
  types:
    EnvironmentInput:
      type: object
      description: "Input properties for my environment"
      properties:
        ttl_attribute:
          type: string
          description: "Which attribute to use as the ttl attribute"
          default: ttl
          minLength: 1
          maxLength: 100

Defining /infrastructure or /pipeline sections of the Proton template requires a manifest to describe how exactly to interpret the infrastructure as code. I can't find any documentation for the accepted values, but the template indicates that templating engines like Jinja are supported and other infrastructure as code options like CDK may be planned for the future.

/manifest.yaml
infrastructure:
  templates:
    - file: "cloudformation.yaml"
      engine: jinja
      template_language: cloudformation

Lastly, a CloudFormation template is used to describe the infrastructure and DevOps automation like CodePipeline. Note the use of Jinja templating (specifically environment.ttl_attribute) to reference shared resources and input parameters.

/cloudformation.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: This environment holds a simple DDB table shared between services.
Resources:
  AppTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: hk
          AttributeType: S
        - AttributeName: rk
          AttributeType: S
      BillingMode: PAY_PER_REQUEST
      KeySchema:
        - AttributeName: hk
          KeyType: HASH
        - AttributeName: rk
          KeyType: RANGE
      GlobalSecondaryIndexes:
        - IndexName: reverse
          KeySchema:
            - AttributeName: rk
              KeyType: HASH
            - AttributeName: hk
              KeyType: RANGE
          Projection:
            ProjectionType: ALL
      {% if environment.ttl_attribute|length %}
      TimeToLiveSpecification:
        AttributeName: "{{ environment.ttl_attribute }}"
        Enabled: true
      {% endif %}

When the template is finished, compress the source code, push to S3, create a template, and publish it.

# create an environment template
aws proton-preview create-environment-template \
--region us-east-1 \
--template-name "crud-api" \
--display-name "CRUD Environment" \
--description "Environment with DDB Table"
# create a major version of the template (1)
aws proton-preview create-environment-template-major-version \
--region us-east-1 \
--template-name "crud-api" \
--description "Version 1"
# compress local source code
tar -zcvf env-template.tar.gz environment/
# copy to S3
aws s3 cp env-template.tar.gz s3://proton-cli-templates-${account_id}/env-template.tar.gz --region us-east-1
# delete local artifact
rm env-template.tar.gz
# create a minor version (0)
aws proton-preview create-environment-template-minor-version \
--region us-east-1 \
--template-name "crud-api" \
--description "Version 1" \
--major-version-id "1" \
--source-s3-bucket proton-cli-templates-${account_id} \
--source-s3-key env-template.tar.gz
# publish for users
aws proton-preview update-environment-template-minor-version \
--region us-east-1 \
--template-name "crud-api" \
--major-version-id "1" \
--minor-version-id "0" \
--status "PUBLISHED"
# instantiate an environment
aws proton-preview create-environment \
--region us-east-1 \
--environment-name "crud-api-beta" \
--environment-template-arn arn:aws:proton:us-east-1:${account_id}:environment-template/crud-api \
--template-major-version-id 1 \
--proton-service-role-arn arn:aws:iam::${account_id}:role/ProtonServiceRole \
--spec file://specs/env-spec.yaml

The process for publishing and instantiating services is largely the same.

EC2 macOS Instances

The prospect of having macOS support for EC2 instances is exciting, but the current implementation has some severe limitations. First off, the instances are only available via dedicated hosts with a minimum of a 24-hour tenancy. At an hourly rate of USD 1.083, it’s hard to imagine this being economically viable outside of particular use cases. The only AMIs available are 10.14 (Mojave) and 10.15 (Catalina), although 11.0 (Big Sur) is coming soon. There’s also no mention of support for AWS Workspaces yet, which I hope is a future addition given the popularity of macOS amongst engineers. Lastly, the new Apple M1 ARM-based chip isn’t available until next year.

Despite the cost, I still wanted to get my hands on an instance. I hit two roadblocks while getting started. First, I had to increase my service quota for mac1 dedicated hosts. Second, I had to try several availability zones to find one with dedicated hosts available (use1-az6). I used the following CLI commands to provision a host and instance.

# create host and echo ID
aws ec2 allocate-hosts --instance-type mac1.metal \
--availability-zone us-east-1a --auto-placement on \
--quantity 1 --region us-east-1
# create an EC2 instance on the host
aws ec2 run-instances --region us-east-1 \
--instance-type mac1.metal \
--image-id ami-0e813c305f63cecbd \
--key-name $KEY_PAIR --associate-public-ip-address \
--placement "HostId=$HOST_ID" \
--block-device-mappings 'DeviceName=/dev/sda1,Ebs={DeleteOnTermination=true,VolumeSize=250,VolumeType=gp2}'

After that, I was able to SSH in and experience EC2 macOS in all its glory.

             .:'
         __ :'__       __|  __|_  )
      .'`  `-'  ``.    _|  (     /
     :          .-'    ___|\___|___|
     :         :
      :         `-;    Amazon EC2
       `.__.-.__.'     macOS Catalina 10.15.7

Thanks to this awesome blog post, I was able to put together an EC2 user data script for remote access.

sudo su
dscl . -create /Users/Scottie
dscl . -create /Users/Scottie UserShell /bin/zsh
dscl . -create /Users/Scottie RealName "Scottie Enriquez"
dscl . -create /Users/Scottie UniqueID 1000
dscl . -create /Users/Scottie PrimaryGroupID 1000
dscl . -create /Users/Scottie NFSHomeDirectory /Users/Scottie
dscl . -passwd /Users/Scottie $USER_PASSWORD
dscl . -append /Groups/admin GroupMembership Scottie
/System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
-activate -configure -access -on \
-clientopts -setvnclegacy -vnclegacy yes \
-clientopts -setvncpw -vncpw $VNC_PASSWORD \
-restart -agent -privs -all

I then used VNC Viewer to connect over port 5900.

macOS on EC2

First-Class .NET 5 Support for Lambda

According to AWS:

.NET 5, which was released last month, is a major release towards the vision of a single .NET experience for .NET Core, .NET Framework, and Xamarin developers. .NET 5 is a “Current” release and is not a long term supported (LTS) version of .NET. The next LTS version will be .NET 6, which is currently scheduled to be released in November 2021. .NET 5 will be supported for 3 months after that date, which means that .NET 5 will be supported for about 15 months in total. In comparison, .NET 6 will have 3 years of support. Even though Lambda’s policy has always been to support LTS versions of language runtimes for managed runtimes, the new container image support makes .NET 5 a first-class platform for Lambda functions.

While AWS has already released the .NET 5 public ECR image, SAM support as a --base-image hasn't been implemented yet as of version 1.13.2. Porting from a .NET Core starter template is as easy as changing the <TargetFramework> in the .csproj file and updating the Dockerfile.

FROM mcr.microsoft.com/dotnet/sdk:5.0 as build-image

ARG FUNCTION_DIR="/build"
ARG SAM_BUILD_MODE="run"
ENV PATH="/root/.dotnet/tools:${PATH}"

RUN apt-get update && apt-get -y install zip

RUN mkdir $FUNCTION_DIR
WORKDIR $FUNCTION_DIR
COPY Function.cs HelloWorld.csproj aws-lambda-tools-defaults.json $FUNCTION_DIR/
RUN dotnet tool install -g Amazon.Lambda.Tools

RUN mkdir -p build_artifacts
RUN if [ "$SAM_BUILD_MODE" = "debug" ]; then dotnet lambda package --configuration Debug; else dotnet lambda package --configuration Release; fi
RUN if [ "$SAM_BUILD_MODE" = "debug" ]; then cp -r /build/bin/Debug/net5.0/publish/* /build/build_artifacts; else cp -r /build/bin/Release/net5.0/publish/* /build/build_artifacts; fi

FROM public.ecr.aws/lambda/dotnet:5.0

COPY --from=build-image /build/build_artifacts/ /var/task/
CMD ["HelloWorld::HelloWorld.Function::FunctionHandler"]

A working example can be found here.

CloudShell

Finally catching up with both Azure and Google Cloud, AWS announced the launch of CloudShell:

AWS CloudShell is a browser-based shell that makes it easy to securely manage, explore, and interact with your AWS resources. CloudShell is pre-authenticated with your console credentials. Common development and operations tools are pre-installed, so no local installation or configuration is required. With CloudShell, you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs using the AWS SDKs, or use a range of other tools to be productive. You can use CloudShell right from your browser and at no additional cost.

Bash, Zsh, and PowerShell are available as shell options, and run commands can be customized in a typical ~/.bashrc or ~/.zshrc fashion. The free gigabyte of storage persists in the $HOME directory, making it easy to stash working files. While there are several runtimes like Node.js and Python installed alongside Vim, doing development in CloudShell is not as ergonomic as Cloud9 or a local machine. I can see this tool being useful when combined with something like container tabs in Firefox to interact with multiple AWS accounts from the browser instead of running commands in a local terminal with --profile flags.

Cloud9 IDE Configuration

· 3 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Cloud9 Overview and Use Cases

Per AWS, "Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. Cloud9 comes prepackaged with essential tools like Docker and support for popular programming languages, including JavaScript, Python, PHP, and .NET." The AWS, Serverless Application Model (SAM), and Cloud Development Kit (CDK) CLIs are pre-installed as well. Users are abstracted from credential management (i.e., there's no need to provision an IAM user and run aws configure). Since the underlying compute is an EC2 instance, developers have a consistent experience across client devices.

Cloud9

Cloud9 makes it easy to declare an environment using CloudFormation, specify Git repositories to clone during the provisioning process, and share various custom settings such as themes and keybindings with developers. It's also a cheap option since the EC2 instance shuts itself down after a set period of time (with a default of 30 minutes).

Initial Setup

The first deployment fails unless a Cloud9 environment has previously been created from the Console, because that initial setup creates an IAM service role (service-role/AWSCloud9SSMAccessRole) that the template depends on. See more information in the AWS documentation.

AWS Resources Created

  • A Cloud9 environment backed by an m5.large EC2 instance
  • A CodeCommit repository for stashing work since the Cloud9 environment is considered transient

CloudFormation Template

Resources:
  rCloud9Environment:
    Type: AWS::Cloud9::EnvironmentEC2
    Properties:
      AutomaticStopTimeMinutes: 30
      ConnectionType: CONNECT_SSM
      Description: Web-based cloud development environment
      InstanceType: m5.large
      Name: Cloud9Environment
      Repositories:
        - PathComponent: /repos/codecommit
          RepositoryUrl: !GetAtt rCloud9WorkingRepository.CloneUrlHttp
        - PathComponent: /repos/aws-cloud9-environment
          RepositoryUrl: https://github.com/scottenriquez/aws-cloud9-environment.git
  rCloud9WorkingRepository:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: Cloud9WorkingRepository
      RepositoryDescription: A CodeCommit repository for stashing work from the Cloud9 IDE

This template can be deployed via the AWS Console or the AWS CLI.

Initialization Script

wget https://github.com/dotnet/core/files/2186067/openssl-libs-ami.spec.txt
rpmbuild --bb openssl-libs-ami.spec.txt
sudo rpm -i /usr/src/rpm/RPMS/x86_64/openssl-libs-1.0.0-0.x86_64.rpm
sudo rpm -Uvh https://packages.microsoft.com/config/centos/7/packages-microsoft-prod.rpm
sudo yum install dotnet-sdk-3.1 zsh
sudo passwd ec2-user
chsh -s /bin/zsh
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

The user-data.sh script is intended to run when the Cloud9 instance spins up (mirroring the EC2 user data parameter of the same name). Unfortunately, this setup must be done manually since there isn't a corresponding parameter in the CloudFormation resource. To make this easier, I've added this GitHub repository to the list of repositories cloned on the instance.

The Bash script does the following:

  • Installs .NET Core 3.1
  • Installs Zsh and Oh My Zsh
  • Resets the ec2-user password
  • Changes the default shell to Zsh

User Settings

{
  "ace": {
    "@behavioursEnabled": true,
    "@fontSize": 18,
    "@keyboardmode": "vim",
    "@showGutter": true,
    "@showInvisibles": true,
    "@theme": "@c9/ide/plugins/c9.ide.theme.jett/ace.themes/jett",
    "custom-types": {
      "json()": {
        "settings": "javascript"
      }
    },
    "statusbar": {
      "@show": true
    }
  },
  "build": {
    "@autobuild": false
  },
  "collab": {
    "@timeslider-visible": false
  },
  "configurations": {},
  "debug": {
    "@autoshow": true,
    "@pause": 0
  },
  "general": {
    "@autosave": "afterDelay",
    "@skin": "jett-dark"
  },
  "key-bindings": {
    "@platform": "auto",
    "json()": []
  },
  "openfiles": {
    "@show": false
  },
  "output": {},
  "projecttree": {
    "@showhidden": false
  },
  "tabs": {
    "@show": true
  },
  "terminal": {
    "@fontsize": 18
  }
}

Much like the user-data, the user settings aren't parameterized in CloudFormation. These settings are included in the repository but must be manually configured.

Using Former2 for Existing AWS Resources

· 5 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Overview

I've been making a concerted effort lately to use infrastructure as code via CloudFormation for all of my personal AWS-hosted projects. Writing these templates can feel a bit tedious, even with editor tooling and plugins. I thought it would be awesome to generate CloudFormation templates for existing resources and first found CloudFormer. I found blog posts about CloudFormer from as far back as 2013, but it was never advertised much.

Update: Former2 is the de facto standard now that CloudFormer has been deprecated. I kept my notes on CloudFormer for posterity at the end of the post.

Getting Started with Former2

Former2 takes a client-side approach to infrastructure as code template generation and has support for Terraform and CDK. Instead of an EC2 instance, it uses the JavaScript SDKs via your browser to make all requisite API calls. You can even use the static website hosted on the public internet. If you're not keen on the idea of passing read-only IAM credentials to a third-party website, clone the repository and run the web application locally via the file system or Docker. I've also created a CloudFormation template to run it on an EC2 instance:

AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  pAllowedIpCidr:
    Type: String
    AllowedPattern: '((\d{1,3})\.){3}\d{1,3}/\d{1,2}'
    Default: '0.0.0.0/0'
  pLatestAl2AmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
  pVpcId:
    Type: AWS::EC2::VPC::Id
  pSubnetId:
    Type: AWS::EC2::Subnet::Id
  pKeyPairName:
    Type: AWS::EC2::KeyPair::KeyName
Description: A self-hosted instance of Former2 on EC2
Resources:
  rEc2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Former2 security group
      GroupName: Former2
      VpcId: !Ref pVpcId
      SecurityGroupIngress:
        - CidrIp: !Ref pAllowedIpCidr
          IpProtocol: tcp
          FromPort: 80
          ToPort: 443
      SecurityGroupEgress:
        - CidrIp: !Ref pAllowedIpCidr
          IpProtocol: tcp
          FromPort: 80
          ToPort: 443
  rEc2Instance:
    Type: AWS::EC2::Instance
    Properties:
      UserData:
        Fn::Base64: |
          #!/bin/bash
          yum update -y
          yum install git httpd -y
          systemctl start httpd
          systemctl enable httpd
          cd /var/www/html
          git clone https://github.com/iann0036/former2.git .
      ImageId: !Ref pLatestAl2AmiId
      InstanceType: t2.micro
      KeyName: !Ref pKeyPairName
      Tenancy: default
      SubnetId: !Ref pSubnetId
      EbsOptimized: false
      SecurityGroupIds:
        - !Ref rEc2SecurityGroup
      SourceDestCheck: true
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            Encrypted: false
            VolumeSize: 8
            VolumeType: gp2
            DeleteOnTermination: true
      HibernationOptions:
        Configured: false
Outputs:
  PublicIp:
    Description: Former2 EC2 instance public IP address
    Value: !GetAtt rEc2Instance.PublicIp
    Export:
      Name: !Sub "${AWS::StackName}-PublicIp"

Use Cases for Generating Templates

Overall, I'd argue that addressing the minor changes in a generated template is easier than writing one from scratch. With that being said, I don't know that I'd ever spin up resources via the Console with the sole intent of creating CloudFormation templates. However, it could make migrating from a prototype to a productionized product easier if you're willing to pay a small compute cost.

Getting Started with CloudFormer

Setting up CloudFormer is quite simple through CloudFormation. In fact, it's a sample template that creates a stack with several resources:

  • AWS::EC2::Instance
  • AWS::EC2::SecurityGroup
  • AWS::IAM::InstanceProfile
  • AWS::IAM::Role
  • AWS::IAM::Policy

The template has a few parameters as well:

  • Username
  • Password
  • VPC

After creating the stack like any other CloudFormation template, a URL is outputted. The t2.small EC2 instance is a web server with a public IPv4 address and DNS configured behind the scenes. The security group allows all traffic (0.0.0.0/0) on port 443, but it's worth noting that I did have an SSL issue with my instance that threw a warning in my browser. The instance profile is used by the web server to assume the IAM role with an attached policy that allows for widespread reads across resources and writes to S3. Keep in mind that the CloudFormer stack should be deleted after use to avoid unnecessary compute charges for the EC2 instance.

Using the CloudFormer Web Server

Navigate to the URL from the output tab of the CloudFormation stack (something like https://ec2-0-0-0-0.compute-1.amazonaws.com) and enter the username and password that you specified as a parameter. Via the GUI, select the resources to reverse engineer across the following categories:

  • DNS
  • VPC
  • VPC Network
  • VPC Security
  • Network
  • Managed Services
  • Managed Config
  • Compute
  • Storage
  • Storage Config
  • App Services
  • Security
  • Operational

The list is robust but not all-inclusive.

Creating a Template for a CloudFront Distribution

I have a public CDN in one of my AWS accounts for images on a JAMstack site hosted on Netlify. It uses a standard design: a private S3 bucket behind a CloudFront distribution with an Origin Access Identity. Through the CloudFormer workflow, I selected the individual components:

  • CloudFront distribution
  • S3 bucket
  • Bucket policy

Sadly, there's no support for YAML as of right now. The web server generated a JSON template, which I converted to YAML via the Designer.

I plugged the template back into CloudFormation, and everything provisioned successfully. Digging deeper into the template, I noticed a few minor changes to make. First of all, the logical names are based on specifics of the existing resources (e.g., distd1yqxti3jheii7cloudfrontnet came from the URL of the CDN). However, these can easily be refactored. Since CloudFormer doesn't support creating an OAI, the existing identity is hardcoded. I added a resource for that to the template and converted the hardcoded value to a reference.

Fantasy Football Power Rankings Markdown Generator

· 3 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Overview

spr is a CLI tool that generates Markdown pages for power rankings posts in the league's Gatsby site, with Sleeper data injected. The source code is hosted on GitHub.

Build Status

Azure Pipelines

Installation and Dependencies

Clone the repository, install NPM dependencies, and create a symlink in the global folder.

git clone git@github.com:the-winner-is-a-twiath/power-rankings-markdown-cli.git
cd power-rankings-markdown-cli
npm install
npm link
spr --version

The CLI source code doesn't store any secrets, so ensure that the AWS CLI is installed and that the credentials are configured at ~/.aws/credentials.

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

If not, run aws configure.

Usage

Navigate to the root folder of the league's Gatsby site, and run spr add <WEEK_NUMBER> <AUTHOR_FIRST_NAME>. The alias for add is a. Open the generated index.md file in the newly created directory (<FIRST_NAME>-week-<WEEK_NUMBER>-power-rankings/) to enter the power rankings text for the new post.

Functionality

  • Validates the week number and author first name
  • Checks the current Git branch to ensure that the user has created a non-main branch
  • Verifies that the present working directory contains a Gatsby configuration file to standardize the relative paths
  • Fetches the league members and rosters from the Sleeper API
  • Fetches the current avatar for each league member and copies to a CDN hosted in AWS
  • Generates Markdown power rankings with the member's latest stats neatly formatted
  • Creates a new directory for the post in the Gatsby website and writes the index.md file

Configuration

The league-specific details exist in various JavaScript configuration files to maximize reusability. While the CLI is tightly coupled with Gatsby, there's still much that can be reconfigured for other leagues.

/lib/config/gatsby.js
const gatsby = {
  // used to determine if the user created a new branch
  mainBranchName: "master",
  // used to determine if the user is in the root Gatsby directory
  configFileName: "gatsby-config.js",
  // used to support any changes to the default blog path for vanity URLs
  postPath: "/content/blog/posts",
  // used to defer image styling for the avatar to the Gatsby site
  avatarHTMLClass: "sleeper-avatar",
}

/lib/config/aws.js

const awsConfig = {
  // S3 bucket
  bucketName: "twiath-site-cdn",
  // URL base to be used for source in <img> tag
  cdnURLBase: "https://d1yqxti3jheii7.cloudfront.net",
}

/lib/config/league.js

const league = {
  // Sleeper league ID number
  id: "541384381865115648",
}

/lib/config/validAuthors.js

const authors = {
  Scottie: "Scottie Enriquez",
  Callen: "Callen Trail",
  Logan: "Logan Richardson",
  Carl: "Carl Meziere",
  Andrew: "Andrew Carlough",
  John: "John Yarrow",
  Matt: "Matt Kniowski",
  Chris: "Chris Ramsey",
  Caleb: "Caleb Trantow",
  Travis: "Travis James",
  Trond: "Trond Liu",
  Mark: "Mark Hamilton",
}

/lib/config/validWeeks.js

const weeks = {
  0: "zero",
  1: "one",
  2: "two",
  3: "three",
  4: "four",
  5: "five",
  6: "six",
  7: "seven",
  8: "eight",
  9: "nine",
  10: "ten",
  11: "eleven",
  12: "twelve",
  13: "thirteen",
  14: "fourteen",
  15: "fifteen",
  16: "sixteen",
  17: "seventeen",
}

Continuous Integration for Swift Packages in Azure DevOps

· 4 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Overview

I use Azure DevOps for building CI/CD pipelines for professional projects. Given the GitHub integration and broader feature set, I've started using it instead of Travis CI for my open-source projects too. For most technology areas, there's a wide set of pre-built tasks that can be leveraged to build robust pipelines quickly. There are several tasks for compiling and publishing iOS applications using Xcode on transient macOS virtual machines.

However, in the spirit of using Swift like a general-purpose language, I wanted to use a Linux build server, a more industry-standard approach for CI/CD. In my previous post, I described how I set up a Swift executable package to be more testable. This pipeline provides continuous integration for it. Azure Pipelines, which powers CI/CD in Azure DevOps, is scripted in YAML. It also supports integrating shell commands to be run on the virtual machine.

Adding a Trigger

The first thing to specify in the YAML is a trigger. The trigger denotes which branches in the Git repository the build should be run for. For example, to run the build only for master, the trigger would be as follows:

trigger:
  - master

In general, I want CI to run on all branches, so I use the following YAML instead:

trigger:
  branches:
    include:
      - "*"

Specifying a Virtual Machine Image

After specifying the trigger, Azure Pipelines needs to know what type of infrastructure to run the build on. At the time of writing, 5.2 is the latest stable version of Swift. Swift is not currently available in APT, Ubuntu's package manager. The binaries from the Swift download page target a specific LTS version of Ubuntu. The most recent version listed is 18.04, even though 20.04 was released in April. Because of these specific requirements, I opted to target a specific version of Ubuntu in my YAML instead of ubuntu-latest. ubuntu-latest will be updated to 20.04 at some point, but this is outside my control.

pool:
  vmImage: "ubuntu-18.04"

Installing Swift Programmatically

With a product like Azure Pipelines that utilizes transient virtual machines, the customer pays for the server time. In short, the longer your builds take, the more expensive they are. Because of this and performance reasons, it doesn't make sense to compile Swift from source each time the build runs (i.e., when a developer commits). The best practice is to fetch dependencies via the distro's package manager for easier versioning and simple installation. With that not being an option for Swift on Ubuntu, the next best option is to fetch the binaries.

Azure Pipelines supports steps, which are logical sections of the build for human readability and organization. At a high level, the process is to:

  • Install dependencies for running Swift that aren't shipped with Ubuntu
  • Make a working directory
  • Fetch the Swift binaries
  • Unzip the binaries
  • Add the binaries to the PATH so that swift can be used as a shell command
  • Echo the version to ensure that it's working properly

In the pipeline script, the steps above are written as Bash commands and wrapped in a script YAML statement.

steps:
  - script: |
      sudo apt-get install clang libicu-dev
      mkdir swift-install
      cd swift-install
      wget https://swift.org/builds/swift-5.2.4-release/ubuntu1804/swift-5.2.4-RELEASE/swift-5.2.4-RELEASE-ubuntu18.04.tar.gz
      tar -xvzf swift-5.2.4-RELEASE*
      export PATH=$PATH:$(pwd)/swift-5.2.4-RELEASE-ubuntu18.04
      swift -version
    displayName: "Install Swift"

Additional Steps

With Swift successfully installed, the remainder of the build steps is scripted in additional steps. This commonly entails compiling, running unit tests, and static code analysis. For the sake of a simple executable package, this could be merely running swift test like below. Putting it all together, this YAML script is a solid base for many Swift package CI jobs.

trigger:
  branches:
    include:
      - "*"

pool:
  vmImage: "ubuntu-18.04"

steps:
  - script: |
      sudo apt-get install clang libicu-dev
      mkdir swift-install
      cd swift-install
      wget https://swift.org/builds/swift-5.2.4-release/ubuntu1804/swift-5.2.4-RELEASE/swift-5.2.4-RELEASE-ubuntu18.04.tar.gz
      tar -xvzf swift-5.2.4-RELEASE*
      export PATH=$PATH:$(pwd)/swift-5.2.4-RELEASE-ubuntu18.04
      swift -version
    displayName: "Install Swift"

  - script: |
      swift test
    displayName: "Run unit tests"

Creating a Swift 5.2 Executable with Unit Tests

· 4 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Background

To better learn Swift, I've been trying to use it as a truly general-purpose programming language instead of purely iOS development. I'm currently building an iOS app that requires multiple versions of the same vector graphics (SVGs). I couldn't find an open-source solution for my needs, so I decided to start scripting. Typically, I would have used Python or Node.js, but I powered through with Swift in the spirit of immersion.

Getting the initial project structure and unit tests set up took some research, so this quick guide will outline how I've been structuring my codebases for executable packages. Outside of iOS development, Swift's documentation isn't as robust as Python or Node.js, given the age difference. This blog post's objective is to merge a lot of useful knowledge I found across forums.

Creating the Project

Use the Swift CLI to create an executable project with this command: swift package init --type executable. It's important to note that the names will be created based on the current directory. If you want to use a name for your project other than the root directory, create a new folder and run the command there.

mkdir AlternatePackageName
cd AlternatePackageName
swift package init --type executable

To open in Xcode, run open Package.swift. Swift has created a project with the following structure:

├── Package.swift
├── README.md
├── Sources
│   └── SwiftPackageExecutable
│       └── main.swift
└── Tests
    ├── LinuxMain.swift
    └── SwiftPackageExecutableTests
        ├── SwiftPackageExecutableTests.swift
        └── XCTestManifests.swift

Creating a Library

Executable modules are not testable. The implication is that functions cannot be tested inside /Sources/SwiftPackageExecutable (in the same directory as main.swift). Doing so will throw an unhelpful compiler error. The alternative is to move the logic to a library module. This requires a change to the directory structure and default Package.swift.

// swift-tools-version:5.2

import PackageDescription

let package = Package(
    name: "SwiftPackageExecutable",
    dependencies: [],
    targets: [
        .target(
            name: "SwiftPackageExecutable",
            dependencies: []),
        .testTarget(
            name: "SwiftPackageExecutableTests",
            dependencies: ["SwiftPackageExecutable"]),
    ]
)

First, add a products array between name and dependencies with .executable and .library entries like so:

name: "SwiftPackageExecutable",
products: [
.executable(name: "SwiftPackageExecutable", targets: ["SwiftPackageExecutable"]),
.library(name: "SwiftPackageLibrary", targets: ["SwiftPackageLibrary"]),
],
dependencies: [],

Next, in the array of targets, add another .target for the library, and update the dependencies. The executable and test modules should depend on the library.

        .target(
            name: "SwiftPackageExecutable",
            dependencies: ["SwiftPackageLibrary"]),
        .target(
            name: "SwiftPackageLibrary",
            dependencies: []),
        .testTarget(
            name: "SwiftPackageExecutableTests",
            dependencies: ["SwiftPackageLibrary"]),

The completed Package.swift is as follows:

// swift-tools-version:5.2

import PackageDescription

let package = Package(
    name: "SwiftPackageExecutable",
    products: [
        .executable(name: "SwiftPackageExecutable", targets: ["SwiftPackageExecutable"]),
        .library(name: "SwiftPackageLibrary", targets: ["SwiftPackageLibrary"]),
    ],
    dependencies: [],
    targets: [
        .target(
            name: "SwiftPackageExecutable",
            dependencies: ["SwiftPackageLibrary"]),
        .target(
            name: "SwiftPackageLibrary",
            dependencies: []),
        .testTarget(
            name: "SwiftPackageExecutableTests",
            dependencies: ["SwiftPackageLibrary"]),
    ]
)

Lastly, create a new directory inside of /Sources/ for the new library.

Creating Logic and Unit Tests

For a simple example, add some easily testable logic like addition. The Swift file should reside at /Sources/SwiftPackageLibrary/Add.swift.

import Foundation

public struct Add {
    public static func integers(_ first: Int, to second: Int) -> Int {
        return first + second
    }
}

Inside of the test module, add a standard test for the library module function.

import XCTest
@testable import SwiftPackageLibrary

final class AddTests: XCTestCase {
    func shouldAddTwoIntegersForStandardInput() throws {
        // Arrange
        let first = 1
        let second = 2
        let expectedSum = 3

        // Act
        let actualSum = Add.integers(first, to: second)

        // Assert
        XCTAssertEqual(actualSum, expectedSum)
    }

    static var allTests = [
        ("shouldAddTwoIntegersForStandardInput", shouldAddTwoIntegersForStandardInput),
    ]
}

Lastly, update XCTestsManifest.

import XCTest

#if !canImport(ObjectiveC)
public func allTests() -> [XCTestCaseEntry] {
    return [
        testCase(AddTests.allTests)
    ]
}
#endif

Putting It All Together

With all this in place, you can now unit test your library logic and expose it as an executable in the main.swift file.

├── Package.swift
├── README.md
├── Sources
│   ├── SwiftPackageExecutable
│   │   └── main.swift
│   └── SwiftPackageLibrary
│       └── Add.swift
└── Tests
    ├── LinuxMain.swift
    └── SwiftPackageExecutableTests
        ├── AddTests.swift
        └── XCTestManifests.swift

To run the executable, use swift run. To run the unit tests, use swift test.

Using Repl.it in a High School Classroom

· 5 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Background

I co-teach an advanced placement computer science class at Heights High School in Houston with the TEALS program. The curriculum utilizes the Java programming language and has an emphasis on object-oriented programming. From a school IT system perspective, we have limited options for the software we can distribute to the students' laptops. During my first year of volunteering, we opted to use BlueJ for the first couple of months before exposing them to Eclipse, a more heavy-duty integrated development environment. Both editors have their challenges and limitations for novices, so we began to evaluate several other options, including Visual Studio Code. After considering numerous factors, including the complexity of installation, editor features, and accessibility, we opted to try a radically different option: Repl.it.

Benefits and Implications

Repl.it is a feature-rich, browser-based IDE with support for numerous programming languages, including Java. In addition to the editor and computing environment, the application supports classroom features, such as creating assignments, that I detail further below. Since Repl.it runs in the browser, there's no installation or configuration in terms of editors, runtimes, etc. Using a browser-based tool decreased the number of local support issues that we had to address. We found that students had far fewer problems getting acclimated to the tooling than with BlueJ and Eclipse. The user interface proved to be intuitive, and there were relatively few issues with the underlying runtimes and virtualization that Repl.it abstracts from the user.

Repl.it IDE

Repl.it requires an internet connection, and teachers shouldn't assume that students have internet access at home. Though many classes will be online due to the COVID-19 global pandemic, keep in mind that students may have limited connectivity. I recommend offering desktop IDEs as an offline alternative so that students can at least run code locally.

Setting Up a Classroom

Repl.it is free for schools. There's an excellent video overview of the features on YouTube. Last year, we used Repl.it Classroom for assigning coding homework. We used other software like Practice-It for some assignments but struggled to find a way to evaluate raw source code. Repl.it simplified grading because we didn't have to pull down students' source code and build it on our local machines.

Integrating with GitHub

While Repl.it is excellent for running code and submitting assignments, it doesn't offer built-in source control. Teachers create classrooms on a per-year basis, so sharing examples and references across classes isn't straightforward. Each environment targets an individual student exercise, so collaboration isn't seamless either.

GitHub offers a public place to store code and implement software development workflows like pull requests and CI/CD. At Heights High School, we've hosted solutions here for students and any other teachers who want to use the code in their classrooms. The source code for this project resides in a public repository as well. Repl.it has native GitHub integration so that a public repository can be imported when creating a new Repl. The Repl syncs with the GitHub repository so that when a developer pushes changes to the remote origin, the updates propagate to Repl.it.

Creating a Template

With GitHub, a team can create a template project to be used when a new repository is created. Templates allow developers to have codebase structure (e.g., putting source code in /src) and configuration files injected into every child repository. On the repository settings page, check the template repository flag. After this, when creating a new repository, the template should appear as an option for the base.

Template Project for Repl.it

This GitHub project contains a template repository.

Main File

In terms of source code, only the "Hello, World!" program is included:

Main.java
class Main {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}

Given that most assignments fit into a single file, I haven't injected any opinions on the file structure.

EditorConfig

I've included an EditorConfig file in this project so that the code styling remains consistent across multiple codebases. EditorConfig is IDE-agnostic with plugins or native integration across IntelliJ, Visual Studio Code, Eclipse, etc.

.editorconfig
root = true

[*]
indent_style = tab
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true

Repl.it Configuration File

Repl.it supports a configuration file that allows a developer to specify which language and run command to use. Without going into details about the recent changes to Oracle's Java license structure, I'll note that this project uses OpenJDK 10, which is free and open-source. The run variable in the configuration file refers to a shell command to compile and execute the program. Bash on the underlying Linux virtual machine interprets the command, so it isn't specific to Repl.it. The run command can be tested on a local computer or by modifying the configuration file directly in Repl.it.

.replit
language = "java10"
run = "javac Main.java && java Main"

Fork the Project

Run on Repl.it

Testing the PINEBOOK Pro as a Daily Driver

· 3 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Hardware

For 200 USD, the laptop is extremely well-made. The keyboard is comfortable for typing (although PINE64 only had the ISO keyboard model available, which is an adjustment for me). The body is made out of metal and doesn't feel cheap. The sound is serviceable, though it's one of the weaker parts of the laptop. I consistently get download speeds of up to 50 Mbps over AT&T fiber at home. The battery life has been good so far. There are no glaring design flaws to speak of. Here's an overview of the specs:

  • Rockchip RK3399 SOC with Mali T860 MP4 GPU
  • 4GB LPDDR4 RAM
  • 1080p IPS Panel
  • 64GB of eMMC
  • Lithium Polymer Battery (10,000 mAh)
  • 802.11ac Wi-Fi + Bluetooth 5.0
  • Front-Facing Camera (1080p)

The laptop ships by default with Manjaro KDE for the operating system (neofetch screenshot).

Development Usage

I was initially concerned about the fact that the laptop has an ARM processor, which adds a layer of complexity to software installation. I use Snap to manage many of my development tools and found that ARM support is hit-or-miss. Visual Studio Code and the JetBrains IDE suite can't be installed via Snap, but software like Docker and Chromium worked perfectly.

For Node.js, I had no major issues hacking on a few GatsbyJS projects, though I noticed in the npm install output that some packages with low-level dependencies had to compile from source. Coming from a 2017 15-inch MacBook Pro, I found Gatsby's static website generation noticeably slower: builds took upwards of 20 seconds for reasonably sized websites compared to roughly two seconds on my MacBook.

I've been using Swift as my primary language lately, but I wasn't able to get it working in any native capacity on the PINEBOOK Pro. The Swift package in the Arch User Repository doesn't support ARM, and the dependency list for compiling from source is specific to Ubuntu/APT. I couldn't even get the Swift Docker image to run with a simple docker pull swift and docker run swift. While I certainly wasn't expecting to get Xcode running, I assumed that I'd be able to compile and run Swift. My only option was a browser-based environment called Repl.it.

IDEs

I was able to manually install JetBrains' WebStorm, my preferred JavaScript IDE, and PyCharm, my preferred Python IDE. On my macOS devices, I use numerous plugins, such as Material Theme, that can be resource-intensive. Most of the plugins worked without issue; JetBrains' Markdown plugin was the only one that I couldn't get to work without crashing when I opened a .md file. The primary feature that isn't compatible out-of-the-box with ARM is the built-in terminal, although there seems to be a workaround. Personally, alt-tabbing to Konsole doesn't bother me. Editing a medium-sized JavaScript project consumed around 1 GB of the 4 GB of memory, and I saw via htop that the CPU was often at capacity during intensive code editing operations.

I found some unofficial ARM binaries for Visual Studio Code and used them to install the editor. I tend to use it more because it's less resource-hungry than the JetBrains products with my set of plugins.