
Mitigating Risk With Using Google Maps API Keys in the Browser

· 3 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Recently, Google announced significant changes to the Maps platform to mixed reactions from the development community. While the myriad of products has now been combined and refactored for simplicity, the general concerns stem from a steep increase in price, a Google billing account requirement, the risk of overage charges from DDoS attacks, and a mere 30-day notice of these changes. All of these factors have many developers (especially those with small to medium-sized projects) considering other platforms for map services.

It’s also worth noting that a valid API key is now required. This may seem like a non-factor to developers who properly manage credentials on the backend so as not to expose them to the client. However, the Maps Embed API requires the key as a parameter in an HTML script tag like so:

src="https://maps.googleapis.com/maps/api/js?key=MY_API_KEY"

When I first added this to my projects, I certainly had concerns about not only exposing the key to the browser via HTML, but also committing it to publicly available source code. There are a few important steps that should be taken to mitigate risk, especially considering the drastic increase in price.

Minimizing Permissions for API Keys

Creating an API key for Google Maps is quite simple, but it’s not obvious that by default the key has various services enabled, including Embed, JavaScript, iOS, and Android. While basic security principles dictate that you should grant the minimum permissions required, this need is escalated by the fact that if you use the API key in the browser, you're effectively sharing it with the entire world. Theoretically, anyone could intercept the key from your web application and use it for any of the services enabled for it. For use in the browser, only two APIs are required: Embed and JavaScript.

Restricting Referrer Domains or IP Addresses

Even if you’ve only allowed your key access to the minimum APIs, by default any site on any domain could rack up charges on your Google account by simply using your key on their site. To combat this, you should specify restrictions via a regular expression that fits your domain or IP. For example, my key is restricted to https://www.scottie.is/*.

Set a Billing Alarm

If you’re already a developer using the Maps platform, the email announcing these changes should have given you some indication of whether or not your monthly charges will change. Regardless, setting a billing alarm will help prevent unexpected charges and give warning of unexpectedly high usage.

Testing .NET Standard Libraries Using .NET Core, NUnit, and Travis CI

· 6 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

While my development choices for the frontend change as rapidly as new JavaScript frameworks come and go, I've remained pretty consistent with using the .NET ecosystem for my server-side needs. I still use Node.js, Spring Boot, and Flask for several side projects, but I've grown to love the power of .NET and C# over the past several years while it has been the default technology at my job. My two biggest complaints were the monolithic scale of .NET Framework and the fact that it required Windows (or Mono which hasn't always been backed by Xamarin/Microsoft). Both of these changed with the advent of .NET Core. This alternative is cross-platform, modular, and much more performant. While both .NET Framework and .NET Core are implementations of .NET Standard, .NET Core very much seems to be the preferred way of the future.

Like many other .NET developers, I've packaged logic into open-source libraries and published to NuGet for easy distribution. However, also like many other developers, I've written packages to target .NET Framework. This is a problem because these packages can only be used with .NET Framework. The alternative is to target .NET Standard instead. As described by Microsoft docs, .NET Standard is a set of APIs that all .NET implementations must provide to conform to the standard. Because of this, having your NuGet packages target .NET Standard means that your package has support for .NET Framework, .NET Core, Mono, Xamarin, etc.

The primary reason that Microsoft recommends sticking with .NET Framework over .NET Core is compatibility. Essentially, they want developers to choose .NET Core first and .NET Framework second if there is some dependency that's not .NET Standard compatible. This shift in mentality means that libraries need to target .NET Standard by default. In order for a library to target .NET Standard, all dependent libraries must target .NET Standard as well, which is arguably the biggest hurdle that .NET Core adoption faces at this time.

With such a radical shift in the ecosystem comes some growing pains and a great deal of learning. While working to convert one of my libraries to .NET Standard, I faced challenges with setting up my testing infrastructure, so I wanted to share what I learned. This post will walk you through setting up a .NET Standard library, unit tests, and continuous integration using Travis CI. A working example with complete source code and Travis CI configured can be found on GitHub.

Preparing Your Development Environment

The demo source code included in this post targets .NET Standard 2.0. If you are running an older version of Visual Studio, you will either need to upgrade or install the SDKs manually. Visual Studio 2017 version 15.3 and later for Windows includes the requisite SDK. For macOS, Visual Studio for Mac can be used as well; this demo was developed using version 7.5 Preview 1. For Linux, .NET Core can be installed via scripts, and an alternative editor like Visual Studio Code or Rider from JetBrains can be used for development.

Creating the Domain Project

This library does simple addition for type decimal. We'll start by creating a project for the domain logic. Be sure to create a .NET Standard library and not a .NET Framework library.
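If you're working from the command line, the project can be scaffolded with the dotnet CLI; this is a sketch in which the output path simply mirrors the repo layout. On the 2.0 SDK, the classlib template targets .NET Standard 2.0 by default.

dotnet new classlib -o src/Gg.Scottie.Dotnet.Standard.Testing.Demo.Domain

Next, add an AdditionService class with a function to execute the logic.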

src/Gg.Scottie.Dotnet.Standard.Testing.Demo.Domain/AdditionService.cs
// IAdditionService is assumed to live alongside the service in the domain project
public interface IAdditionService
{
    decimal Add(decimal first, decimal second);
}

public class AdditionService : IAdditionService
{
    public decimal Add(decimal first, decimal second)
    {
        return first + second;
    }
}

Creating the Test Project

This repo uses NUnit for unit testing. The NUnit project offers templates for creating test projects, but this post walks through adding each dependency manually to clarify some of the nuances. As noted in the NUnit wiki, Microsoft has specified that tests must target a specific platform in order to properly validate the expected behavior of the .NET Standard-targeted code against that platform. While this may seem counterintuitive to the nature of .NET Standard, keep in mind that you can write multiple tests to support multiple platforms. For the sake of this demo, just target .NET Core. Instead of creating a .NET Standard project for the unit tests, create a .NET Core class library.

The first dependency is Microsoft.NET.Test.Sdk. As noted previously, .NET Core is much more modular; this package is Microsoft's testing module. The next two dependencies are NUnit and NUnit3TestAdapter. These two packages will allow us to write NUnit tests and run them via the command line.
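Assuming you're using the dotnet CLI, the packages can be added to the test project like so:

dotnet add package Microsoft.NET.Test.Sdk
dotnet add package NUnit
dotnet add package NUnit3TestAdapter

With these dependencies in place, we can create our first unit test.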

src/Gg.Scottie.Dotnet.Standard.Testing.Unit.Tests/AdditionServiceTests.cs
[TestFixture]
public class AdditionServiceTests
{
    [Test]
    public void Should_Add_ForStandardInput()
    {
        // arrange
        decimal first = 1.0m;
        decimal second = 2.0m;
        decimal expectedSum = 3.0m;
        IAdditionService additionService = new AdditionService();

        // act
        decimal actualSum = additionService.Add(first, second);

        // assert
        Assert.That(actualSum, Is.EqualTo(expectedSum));
    }
}

You can run the unit tests locally using an IDE like Visual Studio or Rider or via terminal with the command dotnet test. Note that you can also supply a path to your .csproj file to only test specific projects in your solution.

Configuring Travis CI for GitHub Projects

If you plan on hosting your source code in a public repository on GitHub, you can leverage a testing automation tool called Travis CI for free. To get started, log into the Travis CI site with GitHub authentication and enable your repository for testing through the web interface. After that, simply add a YAML file in the root of your project named .travis.yml.

.travis.yml
language: csharp
mono: none
dotnet: 2.0.0

install:
- dotnet restore src

script:
- dotnet build src
- dotnet test src/Gg.Scottie.Dotnet.Standard.Testing.Unit.Tests/Gg.Scottie.Dotnet.Standard.Testing.Unit.Tests.csproj

Testing Multiple Targets on Windows and *NIX

As mentioned above, the primary benefit of targeting .NET Standard is that the library can be consumed by newer versions of .NET Framework and .NET Core without any additional code or configuration. With that being said, you may want to have unit tests that target both .NET Core and .NET Framework to ensure that your library behaves as expected with each. We can add multiple targets to our testing project by simply modifying the .csproj. By changing the TargetFramework tag to TargetFrameworks and changing the value to netcoreapp2.0;net47, we can test against both .NET Core 2.0 and .NET Framework 4.7. As you might imagine, this could cause issues for non-Windows developers because there is no native *NIX support for .NET Framework. In the XML, we can even add conditions to only target .NET Core if the tests are not running on Windows by adding Condition="'$(OS)' != 'Windows_NT'">netcoreapp2.0, as sketched below.
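For reference, a hedged sketch of what the relevant property group in the test project's .csproj might look like under this approach:

<PropertyGroup>
  <!-- on Windows, test against both .NET Core 2.0 and .NET Framework 4.7 -->
  <TargetFrameworks Condition="'$(OS)' == 'Windows_NT'">netcoreapp2.0;net47</TargetFrameworks>
  <!-- elsewhere, fall back to .NET Core only since .NET Framework has no native *NIX support -->
  <TargetFrameworks Condition="'$(OS)' != 'Windows_NT'">netcoreapp2.0</TargetFrameworks>
</PropertyGroup>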

Colley Matrix NuGet Package

· 3 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Overview

It's very well documented that I'm a huge college football fan. We're presently in the College Football Playoff era of Division I football, which involves a selection committee choosing four playoff teams to compete to be the national champion. The previous era, known as the Bowl Championship Series era, involved a combined poll of human experts and computer algorithms choosing the two best teams to play in the national championship game. One such algorithm is known as the Colley Matrix. Though not a factor in the post-season selection process anymore, it's still referred to at times, particularly when debating the selection committee’s decisions. Based on the whitepaper written by Colley himself and an existing JavaScript implementation, I developed this NuGet package for simulating head-to-head matchups and applying the Colley algorithm. The algorithm can be applied to any sport or competition without tie games.

Usage

The ColleyMatrix client exposes two methods: SimulateGame and Solve. The client constructor takes one argument: numberOfTeams.

ColleyMatrix colleyMatrix = new ColleyMatrix(numberOfTeams);

This will create a client with an underlying sparse matrix whose dimensions span from 0 to numberOfTeams - 1, corresponding to each team's ID. Next, we can simulate matchups.

colleyMatrix.SimulateGame(winnerId, loserId);

Note that if the winnerId or loserId is not valid with respect to the sparse matrix's dimensions, an exception will be thrown.

You can solve the sparse matrix at any point without modifying the internal state. The solved vector that is returned is a list of scores with the highest score indicating the best team.

IEnumerable<double> solvedVector = colleyMatrix.Solve();
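Putting it all together, here's a minimal end-to-end sketch; the three-team round-robin is hypothetical:

// team 0 beats teams 1 and 2, and team 1 beats team 2
ColleyMatrix colleyMatrix = new ColleyMatrix(3);
colleyMatrix.SimulateGame(0, 1);
colleyMatrix.SimulateGame(0, 2);
colleyMatrix.SimulateGame(1, 2);
// the highest score in the solved vector should belong to team 0
IEnumerable<double> solvedVector = colleyMatrix.Solve();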

Basics of Implementation

SimulateGame updates the matrix state, which is wrapped by an interface called IMatrixProvider. This removes the dependency on a specific matrix implementation from the underlying domain logic. For reference, the ColleyMatrix client ultimately injects a Math.NET SparseMatrix. The updates to the matrix state are very simple.

src/ColleyMatrix/Service/ColleyMatrixService.cs
// off-diagonal entries hold the negated number of games played between the two teams
double gameCount = _matrixProvider.GetValue(winnerId, loserId);
_matrixProvider.SetValue(winnerId, loserId, gameCount - 1);
_matrixProvider.SetValue(loserId, winnerId, gameCount - 1);
// each team's diagonal entry increases by one per game played
_matrixProvider.SetValue(winnerId, winnerId, _matrixProvider.GetValue(winnerId, winnerId) + 1);
_matrixProvider.SetValue(loserId, loserId, _matrixProvider.GetValue(loserId, loserId) + 1);

A list of teams and their corresponding ratings is also maintained.

src/ColleyMatrix/Service/ColleyMatrixService.cs
_teams[winnerId].Wins++;
_teams[loserId].Losses++;
_teams[winnerId].ColleyRating = ComputeColleyRating(_teams[winnerId].Wins, _teams[winnerId].Losses);
_teams[loserId].ColleyRating = ComputeColleyRating(_teams[loserId].Wins, _teams[loserId].Losses);

The formula for computing the Colley rating is very simple.

src/ColleyMatrix/Service/ColleyMatrixService.cs
// body of ComputeColleyRating (sketch); dividing by 2.0 guards against integer division
return 1 + (wins - losses) / 2.0;

For the Solve method, the matrix is lower-upper factorized then solved for the vector of the teams' Colley ratings.

src/ColleyMatrix/Service/ColleyMatrixService.cs
IEnumerable<double> colleyRatings = _teams.Select(team => team.ColleyRating);
IEnumerable<double> solvedVector = _matrixProvider.LowerUpperFactorizeAndSolve(colleyRatings);

Build Status

Build status

Kanji Alive NuGet Package

· 2 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Overview

This NuGet package provides a C# interface to easily query and fetch kanji data from the Kanji Alive public API. This package is designed to simplify development of Japanese learning desktop and web applications on the C#/.NET platform.

Usage

All of the API endpoints are accessible using the KanjiAliveClient. To use the client, simply instantiate it with your Mashape API key as the sole constructor parameter. You can obtain an API key here.

KanjiAliveClient client = new KanjiAliveClient("MY_API_KEY");

Nested inside of the main client are three subclients that mirror the structure of the API endpoints: AdvancedSearchClient, BasicSearchClient, and KanjiDetailsClient. The endpoints are exposed as asynchronous instance methods, so be sure to await them.

List<KanjiSimpleResponse> apiResponse = await client.AdvancedSearchClient.SearchByKanjiStrokeNumber(5);

Contributing

In order to obfuscate your API key for integration tests, add it as an environment variable named MASHAPE_API_KEY (on Windows, user environment variables are stored in the registry as string values). This allows you to discreetly fetch your key at runtime instead of exposing it in the source code.

KanjiAliveClient client = new KanjiAliveClient(Environment.GetEnvironmentVariable("MASHAPE_API_KEY"));

Please ensure that any code additions follow the styling laid out in the .DotSettings file and that all unit and integration tests pass before submitting a pull request. For bug fixes, please add tests. For any questions, issues, or enhancements, please use the issue tracker for this repository.

Build Status

Build status

Thanks

Special thanks to the Kanji Alive team for not only providing their kanji data in a clean, consumable format, but also for hosting a RESTful API to expose it. Note that if you would like to include this kanji data locally in your project, you can download the language data and media directly from this repo.

College Football Opening Weekend 2016

· 8 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

The Plan

Before the start of the 2016 college football season, two good friends and I decided to make an epic pilgrimage for the opening weekend. While most games are played on Saturdays, it just so happened that our alma maters played on the Friday, Saturday, and Sunday of Labor Day weekend. The three of us all lived in Oklahoma at the time, so we began planning the logistics to see if it would even be physically possible to be at each stadium by kickoff. The matchups were Kansas State at Stanford in Palo Alto, California (for Chris), UCLA at Texas A&M in College Station, Texas (for Callen), and Notre Dame at UT in Austin, Texas (for me). Despite requiring about 4,500 miles of total travel, we determined that it would be possible to see all the games barring any sort of delays. We took a gamble, and this is how it all played out.

Thursday, September 1st


The journey began with a five-hour drive from northern Oklahoma to a hotel adjacent to the DFW airport. The three of us had every intention of leaving early to arrive in Dallas at a reasonable hour, but a combination of a forgotten Tyler Lockett jersey and stopping for a GoPro head mount ultimately turned the initial leg of the trip into a six-hour affair. We stumbled into our hotel room well after midnight after making note of a 4:45am shuttle departure to the airport to catch a 6am flight to Oakland. After a few hours of unsatisfying slumber, we packed most of our belongings back into my car, which remained parked at the hotel while we went to the West Coast, and took the essentials with us for a day trip to the Bay Area.

Friday, September 2nd


After a short flight with a timezone change in our favor, we landed in Oakland. The game was in the evening, so we had the whole day to kill before kickoff at Stanford Stadium in Palo Alto. Callen’s sister was kind enough to pick us up from the airport and be our hosts for the day. We started off with brunch and drinks in downtown Oakland and a cruise down Telegraph Ave. before heading towards the Bay Bridge. In San Francisco, we picked up some local brews and went to Baker Beach to sprawl out on that sunny, chilly day. After some time of soaking up the majestic views of the Golden Gate and outrageously expensive homes that surround the beach, we stopped at a pub for an early dinner and some strong IPAs. Not long after that, we hailed an Uber for Palo Alto.

KSU vs. Stanford

| Metric | Value |
| --- | --- |
| Recorded Attendance | 46,147 |
| Stadium Capacity | 50,000 |
| Kansas State AP Rank | Unranked |
| Stanford AP Rank | 8 |
| Victor | Stanford |
| Box Score | 26-13 |
| Overtime | No |

Stanford Stadium was much calmer than we all expected. Even during critical moments of the game and watching Christian McCaffrey’s video game moves, the stadium never got particularly loud. The reported attendance was substantially more than the actual number of people at the game. Looking back at our pictures, we could easily see a number of empty seats. To be fair, this game was over Labor Day weekend and classes had not started yet at Stanford. K-State showed out in numbers and proved again that they are one of the best travelling teams.

After the very solid matchup, we took an Uber from Palo Alto back to San Francisco for a red-eye flight back to Dallas-Fort Worth. Our flight didn’t leave until midnight, and we arrived at SFO about an hour and a half early. This was the first time that exhaustion really began to set in. I thought back to how little sleep we started this long day with and how much longer it was going to be before I was able to crash on a comfy bed. We boarded and departed right on time.

Saturday, September 3rd


Friday and Saturday blended together. The three of us had to pay back the two hours that we gained from the initial timezone change. This meant that it was 6am when we landed in DFW. None of us really slept on the flight, so we stopped for a hearty breakfast at Whataburger, a staple restaurant of Texas. Kickoff for the Texas A&M game was at 2:30pm, so we had plenty of time to make the three-hour drive to College Station. After several stops, we arrived at our hotel at around 11am. As exhausted as I was at this point, I felt completely refreshed after brushing my teeth and a quick shower even though I didn't have time for a nap. After a quick lunch at the Buffalo Wild Wings next door, we took an Uber to the stadium.

A&M vs. UCLA

| Metric | Value |
| --- | --- |
| Recorded Attendance | 100,443 |
| Stadium Capacity | 102,733 |
| Texas A&M AP Rank | Unranked |
| UCLA AP Rank | 16 |
| Victor | Texas A&M |
| Box Score | 31-24 |
| Overtime | 1OT |

As much as it pains me as a Longhorn to say, the A&M gameday vibe was impressive. The atmosphere of this game was staggering especially compared to the game in the Bay Area the day before. Before the game several fighter jets flew overhead. Each of the 100,000 people in the stadium made their presence known. It didn’t help the situation at all that UCLA quarterback Josh Rosen made a comment beforehand that "after 50,000 it all sounds the same." This incited several chants. In true A&M fashion, they did their best to lose the game after securing an early lead. The 4th quarter was dominated by UCLA, but A&M secured an overtime win with a touchdown from a quarterback run. It was awesome to see the crowd’s reaction, although I outright refused to sing the A&M fight song, which is almost entirely about the University of Texas.

As much as I would have liked to check out the local scene, we ate Buffalo Wild Wings again after the game then promptly passed out.

Sunday, September 4th


After the first full night’s sleep in a few days, we drove to Austin. The three of us met up with other friends and ate some amazing breakfast tacos from Juan in a Million, then walked around campus. Touring my alma mater made me incredibly nostalgic, but hanging out at a pre-game house party made me feel incredibly old. After drinks and a barbecue, we made the trek to Darrell K. Royal Stadium.

Texas vs. ND

| Metric | Value |
| --- | --- |
| Recorded Attendance | 102,315 |
| Stadium Capacity | 100,119 |
| Texas AP Rank | Unranked |
| Notre Dame AP Rank | 10 |
| Victor | Texas |
| Box Score | 50-47 |
| Overtime | 2OT |

This game was an emotional rollercoaster ride. As a fan, I had extremely low expectations after an embarrassing loss last year in South Bend. The Irish came out of the gate strong with a touchdown on their opening drive, and I could feel the energy draining from the record-setting crowd. After their kicking unit left the field, Texas was about to have our answer as to who our starting quarterback was going to be. The energy came back immediately when true freshman Shane Buechele took the field. It peaked when he completed his first pass as a college quarterback and finished the opening drive off with a perfectly placed touchdown ball.

Momentum shifted back and forth between the two teams all night long. Texas seemed to be coming out on top when an extra point that should have sealed the game was returned for two points by Notre Dame after a special teams mishap. The game entered overtime. Then a second overtime brought the game to a climactic close with Tyrone Swoopes diving into the endzone for the win. As insane as the crowd went in that moment, what still gives me chills to think about is the 100,000 people singing The Eyes of Texas when it was all over.

Burnt Orange Atom Themes

· 2 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

It's safe to say that I'm biased because of my alma mater, but I personally think that burnt orange is one of the most versatile colors. It's somewhat polarizing due to being the primary color of the University of Texas, but I use it as one of the core colors here on my personal website to show my Longhorn pride. Inspired by this color palette, I've created a theme for Atom, my favorite text editor, and released version 1.0.0 just in time for the start of college football season.

Atom editor in burnt orange

To use this theme, ensure that you have the Atom text editor installed. Themes in Atom are split into UI themes and syntax themes. Syntax themes change the style of the text editing area itself, and UI themes change the style of everything else. I have created both, and they can be installed either through the application menus or via the command line using apm install atom-burnt-orange-ui and apm install atom-burnt-orange-syntax. For more information and the source code, check out the Atom documentation pages for atom-burnt-orange-ui and atom-burnt-orange-syntax.

I'm by no means a UI/UX expert, so feel free to submit a pull request for any improvements. I intend on maintaining the code for future Atom releases.

Travel Diary: Japan 2016

· 22 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

April 30th, 2016 - 電車の風 - Tokyo


As expected, Kaylee and I arrived at Narita Airport in Japan extremely jet-lagged. Even though I was lucky enough to have only one connecting flight in Chicago, the 18 or so hours of travel and 14-hour time difference took their toll on me both mentally and physically.

Kyle, my close friend from college who lives in Tokyo, greeted us right outside of customs after we landed. We exchanged our「お久しぶりですね」 greetings and headed towards the train station. It never ceases to amaze me how Kyle and I are always able to just pick up where we left off. Even though I hadn't seen him in over a year, it felt like our conversation only had a momentary pause. The dumb running jokes and philosophical life conversations seem to run forever in a loop like the train that we were about to board.

We all purchased our tickets and awaited the Skyliner express train. Right on schedule, I was greeted with a brisk gust of wind displaced by the arriving train. It immediately woke me from my exhaustion-induced fog and gave me the strangest sense of déjà vu. Moments like these always come off like a cheap movie transition, but it truly felt like the past year had gone by in an instant and that I had only been gone from Tokyo for a few days. Winds from trains are always one of the first images of Japan that come to my mind. It's a perfect mixture of the natural imagery of traditional Japanese culture and the technology that's prevalent in sprawling metropolises like Tokyo.

May 1st, 2016 - Nooks and Crannies - Tokyo


For my second journey to the land of the rising sun, I opted to see more of the country and spend less time in Tokyo. Don’t get me wrong, Tokyo natives can go their whole lives without seeing all that the metropolis has to offer, but there’s even more to see across the countryside. However, we thought it would be a good idea to schedule a day in Tokyo early on in the trip to get our bearings and get acclimated to all of the changes. I highly recommend this for anyone who plans on traveling within Japan.

We first travelled from where we were staying in Waseda to Sumida to see the legendary Tokyo Skytree Tower. The tower stands at approximately 650 meters tall, making it the tallest structure in Japan and the second tallest structure in the entire world. While it is somewhat of a tourist trap, the view from the Tembo Deck shows the massive sprawl of the largest metropolitan area in the world. 13.5 million people, about 1.5 times as many as New York City, live in Tokyo below. From approximately 450 meters up through a panoramic window, you can see how the iron and concrete seem to consume the earth in every direction for as far as the eye can see.

Skytree Tower

It’s a view with a surreal amount of detail. There seem to be an infinite number of nooks and crannies that house culture, shops, universities, parks, and libraries. When traveling throughout Tokyo, it’s easy to be shielded from the overwhelming logistics of a city that size. Given how advanced the subway system is, it’s very easy to abstractly move from district to district in your mind without fully appreciating the absolute chaos that surrounds you.

That’s not to say that you're never exposed to copious amounts of people. From Sumida we travelled to Taito to see the famous Senso-ji. Near this Buddhist temple is a market that sells everything from homemade food to souvenirs geared towards English-speaking tourists. It was some of the most organized havoc that I had ever seen. People were packed tightly together, slowly moving forward through the human congestion. The notion of personal space did not exist there. Even though I’m well over six feet tall and could breathe freely above the mob, I still felt like I was suffocating. I thought back to how peaceful everything looked from the Skytree Tower earlier in the day. My view now was quite different in many ways.

May 2nd, 2016 - Sea of Trees - Aokigahara


Aokigahara is without question the ultimate symbol of life and death. Known as the Sea of Trees, it's a dense forest in front of Mount Fuji where many have committed suicide. The forest came to be upon the cooled lava of a 10-day eruption in the year 864. From death comes life, I suppose. The origin of Aokigahara as the so-called Suicide Forest isn’t exactly clear. Some attribute its morbidity to ubasute, the practice of abandoning elders to die in times of famine. Some credit the forest’s popularity in recent years to a novel called Kuroi Jukai published in 1960. In any case, since the early 2000s alone, there have been hundreds of suicides and many more attempts.

Despite the unchecked growth of the forest, it has a sense of rot to it. Many trees have greedily drained the nutrients out of the soil, leaving their roots exposed and frail. There are also some notable parasitic plants that feed and grow on one another. A sense of stillness is the only thing that seems to permeate the thick foliage of the towering trees. Some light and most wind were blocked by the upper layers of leaves, leaving many parts dimly lit with the sound of leaves rustling but no breeze for hikers to feel. It's quite easy to get lost and perish in the wilderness, and there are many warning signs not to deviate from the trails for this reason.

I knew of the forest from a documentary that I saw on YouTube when I was a college student. I distinctly remembered a chilling sign beckoning those contemplating suicide to reconsider, but I did not know its exact location, nor could I find much detail on the internet. Due to some logistical issues, I was unable to see the forest for myself during my first trip to Japan last year. This time, I wanted to make sure that I had the opportunity.

The three of us took trains for about an hour from Waseda in Tokyo to Fujikawaguchiko, a small town just north of Fuji. From Fujikawaguchiko, we plotted our trip to Aokigahara. We explored the area while asking minimal questions about the location of the forest to avoid attention from the locals. When asking a guide for a map of the area, he seemed puzzled as to why we wanted to go to the forest specifically. There are numerous tourist attractions in the area, so I told the guide that we were interested in them so as not to bring any attention to ourselves. He told us about the caves and other sights and pointed us to the correct bus. We rode the bus for about thirty minutes through the town and surrounding area until one of the last stops. There were countless breathtaking views of green mountains circling a peaceful lake.

While searching for the sign, we enjoyed all of the activities that Aokigahara has to offer: hiking through serene trails and crouching through caves. After a bit more research, we eventually found out that the trail with the anti-suicide sign was supposed to be located somewhere near the so-called Ice Cave. As it turned out, this cave is an attraction with many visitors. It seemed hard to believe that something so depressing and morose would be near a bunch of families posing in front of natural wonders. Sure enough, just behind the gift shop was a trail leading back into the forest. A short way into the path, there it was.

Aokigahara sign

「命は親から頂いた大切なもの。もう一度静かに両親や兄弟、子供のことを考えてみましょう。一人で悩まずまず相談してください。」

"Life is a precious gift from your parents. Quietly think once more about your parents, siblings, and children. Don’t keep this to yourself; please discuss this with someone."

At this point, the mood changed amongst the three of us. A certain gravity weighed down upon us knowing that hundreds of people had entered this path never to leave. We decided to enter this final trail and see what we'd find. Despite the fact that a hundred or so vacationing tourists were right outside the trail, their voices seemed to fade almost immediately behind the thicket. We walked through this trail much more slowly than the others we had walked earlier in the day. Our conversation had died out as well, and the only sound to be heard was leaves rustling many meters above us.

We didn't really know what lay ahead, but I felt this strange compulsion to continue going further into the forest. We had the map of the area, but this particular path was an easily missed gray line at the bottom that ran off the page. This trail looked very similar to the others at first, but it quickly became noticeably darker. Kaylee found a laminated card with what we assumed to be a faded religious figure and a phone number for a suicide hotline. We saw many signs warning us to stay on the main path, but eventually I saw one that stuck out to me from the Vice documentary.

The already somber mood became sobering. The stillness somehow became even more still. We decided to leave the main path, but not to stray too far. It would be very possible to unintentionally die in the forest if we wandered far from the path and couldn't find our way back. "Well, we came all this way," I remember saying.

At first, we walked as far as we could with the sign that said no entry still in sight. That's when I saw the first tissue. My heart sank, and the experience became even more real. It was a clear marker for navigating back to the main path. I examined the tissue and saw that it was light blue with what looked like hearts printed on it. It was very feminine. Immediately, I began making deductions about the personality of whoever left it there. I thought about how they had the same idea we did about finding our way back out of the forest. It had been raining every day that week in the area prior to our arrival, so we knew that it had been tied recently; otherwise, it would have disintegrated. The fact that this person used a flimsy tissue instead of a sturdy line or ribbon suggested that they were not an experienced hiker.

Forest tissue

We decided to follow the tissues. Honestly, I hadn't taken into consideration what I would have done if I found the person on the other end, but we pressed on. At first the tissues were in very rapid succession. Then they became more sparse. Eventually, the trail began to rise much more steeply and dissipate, and I didn't find any more tissues. I scouted ahead a bit, asking my companions to stay back at the last tissue, but I didn't find anything. Without food or rope to mark our way back, we decided to leave the forest. I'd like to think that the person turned around and left the forest, but we'll never know.

May 3rd, 2016 - Change of Pace - Chiba


After an extremely emotional day in Aokigahara, a light-hearted day in Chiba was more than welcome. The three of us attended Japan Jam Beach, an annual all-day rock music festival. Chiba-shi, which sits on Tokyo Bay, reminds me a lot of the San Francisco Bay Area. Most of Chiba gave me a California vibe as well.

I had the pleasure of seeing two of my favorite bands that I frequently listened to during my university days when I first started studying Japanese: The Back Horn and the world-famous Asian Kung-Fu Generation. Even though the three of us attended the same music festival when in Japan last year, I can’t even begin to put into words how it felt to see in person the bands whose lyrics I spent hours translating with one of my closest friends. Back then, seeing Japan was just a distant dream that filled the spaces in between my classes. To be there on the beach just felt like anything was possible.

May 4th-5th, 2016 - The Old City - Kyoto


Vacation usually implies a sense of relaxation, but at this point in the trip we were all already exhausted. The past two days had been a blur of trains, buses, and walking everywhere. Each day had been filled with hours of transit, constant motion, and a draining mixture of emotions. It didn’t help that Japan doesn’t observe Daylight Saving Time like the US does. Sunlight flooded our little Tokyo apartment around five every morning, and it affected my sleeping patterns greatly. I remember waking up constantly, and I couldn’t help but think back to the midnight sun from when I lived in Alaska.

This day we had to wake up even earlier than five in the morning. After getting back to Tokyo around 11 the previous night, we were too exhausted to pack for our trip to Kyoto and Okinawa after that. The bullet train to Kyoto was going to leave at 6:25 with or without us, so we quickly packed a small bag and headed to Tokyo Station. We cut it quite close and the doors closed a couple minutes after we sat down.

It’s about a three-hour ride on the train from Tokyo to Kyoto. Kyle and Kaylee quickly drifted off, but I still couldn't sleep. Despite the fact that we were heading west on the main island at about 200 miles per hour, it was an incredibly smooth ride. The only evidence of how fast we were really going was the blurred rice fields outside my window. We arrived in Kyoto right on time later in the morning, and I was already exhausted by then.

It was extremely difficult to keep up the pace at which we were traveling for such a prolonged time, but we had very limited time in Kyoto, so we pressed on. We arrived at Kyoto Station with nothing planned for the next day and a half. After walking around the massive station, which feels more like a shopping mall, for a while, we acquired a map of the area and decided to head to Nijo Castle. The bus that we rode to the castle must have had fifty tourists inside. It was by far the most cramped that I was during the entire trip.

The castle itself was absolutely beautiful. At nearly 400 years old, you can practically smell the history. Inside, there were numerous paintings and other prominent pieces of artwork from the era. There was also a serene garden in the courtyard area. It always humbles me to be in a place that has already existed much longer than I will.

After visiting the castle, we took the Sagano Line to Arashiyama. This is where we spent most of the day. Kyoto was the capital of Japan long before Tokyo and has an extremely rich cultural history. Because of this, Kyoto is very popular amongst tourists from all over the world. Due to its accessibility and absolutely gorgeous scenery, Arashiyama especially caters to tourists. There’s a main street in front of the famous Togetsukyo Bridge that houses many shops and restaurants. It was a welcome reprieve to relax in this area and enjoy the natural beauty of the surrounding mountains and the wide Katsura River.

The last activity we enjoyed in the Arashiyama District was the Iwatayama Monkey Park. It was a bit of a hike up to the top, but the scenery was simply breathtaking. In addition to the monkeys, at the peak is a beautiful view of the city. It’s nowhere near as sprawling as Tokyo, but there’s still a lot to appreciate. I personally enjoyed the views here more because of the nature that seems to seamlessly blend into the city.

After Arashiyama, we decided to head to our ryokan, a traditional Japanese inn, which was located near Kyoto Station. After a few blunders with slippers, we made it to our room and everyone sprawled out on their tatami mats. Despite all of the traditional culture around us, we grabbed McDonald’s for dinner and turned in early for the night.

The next day was mostly reserved for traveling. From Kyoto Station, we went to Kansai Airport to fly to Okinawa for the next few days. It’s about an hour and a half train ride to the artificial island in Osaka Bay where the airport is located. From Kansai, we had a two-hour flight to the Pacific island of Okinawa.

May 5th-8th, 2016 - Texas in the Pacific - Okinawa


Immediately upon landing in Okinawa, the differences from mainland Japan became extremely apparent. Tokyo, and to some extent Kyoto as well, seemed very focused on appearance. Tokyo is a global fashion hub and home to many corporate employees. It was rare to see someone who wasn’t well-dressed. Waiting in the small Naha Airport, I couldn’t help but compare it to sprawling Narita Airport in Tokyo.

After leaving the airport, we hailed a taxi that looked fresh out of a 1980s film. It was an antique Toyota whose model I’m sure was never sold in the United States. The interior was a grey, worn fabric that was equally dated. The driver was dressed like a retired person in Florida. I immediately realized that, unlike in the metropolises of mainland Japan, life in Okinawa would be nearly impossible without a car. After about 20 minutes, we arrived at our Western-style hotel in Naha. From our window we could see how everything in Okinawa looked aged: buildings, cars, and style. The best explanation that we could come up with was that it’s probably very expensive to import goods to the island.

Admittedly, I wasn’t sure that I was going to like Okinawa at first. Tokyo, which has a modern New York City vibe to it, had so many sights and splendor, but Okinawa just seemed somewhat dilapidated and retro. After a quick shower and nap, we headed to dinner at a nearby restaurant. I knew as soon as we walked in that it was going to be one of my favorite experiences of the trip.

We couldn’t really hear anything from outside, but we were greeted with the loud, melodic sounds of taiko and shamisen, traditional Japanese instruments, as we walked in. It was an interactive experience. People were swaying, clapping, and singing proudly. The first page of the menu featured the lyrics for the songs. A little infant girl was the life of the party. She stumbled around the restaurant laughing and clapping. Audience participation was required.

The barrage on my senses was a little overwhelming at first. We were meeting up with some mutual friends from Tokyo who we would be spending the remainder of the trip with, and I was trying to focus on getting everyone introduced. However, the conversation, food, alcohol, singing, and dancing all flowed naturally throughout the night. Our table was a constant mixture of Japanese and English used interchangeably. Several sake-fueled Snapchats were sent.

Getting the exposure to native speakers was amazing Japanese practice, and it was equally amazing to see how far my skills had come over the years. During an intermission, the man singing and playing the shamisen asked who had come to Okinawa from the furthest place. Someone from the audience proudly proclaimed that he had come all the way from Hokkaido. The shamisen player looked towards our table and made a comment about how the foreigners had probably come from far away as well. He switched to broken English and asked where we had come from. I replied in Japanese that we had come from America to visit my friend, who I had studied Japanese with in college and who now lives in Tokyo. He made a joke about how my Japanese was much better than his English and resumed the music. For the rest of the night, I was approached by several Japanese people who wanted to ask me random questions about my studies and why I came to Okinawa.

The next morning, our new friends picked us up in a rental car. They introduced us to some of their friends who live in Okinawa. One of them is a diver and works at the military base. He was kind enough to bring us onto the base as his guests, and we were able to enjoy beautiful White Beach all by ourselves. We were even allowed to use their equipment freely. It was the kind of hospitality that you feel like you can never fully repay. We spent all day on the base and left sunburned all over our bodies.

Our new friends found us a house with a private beach to stay at. Honestly, the view looked like something off of a heavily Photoshopped postcard. I had never seen anything like it before. After dinner, we all laid on the beach together and stargazed beneath a clear sky miles and miles away from any light pollution.

We woke the next day sore and stiff. Our bodies were red and blistering from the Pacific sun. We decided to take it easy and explore Motobu. We hiked along the beach, had lunch at a cafe with a breathtaking view of the blue water, and even went to the famous aquarium. That night, our gracious hosts even cooked for us, and we taught each other drinking games from our respective cultures.

In Okinawa, life is very simple. Everyone dresses, speaks, and acts very casually. In Tokyo, people seemed to use set, polite phrases when talking to each other. My conversations in Okinawa were much more down-to-earth and personal; some would probably even have been considered rude in Tokyo. There were also some small differences in dialect. For example, when saying welcome, which is 「ようこそ」 in the Tokyo dialect, they instead use 「めんそーれ」. Okinawa wasn’t always a part of Japan either, and at times, it felt like a completely different country. All of these factors combined reminded me a lot of my home state of Texas.

Leaving Okinawa and our new friends was one of the saddest moments of the trip for me, but I’m truly grateful for every amazing experience that I had there and the people who made it possible.

May 8th-10th, 2016 - Hard Goodbyes - Tokyo

A seven-hour delay in Naha Airport threw off our plans for the remainder of the trip. We arrived back in Tokyo much later than we had planned and had to sprint to catch the last Skyliner train at 10:30pm. We didn’t make it back to the apartment until midnight and promptly crashed from exhaustion after the last leg of our trip. We originally had plans to go to Hakone the following day, but decided to rest before our long flight back to the United States.

We hung around Shinjuku most of the day instead and ended up eating Domino’s pizza and watching Big Daddy on Netflix for our last night. It rained all day, which made for a very somber departure. Kyle asked me if it would be a while before I came back to Japan again. I told him that coming twice in two years was a lot already. It made me very sad to think about not being back for some time, so I guess I have no choice but to go back again next year, even if only for a shorter time.

ISP Complainer

· 5 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Motivation

I have long battled my internet service providers, and they have waged war on their customers for many years as well. Poor customer service, data caps with obscene overage charges, and many other issues have plagued me for years, as they have many other Americans. Like many of these people, I’m stuck with only one viable option for a provider due to their monopolistic practices. Recently, I was forced to upgrade to a more expensive monthly plan after going over my data cap three times non-consecutively over a year-and-a-half period. Rather than charging me the usual absurd per-gigabyte overage charge, I’m now stuck paying more per month for the remainder of my time with my ISP.

"You’ll also get an increase in speed as well," the representative told me. Well, I’ve decided to hold my ISP accountable. Since my ISP was so diligent in tracking my overages, I’m going to be diligent in tracking the speeds that I’m getting. Inspired by a post on Reddit, I built an application to let my ISP know whenever I’m not getting my promised speeds, and I’m running it off of my Raspberry Pi. Here’s how you can use it too.

Raspberry Pi Setup

Note that you do not need to run ISP Complainer on a Raspberry Pi. I built this application using Node.js, which can run on any operating system. I treat my Raspberry Pi like a little UNIX server and leave it running all the time. If you do not plan on running this on a Pi, you can skip to the next section.

The Raspberry Pi uses an ARM processor, while most modern desktop processors use the x86 instruction set architecture. Because of this, you’ll need Node.js binaries compiled for ARM. In the early days, these binaries weren’t maintained like they are for x86, and compiling from source was required. This is still a viable option if you want to target a specific version, but you can also just use a download maintained by node-arm using the following commands:

wget http://node-arm.herokuapp.com/node_latest_armhf.deb
sudo dpkg -i node_latest_armhf.deb

Update: the ARM binaries are now available on the Node.js downloads page.

Local Setup

Once Node.js and NPM are installed, fetch the code for ISP Complainer using the command git clone git@github.com:scottenriquez/isp-complainer.git. You can also access the repository via the web interface. Once the repository has been cloned, navigate to the /isp-complainer root folder and install all dependencies using the npm install command. Then, start the web server with the command node server.js or nodemon server.js.

Configuration

Start by creating a Twitter API key. This will allow you to programmatically create tweets directed at your ISP. You can either create a new Twitter account like I did with @ISPComplainer or use your existing account. Note that in either case, you should treat your API key just like a username and password combination, because anyone who intercepts it can take actions on your behalf. To create an API key, log into the Twitter application management tool and create a new app. Follow all of the steps and take note of the four keys that you’re provided with.

Once you have your keys, you’ll need to add them to a file called /server/configs/twitter-api-config.js. This file is excluded by the .gitignore so that the API key is not exposed upon a commit. Copy the template to a new file using the command cp twitter-api-config-template.js twitter-api-config.js, and then enter the keys inside of the quotes of the return statement for each corresponding function. If you would prefer to store these inside of a database, you can inject the data access logic here as well. See the config template below:

twitter-api-config-template.js
module.exports = {
    twitterConsumerKey: function () {
        return ""
    },
    twitterConsumerSecret: function () {
        return ""
    },
    twitterAccessTokenKey: function () {
        return ""
    },
    twitterAccessTokenSecret: function () {
        return ""
    },
}

One other config must be modified before use:

complaint-config.js
module.exports = {
    tweetBody: function (promisedSpeed, actualSpeed, ispHandle) {
        return (
            ispHandle +
            " I pay for " +
            promisedSpeed +
            "mbps down, but am getting " +
            actualSpeed +
            "mbps."
        )
    },
    ispHandle: function () {
        return "@cableONE"
    },
    promisedSpeed: function () {
        return 150.0
    },
    threshold: function () {
        return 80.0
    },
}

tweetBody() generates what will be tweeted to your ISP. Note that the body must be 140 characters or less including the speeds and ISP’s Twitter handle. ispHandle() returns the ISP’s Twitter account name. A simple search should yield your ISP’s Twitter information. Be sure to include the '@' at the beginning of the handle. promisedSpeed() returns the speed that was advertised to you. threshold() is the percent of your promised speed that you are holding your ISP to. If the actual speed is less than your promised speed times the threshold, a tweet will be sent.
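To make that last rule concrete, here's a small sketch of the comparison described above; this is a hypothetical helper, not the project's actual code:

// hypothetical helper illustrating the threshold check
function shouldComplain(promisedSpeed, actualSpeed, threshold) {
    // complain when the measured speed falls below the promised speed times the threshold
    return actualSpeed < promisedSpeed * (threshold / 100);
}

// with the config above: 110mbps measured vs. 150mbps promised at an 80% threshold
shouldComplain(150.0, 110.0, 80.0); // true, so a tweet would be sent (110 < 120)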

Optionally if you want to manually change the port number or environment variable, you can do so in the server file:

server.js
...
var port = process.env.PORT || 3030;
var environment = process.env.NODE_ENV || 'development';
...

Using the ISP Complainer Dashboard

ISP Complainer

After starting the server, you can access the dashboard via http://localhost:3030/. The interface offers two options: manual and scheduled checks. The manual option allows you to kick off individual requests at will, and the scheduler runs the process over custom intervals. All of the scheduling is handled with Angular’s $interval, and the results are tracked in the browser. Note that if you close the browser, no more checks will be scheduled, and you will lose all of the results currently displayed.

For any issues or enhancements, feel free to log them in the issue tracker.

Getting Started with Python and Flask on AWS

· 6 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

Getting Started: Convention Over Configuration

This software development paradigm has become prevalent in many modern frameworks and tools. It can be described as streamlining and standardizing the aspects of development that are similar across all or most projects, while specifying configuration only for those that deviate from the established convention. For example, most Node.js developers call their main file server.js. Let’s say that I want to write a generic deployment script for Node.js applications. It’s much easier to include a simple command like node server.js than to try to determine the developer’s intended start file. This simple agreement saves a great deal of time and configuration detail. The emphasis of this paradigm is that the knowledge holds for any team or company, making organizations full of old tribal knowledge a thing of the past.

Personally, I’m a huge proponent of convention over configuration. Amazon Web Services uses this paradigm as part of its Platform as a Service offerings, but unfortunately the documentation is sparse, which is the main downfall of convention over configuration. Simply put, if no one knows about your convention or which conventions you’re opting to use, it’s ultimately useless. Soapbox aside, I’m going to cover how I got my initial Python environment set up and some conventions necessary for successful deployment.

Setting Up a Python Environment Using Elastic Beanstalk

I initially tried to use this guide from Amazon to deploy a simple RESTful service built with Flask. I’m not an infrastructure person at all, so I struggled through the steps. I failed to produce anything meaningful, so I decided to switch up my approach. One odd thing about this documentation, other than the fact that it was created in 2010, is that it takes the Infrastructure as a Service approach. The instructions have you provisioning an EC2 instance and creating your own virtual environment, then manually starting and stopping Elastic Beanstalk. As a developer, I like being abstracted from all of that whenever possible, so I decided to use the Platform as a Service approach instead.

The first step is to create your application using Elastic Beanstalk via the AWS Management Console. When you create your new application, AWS will automatically create an EC2 instance for your application to reside on. During this initial setup, you can specify what predefined configuration you want to use such as Python, Node.js, and PHP. For the sake of this demo, choose Python. Once you choose the rest of your options, most of which are overkill for this simple demo, AWS will create your application within a few minutes.

Configuring Your Local Machine

While your application and EC2 instance are being provisioned, start preparing your local machine. First of all, install the Elastic Beanstalk command line tools via pip or Homebrew if using Mac. Secondly, create an IAM user and store your credentials locally so that they are not present in source code. Note that you can manage the roles and permissions for all users in the Identity and Access Management section of the AWS console. For the sake of this demo, be sure to grant the user the full S3 and Elastic Beanstalk access policies.

Testing the Code Locally

I have created a Python 2.7 demo for this post and hosted the code in this GitHub repository. You can clone the code using the following command in the desired directory: git clone git@github.com:scottenriquez/scottie-io-python-aws-demo.git. I've also included the source code below for convenience.

application.py
from flask import Flask, request, url_for, jsonify
from boto.s3.key import Key
import boto
import boto.s3.connection
import uuid

application = Flask(__name__)

@application.route("/data/", methods=["POST"])
def data():
    try:
        data = request.form["data"]
        connection = boto.connect_s3()
        # update with your S3 bucket name here
        bucket_name = "test"
        bucket = connection.get_bucket(bucket_name, validate=False)
        key = Key(bucket)
        guid = uuid.uuid4()
        key.key = guid
        key.set_contents_from_string(data)
        key.make_public()
        return jsonify({"status": "success"}), 201
    except Exception as exception:
        return jsonify({"status": "error", "message": str(exception)}), 500

if __name__ == "__main__":
    application.run()

requirements.txt
flask==0.10.1
uuid==1.30
boto==2.38.0

After obtaining the code, make sure the proper dependencies are installed on your machine. This demo requires three pip packages: Flask, UUID, and Boto. Be sure to create an S3 bucket and update the code to target your desired bucket. Once all of this is configured, you can run the code using the command python application.py.

This code creates a simple RESTful service that takes raw data and stores it as an S3 file with a universally unique identifier for the name. To test the code, use a REST client like Postman to perform an HTTP POST on http://localhost:5000/data/ with the parameter called data containing the data to be posted to S3. The service will return a JSON message with either a status of "success" or an exception message if something went wrong.
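If you prefer the command line to a GUI client, the same check can be done with curl; the endpoint and parameter name come from the code above:

curl -X POST -d "data=hello world" http://localhost:5000/data/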

Deploying to Elastic Beanstalk

It’s important to note that the names of the two files cannot be changed. As mentioned in the first paragraph, AWS uses convention over configuration. When deploying, Elastic Beanstalk searches for a file called application.py to run. The other file is used to manage dependencies. If you didn’t have the three required pip packages on your local machine, you simply fetched them. Due to autoscaling and other factors, you can’t guarantee that the server your code is deployed to contains the packages that your code depends on prior to deployment. Because of this, rather than using SSH to connect to each new EC2 instance and executing several pip install commands, it's best to list all dependent packages and versions inside of a file called requirements.txt. This way, whenever the code is deployed to a new EC2 instance, the build process knows which packages to fetch and install. A common way to generate this file is shown below.
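Assuming your local environment contains only the packages the project needs, pip can generate the file for you:

pip freeze > requirements.txt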

Once the code is working locally, we’re ready to deploy to AWS. Start by running the eb init command in the code’s directory. Be sure to choose the same region that was specified when the Elastic Beanstalk application was created. You can verify that the environment was created properly by running the command eb list or simply run eb for a list of all available commands. After initialization, execute eb deploy. The status of the deployment can be monitored via the command line or the AWS console. Once the deployment is completed, testing can be done via the same REST client, but substitute the localhost URL for the Elastic Beanstalk specified one.
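In terminal form, the workflow described above boils down to a few commands:

eb init      # choose the same region specified when the application was created
eb list      # verify that the environment was created properly
eb deploy    # deploy; monitor status via the command line or the AWS console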

You now have a working Python REST service on AWS!

Bright Nights, Nomadic Days

· 3 min read
Scottie Enriquez
Senior Solutions Developer at Amazon Web Services

It feels strange waking up in the Lower 48 again. When I left Alaska, the sunlight was already making people and plants manic again. The grass that was dead for months bloomed vivaciously in a matter of days. The snow was long gone, but the rain hadn’t shown itself quite yet. The midnight sun was already creeping and beaming over me.

It’s hard to believe that it’s already been over a year since I first went to Alaska. It seems fitting that I flew out on the same day that I originally arrived in Anchorage. All in all, I spent over eight months there, only returning to Texas to finish my last semester of school. I lived fully while I was stationed in Alaska, managing to take only a few reprieves to bum around the house or work on my passion projects.

I’m exhausted in so many ways after everything though. During my early college days, I was eager to pack up my entire life and travel somewhere new for an opportunity. After five years of constant moving, I welcome a more static lifestyle. Ever since I left Austin behind me last December, I’ve been living out of a suitcase. I took roughly the same amount of things to Alaska for my six-month stay as I did for my internship. Each time I moved, I trimmed more fat from my belongings, and now it seems that my life can be contained in almost a single bag.

I’ve spent a lot of time lately thinking about what exactly makes a home a home. Being back in Austin felt a little different this time. In some ways, I felt like I had just been gone a couple of days. In others, I felt like I had been gone for eons and the city wasn't even recognizable anymore. I took a sentimental stroll through my alma mater’s campus with my closest friend from college. It made me think of my last walk to the campus bus stop. My last walk to the Transit Center in Anchorage. How my life had suddenly afforded me a new lifestyle that I couldn't have imagined attaining at such a young age. Visiting my hometown felt completely strange as well. I felt no real attachment to the place anymore; not that I ever truly did anyway.

Life just keeps on getting stranger. As I write this from a hotel in Galveston, Texas, I can’t help but wonder what the journey towards making a new home will be like this time around. I never thought that I would say this, but I’m very happy about settling down in one location even if only for a couple of years. Unfortunately, I won’t know exactly where that will be until the start of August. The sleepless nights have already begun. As always, I’m still learning to embrace the uncertainty.