Tag: C#

It was a bit over 10 years ago that I first wrote about my efforts to create a better version of a long-dead native iOS app that Mozilla wrote to dip their toes into the iOS pool. All their app did was give you access to certain data in your Sync account: bookmarks, history, open tabs, etc. It was perfect for me because I had an iPhone and lots of Firefox bookmarks, having used the browser since 2006.

My desire was to write a web app that worked the same way but avoided a really dumb bug in their app, and to learn some new technologies while I looked for a new gig (I had just moved to Seattle, WA). I reached into my depths of creativity and called it the Bookmark Browser. It was a tremendous success. I still use the app today, over 10 years later. But it’s changed a lot in that time, evolving to use newer tools and work with changes to Firefox services. Follow along as I catch up on what’s changed since the last revisit.

Sidebar: I want to first tip my hat to Valérian Galliat, who in mid-2021 did a deep dive on his efforts to do the same thing the Bookmark Browser does, which is to access data in Firefox Sync from an application that’s not Firefox. I wasn’t able to take advantage of the code he wrote, but it’s a detailed and entertaining journey into what it takes to build a third-party app on top of the Firefox ecosystem (hint: far, far too much). It also inspired me to revisit my original back end code and see if I could get it working again.

The original 2012 Bookmark Browser implementation was a front end built on jQuery Mobile + KnockoutJS with a back end written in C#. That back end leveraged a client library written to do the heavy cryptographic lifting of getting data out of Sync. Life was grand until 2016, which is when authenticating against Firefox services started becoming cumbersome. Previously my code could log into my Sync account and pull down data immediately, with no other interaction needed. Mozilla was in the process of improving their services and hardening things around the login API. I ended up having to code around those restrictions (e.g. needing a second step in the process to verify a login). The good news was it still worked.

Also in 2016 I started using AngularJS in my day job and really liked it. In early 2018 I decided to convert the front end to that framework. It was also around this time that Sync access just stopped working. I’d get an ‘Unauthorized’ error when trying to call their login API and couldn’t figure out a way past it. So I switched the back end to accept an upload of a JSON bookmark backup made from the desktop version of Firefox, and added an endpoint to fetch that data. It was annoying to have to go through the extra step of uploading a backup whenever I wanted to refresh the bookmark data stored on my phone. But it was worth it to have the app work consistently.

In December of 2020 I decided another tech stack update was in order, because AngularJS was nearly dead and I wanted to switch hosting providers. I re-wrote the front end in React, which I had been using for over a year and a half in my day job. The back end was the same C# web service that accepted a bookmark backup file, using the latest 4.x version of .NET. I still prioritized certainty of operation over the convenience of being able to get data directly from Sync. The fun part of this upgrade was figuring out how to have front end components communicate with each other, which was very easy in AngularJS. I decided to use the now-much-improved Context API and the various hooks React now offers. I replicated all functionality and resolved some nagging CSS issues along the way. It was a great improvement. I then moved everything to Azure for hosting in a Windows VM, and Azure DevOps for CI/CD (the VM also hosted this blog and therefore ran the WordPress stack).

The app worked great, but as time went on I really, really wanted to figure out how to talk to Sync directly again. Then in early 2022 I found Valérian’s blog series on doing exactly what I wanted to do, purely by chance after a random Google search. As luck would have it, at that time I was considering porting the back end to Node.js, since I wanted to ditch my Azure VM and set everything up in a Linux-based App Service. And he had some JavaScript code! Sadly, when I tried to use it I ran into an error I couldn’t explain while trying to authenticate. But he said he had gotten it to work, which suggested it was possible. I dusted off my original C# web service, updated it to a .NET 6 project, and compared that code to the JavaScript Valérian had written to authenticate and access the encrypted contents of Sync storage. I really only found one important difference: a reason identifier for the initial login request.

The first step in the crypto-dance to get data out of Sync using Mozilla’s original BrowserID protocol is to make a login request using your email address, Firefox Accounts password, a verification method, and a reason for the request. In my login API that request boils down to the following, using some helper code for the mechanics of making an outbound request and a class for this particular request:

using System.Runtime.Serialization;

[DataContract]
public class LoginRequest
{
    public LoginRequest(Credentials credentials)
    {
        this.Email = credentials.Email;
        // AuthPW is the derived authentication key, hex-encoded; never the raw password
        this.AuthPW = BinaryHelper.ToHexString(credentials.AuthPW);
        // These two fields weren't in the original client but are now required (see below)
        this.VerificationMethod = "email";
        this.Reason = credentials.Reason;
    }

    [DataMember(Name = "email")]
    public string Email { get; private set; }

    [DataMember(Name = "authPW")]
    public string AuthPW { get; private set; }

    [DataMember(Name = "verificationMethod")]
    public string VerificationMethod { get; private set; }

    [DataMember(Name = "reason")]
    public string Reason { get; private set; }
}

// keys=true asks the server to also return a keyFetchToken for retrieving the Sync keys later
Post<LoginRequest, LoginResponse>("account/login" + (keys ? "?keys=true" : ""), loginRequest);
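One side note on that request: the authPW value is not the raw account password; it gets stretched and derived on the client before it’s ever sent. Here’s a minimal sketch of that derivation, based on my reading of the Firefox Accounts ‘onepw’ protocol; the salt and info strings and the iteration count below come from that reading, so treat them as unverified rather than gospel:

// Sketch of deriving authPW from the account password, per my reading of the
// Firefox Accounts "onepw" protocol. The built-in HKDF class requires .NET 5 or later.
using System.Security.Cryptography;
using System.Text;

public static class AuthPwDerivation
{
    public static byte[] Derive(string email, string password)
    {
        // Step 1: PBKDF2-SHA256, 1000 rounds, with an email-scoped salt ("quick stretch").
        var salt = Encoding.UTF8.GetBytes("identity.mozilla.com/picl/v1/quickStretch:" + email);
        byte[] quickStretched;
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, 1000, HashAlgorithmName.SHA256))
        {
            quickStretched = pbkdf2.GetBytes(32);
        }

        // Step 2: HKDF-SHA256 with a fixed info string produces the 32-byte value that
        // BinaryHelper.ToHexString turns into the authPW field of the login request.
        return HKDF.DeriveKey(
            HashAlgorithmName.SHA256,
            ikm: quickStretched,
            outputLength: 32,
            salt: null,
            info: Encoding.UTF8.GetBytes("identity.mozilla.com/picl/v1/authPW"));
    }
}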

In the original Sync client I used, the LoginRequest class only had the email address and password. I think what happened is that at some point the other two values became required, and Mozilla didn’t really go out of its way to tell third-party developers maintaining apps that authenticated to Sync via code. But Valérian noticed it, and his implementation passes in all of those pieces of data. So I simply added those two properties to the class and it worked. Sort of. I had to go through some trial and error, because apparently there are a couple of different verification methods you can use. The reason value always needs to be ‘login’, which avoids any immediate need for verification. The verificationMethod value can be one of these supported values:

  • email: sends an email with a confirmation link.
  • email-2fa: sends an email with a confirmation code.
  • email-captcha: sends an email with an unblock code.

I tried email-2fa and email-captcha but couldn’t get either one to work. I don’t know why, but in the end it didn’t matter. The process now has three steps: step one is making a login request, step two is clicking a link in the verification email, which establishes a valid login session with the Firefox Accounts services, and step three is getting the bookmark data. The legacy BrowserID protocol that all of this code uses requires a new set of keys to be generated each time you want to make requests to Sync storage, because they are very short-lived. So in step three I have to make the same login request again, but this time it passes with flying colors and I simply get an email warning that a successful login was made with my credentials, make sure it was me, and so on.
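Pieced together, and reusing the Post<> helper from the snippet above, the whole dance looks roughly like this. It’s a sketch: error handling, the LoginResponse contents, and the downstream key derivation and Sync storage calls are all glossed over.

// Condensed sketch of the three steps; not the actual service code.
var loginRequest = new LoginRequest(credentials);   // reason = "login", verificationMethod = "email"

// Step 1: the first login attempt. The credentials check out, but the new session
// still needs to be verified, so Firefox Accounts sends the confirmation email.
var firstAttempt = Post<LoginRequest, LoginResponse>("account/login?keys=true", loginRequest);

// Step 2 happens entirely outside the code: I click the link in that email,
// which marks the login session as verified.

// Step 3: repeat the exact same login. This time it sails through (and triggers the
// "new sign-in" notification email), and the response carries the sessionToken and
// keyFetchToken needed to derive the short-lived keys and pull bookmark data out of
// Sync storage.
var secondAttempt = Post<LoginRequest, LoginResponse>("account/login?keys=true", loginRequest);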

(Image caption: I love J.K. Simmons)

As of today, Mozilla still supports their legacy BrowserID protocol and infrastructure for authenticating an account and using their various Sync services. This is very nice of them. But I’m under no illusion that any of it is permanent; it could all disappear tomorrow. I don’t think that’s likely, though, since my guess is that parts of Firefox or other Mozilla apps/tools still use the protocol, so they will support it for a while still. As long as they don’t change it I’ll be able to get bookmark data out of my account and into Bookmark Browser every time. And even if they do finally pull the plug on the BrowserID stuff, I left in the bookmark backup code so I can fall back to that.

Postscript:
Mozilla does have a mature OAuth implementation that they put together a while ago, which is obviously a better way to authenticate. But it assumes you have a valid client ID for the initial auth challenge, and that they know the URL to redirect users to once they’ve obtained a token. Apparently they want you to ‘consult’ with them to set it all up, which sounds rather unappealing. As Valérian mentioned, and I agree, Mozilla doesn’t seem to want to make it easy for developers to build apps on top of the Sync ecosystem. Which is sad, because a lot of cool things could probably be built.

Update:
Turns out ‘a while still’ means up to May of 2024: Mozilla officially started decommissioning an essential piece of the BrowserID protocol. I found this out by trying to update the bookmarks on my phone and getting a strange 404. The endpoint that does cryptographic certificate signing as part of the authentication process was slowly being restricted. It’s now basically gone and so I’m left with my fallback of uploading a Firefox bookmark backup JSON file. When I first learned what was happening I actually considered reaching out to them to see about getting an OAuth setup in place for my app. I haven’t yet but it might be worth a shot. The worst they could do is say No.

This is part 1 in a series of posts about creating a mobile web app for browsing music databases.

In my continuing quest to up my web/mobile game, I decided to build a web app for searching the All Music Guide database. There are a number of music metadata repositories on the web, some robust and some paltry. The one I like best is run by Rovi (now TiVo) and powers All Music. They had an iOS app that allowed you to access all parts of the data store. It was great, but the search portion was not very user friendly and didn’t always work the way you expected.

My goal was to create a basic search form that allows you to look up an artist, album, or song title. The native app had an all-in-one search feature where it tried to dynamically show you results for what it thought you were looking for, but it often failed to return what I wanted. I understand the ease and utility of having a single field for different types of information, but I wanted to run specific searches.

The first step was creating an API that would essentially wrap the calls to Rovi’s RESTful API. I didn’t want to interface directly with Rovi for several reasons: to allow the results to be formatted differently if I wanted, to make it easier to switch to a different data store in the future, and so I wouldn’t have to allow cross-origin requests. I went with the standard Web API project in Visual Studio 2013.

Getting the routing to work properly was the only real hurdle with the API. The default Visual Studio template sets up routes that include /api/<controller>. But I just wanted /api and not the <controller> part. It’s tricky because the base controller class is ApiController. The MVC convention is that controller class names take the form <my_ctl_name>Controller, and then my_ctl_name becomes part of your route. Having my own class called ApiController wasn’t possible, so I called it SiteApiController. But how do you tell all requests to use that controller? Enter WebApiConfig.

I removed all the default routes and added two new ones: one for searching and one for lookups. I was able to specify the exact controller class and a route template that only included /api and not a controller designation. Bonus points to Microsoft for allowing lots of route configuration options.
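Here’s a sketch of what that ends up looking like. The route templates, parameter names, and action names below are illustrative stand-ins, not the project’s actual routes:

// WebApiConfig.cs sketch: the default "api/{controller}" route is simply never
// registered; two explicit routes pin everything to SiteApiController instead.
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // All searches, e.g. /api/search?type=artist&query=coltrane
        config.Routes.MapHttpRoute(
            name: "Search",
            routeTemplate: "api/search",
            defaults: new { controller = "SiteApi", action = "Search" });

        // All lookups, e.g. /api/lookup?type=album&id=12345
        config.Routes.MapHttpRoute(
            name: "Lookup",
            routeTemplate: "api/lookup",
            defaults: new { controller = "SiteApi", action = "Lookup" });
    }
}

// The matching controller only needs public actions with those names.
public class SiteApiController : ApiController
{
    [HttpGet]
    public IHttpActionResult Search(string type, string query) { /* call Rovi and return results */ return Ok(); }

    [HttpGet]
    public IHttpActionResult Lookup(string type, string id) { /* call Rovi and return results */ return Ok(); }
}

Because the defaults pin both the controller and the action, the {controller} token never has to appear in the template, so the public URL stays a plain /api path while the class keeps its SiteApiController name.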

Even though I only needed to make single, synchronous requests to Rovi, I used HttpClient to do it. There might be a need in the future to make multiple simultaneous requests to build a query result; if so, it will be easy to make them asynchronous. The next step was the front end.
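For reference, the proxied search call ends up looking something like the sketch below. The Rovi URL, query parameters, and the omitted API-key/signature handling are placeholders, not the real integration:

// Sketch of the proxy call to Rovi; the endpoint and parameters are placeholders.
using System;
using System.Net.Http;

public class RoviProxy
{
    // HttpClient is designed to be created once and reused for many requests.
    private static readonly HttpClient Client = new HttpClient();

    public string Search(string entityType, string query)
    {
        // A real request also needs Rovi's API key and request signature appended.
        var url = "http://api.rovicorp.com/search/v2.1/music/search?entitytype=" +
                  Uri.EscapeDataString(entityType) + "&query=" + Uri.EscapeDataString(query);

        // Synchronous for now: block on the async calls. If a single query ever fans out
        // into several Rovi requests, this converts to async/await with minimal changes.
        var response = Client.GetAsync(url).Result;
        response.EnsureSuccessStatusCode();
        return response.Content.ReadAsStringAsync().Result;
    }
}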