-
Getting TensorFlow Extended (TFX) master to work with Apple Silicon natively
TFX is not compatible with Apple Silicon yet, though there are a few pull requests in flight to make this happen. In my previous post I ran through how to get it working with TFX version 1.14.0. This post covers how to get TFX master working against TensorFlow 2.15.0, using nightly components. The process is very similar, with an additional TFX build and an added twist for tensorflow-serving-api.
You will need to clone or build patched master versions of
-
Getting TensorFlow Extended (TFX) 1.14.0 to work with Apple Silicon natively
TFX is not compatible with Apple Silicon yet, though there are a few pull requests in flight to make this happen. If you are willing to build your own wheels against the 1.14.0 tags with the relevant pull requests applied, it is possible to run TFX natively, though I have no idea how to run the appropriate test suites to verify full compatibility. See this post for how to get TFX master working with TF 2.
-
Getting TensorFlow Extended (TFX) to work with Apple Silicon (M1, etc.)
If you’re using TensorFlow and trying to get a model into production, the officially sanctioned toolset for doing so is TensorFlow Extended (TFX). Unfortunately, if you’re on Apple Silicon as of 28/11/2023, there is no support for TFX (open issue here). If you try to install TFX in your Mac environment (1.14.0 being the latest release), you will be told there are no installation candidates or versions, with varying reasons depending on your setup.
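To illustrate why pip comes back empty-handed: TFX publishes prebuilt wheels only for x86_64 platforms, so an arm64 Mac matches none of them. A minimal sketch of that check (the function name and the exact platform set are assumptions for illustration, not from the post):

```python
def tfx_wheel_available(machine: str, system: str) -> bool:
    """Hypothetical check: assume TFX 1.14.0 ships wheels only for
    x86_64 Linux and x86_64 macOS, so arm64 Macs see no candidates."""
    return (system, machine) in {("Linux", "x86_64"), ("Darwin", "x86_64")}

# On an M1/M2 Mac, platform.machine() reports "arm64":
print(tfx_wheel_available("arm64", "Darwin"))   # False: "no installation candidates"
print(tfx_wheel_available("x86_64", "Linux"))   # True
```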
-
Using older fastai (< v1.0) versions on Kaggle kernels
If you’re trying to work through fast.ai’s Introduction to Machine Learning for Coders in 2019 on a Kaggle Kernel, you’ll find that the fastai library API has diverged significantly.
As of Aug 2019, the fastai version on Kaggle kernels is >=1.0, but the course as of this writing uses fastai 0.7.0. Unfortunately, there’s no upper range specified in the requirements.txt for the 0.7.0 release, so if you just try to uninstall and reinstall via !
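Because 0.7.0’s requirements carry no upper bounds, the usual workaround was to pin the loose dependencies explicitly alongside fastai itself. A sketch of such a pin set (the torchtext version is an assumption commonly suggested on the course forums, not taken from the post):

```
# requirements pins for running fastai 0.7.0 on a modern kernel
fastai==0.7.0
# 0.7.0 has no upper bounds on its deps, so pin the ones that have
# since made breaking changes (version below is an assumption):
torchtext==0.2.3
```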
-
Classifying bears as a desktop app
I recently finished fast.ai’s excellent Deep Learning for Coders course. The models used by the course run on Python and use their own fast.ai library, which under the covers makes use of PyTorch, pandas and more. I wanted to try to package up a predictive model as a desktop app: my own local, offline teddy, grizzly and brown bear classifier.
Introduction: After poking around, I decided to go with a fast.
-
PyInstaller with Qt5 WebEngineView using PySide2
As I’m writing this (Aug 2019), there are a number of teething issues and rotating knives if you try to package up a cross-platform (Mac, Linux, Windows) app with a Qt WebEngineView using the official Python Qt bindings from PySide2 and PyInstaller. I’ll go through some of the issues I encountered to hopefully save you some grey hairs.
Note that PyInstaller is not a cross-compiler, so running on separate platforms is still required.
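One recurring symptom with this stack is PyInstaller’s static analysis missing the Qt WebEngine modules entirely; declaring them as hidden imports in the .spec file is a common fix. A fragment sketch (the app/file names are assumed, the surrounding spec boilerplate is omitted, and whether you need this depends on your PyInstaller and PySide2 versions):

```python
# myapp.spec fragment (names assumed): force inclusion of the
# WebEngine modules that PyInstaller's analysis can miss
a = Analysis(
    ["main.py"],
    hiddenimports=[
        "PySide2.QtWebEngineWidgets",
        "PySide2.QtWebEngineCore",
    ],
)
```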
-
Here we are again
Almost a month after my last post, we’re back here again. From once a week, it turned into once a fortnight, and now it’s been a month, just so I don’t end up giving my life savings to Beeminder.
One of the things about Beeminder that took some getting used to is the idea of days till derailment. When you set your goal, you can include some buffer days, so even if you miss a habit/goal, you won’t immediately derail.