`Null` was bad: "Null, the billion-dollar mistake."
But… don't database tables have nullable columns? And when there is no value, don't I put `NULL` in it? If using `NULL` is a billion-dollar mistake there, then what do we use instead?
Cue programmers of various unpopular languages sweeping in: "come! there's no Null here, maybe"
type Maybe a = Nothing | Just a
message1 = Just "Hello, world!"
message2 = Nothing
So… `Null` is a mistake but `Nothing` is alright? Yes. No.
You see, `Null` itself is not the problem: when something has no value, it is `Null` [1]. That is correct. It ISN'T better if we ignore that correctness and use a `0` instead, or an empty string `""`, or `0001-01-01 00:00:00 +0000 UTC`!
ASIDE: Go programmers, life is not a binary choice between "uninitialized C variable causes core dump" vs "implicit zero values then"! [2] Zero values are not our friend: when we unmarshal a json string without error and some numbers are `0`, how do we know if the values were missing (not good) or the decoder saw `0` in the json string and decoded it (good)? We don't know [3]. Null errors can still be located and fixed, but we can't locate zero value data corruption like this. And if you argue "Oh but there's no difference between an absent value and 0" 👀 you should know that's your Stockholm syndrome talking. [4]
`Null` becomes a problem ONLY IF your language lets you use it like it isn't:
maybeUser = findUser users 42 // a nullable reference to User value
maybeUser.name // your compiler is happy; runtime not so much
This convenience of referencing the `name` attribute on a possibly null `user` — THAT is the billion-dollar mistake: the null reference. Not `Null` itself.
Back to the example of `Nothing` being the same as `Null` (they are!). The difference is that, in some languages, we can't use a `maybeUser` value like a `User`:
maybeUser = findUser users 42
maybeUser.name
^^^^^^^^^---- compiler error!
The only thing we can do with a `Maybe` value is to deal with each specific code path:
case maybeUser of
Just u ->
"Winner is " ++ u.name # guaranteed safe access
Nothing ->
"Nobody won"
Inconvenient, but the guarantees are bliss.
So, boys and girls, don't avoid `Null` itself. Don't make your table column NOT NULL because of its reputation. Don't use an inaccurate zero value in place of the correct one. If a column is indeed nullable, be brave and make it nullable.
Add a Null check linter? Change your programming language? Anything palatable to you, but let the Nulls be Nulls.
[1] Or Nothing, nil, et al.
[2] The simplest correct solution to “uninitialized C variable causes core dump” is to require initialization. A little bit of typing goes a long way.
[3] Unless you look into the json string again. 🤦♂️
[4] After all these sacrifices of correctness for implicit zero values to protect against core dumps, Go programmers still have to grapple with the nil zero value and all the same nasties as Null… and worse: why is my nil error value not equal to nil? 🤣
I like the idea of using a Dict until you’re ready to enforce types. It’s similar to what I suggest at the end…. In the second post, it seems like you haven’t addressed what I consider the root of the problem, which is discovery…. But it is one of the pains that Clojure addresses by bundling dynamic typing with REPL-driven development.
I realise we were almost talking past each other! I had noted in passing the advantage of dynamic types during exploration, citing `JSON.parse`, and Eric Normand mentioned liking the idea of using a `Dict` until we're ready to enforce. But those two points carry more significance for each party than the other acknowledged! Let's dig into it instead!
But first,
Q: Does Elm come with a powerful library for working with Dict values (like Clojure does)?
Unfortunately, though the functions are enough, the… *drum roll*… type is not flexible enough 🙈. Keys are limited to comparable types (hardcoded in Elm), and since values must all be the same type, nesting `Dict`s requires wrapping with something like `type Tree a = Node a | Tree a`.
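For instance, one way to nest string-keyed Dicts despite the same-type restriction is a hypothetical wrapper like this (a sketch):

import Dict exposing (Dict)

type Node
    = Leaf String
    | Branch (Dict String Node)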
It's definitely more productive in a dynamic language, especially one equipped with a more powerful repl than usual, e.g. Clojure. I imagine asking the repl to assign the response to a variable, then working on this variable until I get what I want from it, and when it works, copy/paste/save the repl code into my app, done!
For my workflow, I use the raw JSON to TDD my decoder, as Eric Normand had described. However, the feedback loop of test-on-file-save, at least for Elm, is not shabby [1].
I suspect the main difference could be this:
However, I feel the speed difference for this activity is in minutes [2], while the speed difference later on can easily be hours…
Q: How do we deal with changes after that? E.g. some fields turn out to be optional, etc.
For decoder dev, errors happen when the decoder reaches an unexpected character in the input. So errors can state how their expectation failed: expected the "email" field to be a string (but it wasn't). This allows us to zoom in on the "email" field decoder and start TDDing with the new input variant. Fixed.
For repl dev, if we jump immediately to thinking about fixing the parsing function, the difference there isn't huge either. Manually stepping through the function body (or comparing old json with new json) to locate the problem is a little tedious, but that's still the happy scenario.
The real scenarios that decoder devs are fending off are all those runtime errors that pop up in places that don't make sense. Tracing why such values are in a bad state takes a lot of time & effort in a dynamically typed system: there are so many code paths, across subsystems. It can take hours before we discover the particular parsing function to fix.
I like the idea of using a Dict until we're ready to enforce
This is more significant to me than an idea to apply to a particular scenario: it's the fact that the entire system is enforced; every tiny part is always locked in and enforced against the other tiny parts, regardless of whether I remember to do it or not. This brings benefits I'd not experienced before pure, statically typed fp languages:
I’d mentioned “sure-footed” in an earlier paragraph, but here’s my longer description of what that meant:
With Elm, I could take a sure-footed approach to problems that I might previously have felt were too hard for me. My Elm solution to each small part accumulates into the final solution without requiring me to keep revisiting every step (because it's "proven"). And the best part is, after I've crossed the finishing line & gotten everything working, tidying up my entire solution is a safe and mechanical process. Confidence++
This applies to getting things to work in the first place, but especially to managing changes later; no unnecessary revisits. That's what I'm giving up the discovery speed for.
Then again, "giving up" sounds harsh, since I can improve and reach reasonably good productivity for discovery too. On the other hand, I feel that without such a pure & statically typed language supporting me so completely, the sure-footedness described above cannot be achieved by myself.
[1] I was developing a Slack bot in a place with no wifi, so I just wrote decoders based on JSON samples I'd downloaded earlier from their docs site, then wrote code that worked with those types, fixing bugs along the way until it compiled. When I was finally back online, I tried my bot… and it worked on the first run! I'm not such a meticulous person, so the credit goes entirely to the Elm compiler and its type system.
[2] This could well be an understatement akin to SVN users downplaying the speed of Git, while Git rightfully claims it's so freaking fast that you actually use it differently. Must be giddying! I should like to experience such a repl workflow someday.
I liked the very concrete and real scenario he picked. I agree with the volatility observed, but I also want to note early on that it isn't productive to respond to each step's particular solution: situations do arise, and we developers handle them with the information and time we have at hand.
The final solution in his scenario – at the cost of a bit of type safety – was to use a `Map`. Since that's what a dynamically typed language (e.g. Clojure) would've chosen from the start, it begs the question: why not start there in the first place [instead of opting for static types and hoping that volatility doesn't hit]? Which leads to the statement:
sufficiently volatile data prefers a flexible model with optional runtime checks.
I'll disclaim upfront that I've in fact chosen to use a `Map` (aka `Dict`) most of the time when dealing with forms in my Elm apps. This choice is commonly judged as having given up on type safety.
That’s true but only if we stop there!
Once we have a "bag of attributes" backing our form input values, we add a function to parse it into the desired type, or into error messages if any:
parse : Dict -> Result Errors UserInput
If valid, we return the value, e.g. `Ok { email = "bob@example.com" }`. Otherwise, we return errors, e.g. `Err [ ( Email, "is invalid" ) ]`. With the return value, we can enable the submit button when `userInput` is present, disable it when absent, and extract and display field errors alongside the input fields. Not only are these validation rules consolidated into a single place, we can even use this same pure function on both the frontend and the backend.
Our form can change as much as requirements demand; we'll just update `parse` accordingly.
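Here's a minimal sketch of such a `parse`, assuming a single email field (the `Field`, `Errors`, and `UserInput` types are illustrative, and the validation is deliberately naive):

import Dict exposing (Dict)

type Field
    = Email

type alias Errors =
    List ( Field, String )

type alias UserInput =
    { email : String }

parse : Dict String String -> Result Errors UserInput
parse attributes =
    case Dict.get "email" attributes of
        Just email ->
            if String.contains "@" email then
                Ok { email = email }

            else
                Err [ ( Email, "is invalid" ) ]

        Nothing ->
            Err [ ( Email, "is required" ) ]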
The main idea here is to treat a form as a whole, and to consider it external input, like a file: drawing a line between that external world and the rest of our system. This isn't specific to managing HTML forms.
Just because one edge of the system is volatile doesn't mean the rest of the system has to be as volatile. Just because we are looser with types on the periphery doesn't mean we have to bear the cost in the rest of our system. We can continue to benefit from the cosy assurance of statically type checked code within the walls we draw.
This explicit management of boundaries, like managed effects, is what I appreciate about a statically typed fp language.
Q: I want to save each step of partial attributes into the db
I would save the key-value `Dict` data in each step as-is, without a custom `parse` defined for each step: that would be pointless, since the entire thing is incomplete anyway. Don't forget, however, that we can still use the same `parse` to obtain the complete list of errors, and only surface the errors relevant to the current step's UI. At the final step, I would require `parse` to succeed fully.
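A sketch of that surfacing, reusing the illustrative `Field`/`Errors` types from earlier (the `Step` type and its field mapping are made up):

type Step
    = Step1
    | Step2

fieldsForStep : Step -> List Field
fieldsForStep step =
    case step of
        Step1 ->
            [ Email ]

        Step2 ->
            []

errorsForStep : Step -> Errors -> Errors
errorsForStep step errors =
    List.filter
        (\( field, _ ) -> List.member field (fieldsForStep step))
        errors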
Q: What if some form inputs need the values from other form inputs?
For example, an autocompletion list needs to know which items have already been added (to exclude them from suggestions), and the text typed so far (to filter the suggestions).
We can write a function that returns a `Suggestions` value based on those two earlier fields inside our `Dict`:
suggestions : Dict -> Suggestions
Our autocomplete widget should then require a suggestions value in order to render
widget : Suggestions -> Html
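A sketch of that plumbing, assuming hypothetical "query" and "added" keys in the Dict, a hardcoded candidate list, and a `Suggestions` alias:

type alias Suggestions =
    List String

allItems : List String
allItems =
    [ "apple", "banana", "cherry" ]

suggestions : Dict String String -> Suggestions
suggestions attributes =
    let
        query =
            Dict.get "query" attributes |> Maybe.withDefault ""

        added =
            Dict.get "added" attributes
                |> Maybe.withDefault ""
                |> String.split ","
    in
    allItems
        |> List.filter (\item -> not (List.member item added))
        |> List.filter (String.contains query)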
Q: You mentioned we could use our pure `parse` function on the frontend and the backend. What if some of my validation rules are not pure and need a check against our database?
Extend the `parse` function to account for the new `ExternalData` input, e.g. `parse : ExternalData -> Dict -> Result Errors UserInput`, and run the `parse` function inside the procedure that queries for that external data. E.g. client-side code can call an HTTP API and then supply the response data to the `parse` function along with the form data.
If it's not feasible in our scenario to supply such data to the client side, then we have to admit it can't be checked by the client side no matter what we do. So, supply an empty value for `ExternalData` to skip that validation on the client side; we can still have client-side validation for the other fields + the full validation can still happen with the same function on the server side.
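A sketch of that extended `parse`, building on the earlier illustrative types (`ExternalData` and its contents are made up):

type alias ExternalData =
    { takenEmails : List String }

parse : ExternalData -> Dict String String -> Result Errors UserInput
parse externalData attributes =
    case Dict.get "email" attributes of
        Just email ->
            if List.member email externalData.takenEmails then
                Err [ ( Email, "is already taken" ) ]

            else
                Ok { email = email }

        Nothing ->
            Err [ ( Email, "is required" ) ]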
UPDATE: a followup post, Re: Statically Typing Big Erratic JSON

Some APIs have a huge JSON format. But we might only need a few fields. Some fields could be `false` sometimes but a list of strings other times. Some fields should even be parsed depending on some other fields in the JSON. To statically type this is very tedious and the type might not be understandable anymore.
When it comes to parsing JSON, there are reasons to favor dynamic types (who doesn't like `JSON.parse` when exploring data?), but I think the reasons above are actually reasons to favor static types.
1. “Some APIs have a huge JSON format. But we might only need a few fields.”
In a dynamically typed language, we’d simply ignore the rest of the json and use what we need
function handle(jsonString) {
let user = JSON.parse(jsonString)
store(user);
}
function store({ email }) {
// do stuff with `email`
}
In a statically typed language, we’d simply ignore the rest of the json and decode what we need
userDecoder =
Json.Decode.map (\s -> { email = s })
(field "email" string) -- only decode `"email"` field
handle jsonString =
case decodeString userDecoder jsonString of
Ok user ->
store user
        Err jsonError ->
            handleError jsonError -- a hypothetical error handler; this is where
                                  -- dynamically typed langs hit a runtime error
store { email } =
-- do stuff with `email`
The added benefit is, if the `email` field is anything but a `String`, we would've gotten a parsing error early and have code dealing with it upfront. Whereas in the JS example, we might not know that our `email` is `null` or `42` until much later in the system (a background job 3 days later?).
Troubleshooting the source of bad values that cause a crash is tedious and (given that the decoder pattern exists) unnecessary trouble imo.
Even if we implemented decoders in JS, we still can't reap their benefits in the rest of our dynamically typed system. We'd need to manually add assertions everywhere that matters (that we remember to). `welcomeEmail({ email })`? Hmm, add an assertion just to be safe:
function welcomeEmail({ email }) {
  if (typeof email !== 'string') { /* ... now what? */ }
}
(Btw, what can we effectively do with an invalid value here? Dealing with errors deep in the system is awkward)
Isn’t writing type signatures everywhere equivalent to scattering assertions everywhere?
No. Types are checked at compile time over the entire codebase, while assertions check at runtime… and only when that line is run. 5th page of a form wizard? Code's gotta run all the way there, in the right conditions, to find out.
We can add type signatures in some dynamically typed languages
Ironically, type-inferred languages like Haskell have long allowed type signatures to even be removed while keeping the benefit!
Basically, one provides "where do you want to type check" while the other provides "everything is type checked". I think it's safe to say that while we might prefer some control over what we want to type check in our own code, we'd also prefer other people's code to be fully type checked where possible 😆 … as they say, "Software engineering is what happens to programming when you add time and other programmers."
2. "Some fields could be `false` sometimes but a list of strings other times. Some fields should even be parsed depending on some other fields in the JSON."
Since the said JSON is beyond our control, we just have to deal with it. There is no escape.
In a dynamically typed language, we do whatever is necessary
function handle(jsonString) {
let user = JSON.parse(jsonString)
// `false` just means no preferences
if (user.preferences === false) user.preferences = []
// `state` determines what `date` actually means
if (user.state === 'deleted') user.deletedAt = user.date
if (user.state === 'active') user.lastLoginAt = user.date
store(user);
}
function store({ email, preferences, deletedAt, lastLoginAt }) {
// do stuff with fields
}
In a statically typed language, we do whatever is necessary too, but inside the decoders.
preferenceDecoder =
Json.Decode.oneOf
[ map (always []) decodeFalse -- `false` found? decode as an empty list []
, list string -- otherwise, decode as list of string
]
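-- (`decodeFalse` above isn't part of Json.Decode; a sketch of this helper:)
decodeFalse : Json.Decode.Decoder Bool
decodeFalse =
    Json.Decode.bool
        |> Json.Decode.andThen
            (\b ->
                if b then
                    Json.Decode.fail "expected `false`"

                else
                    Json.Decode.succeed False
            )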
dateDecoder =
Json.Decode.map2 datesFromState
-- decode both json fields `state` and `date`
-- then hand it off to `datesFromState` to decide
(Json.Decode.field "state" string)
(Json.Decode.field "date" isoDate)
datesFromState stateString date =
case stateString of
"deleted" ->
{ deletedAt = Just date, lastLoginAt = Nothing }
"active" ->
{ deletedAt = Nothing, lastLoginAt = Just date }
_ ->
{ deletedAt = Nothing, lastLoginAt = Nothing }
-- We compose our `userDecoder` with these decoders
userDecoder =
Json.Decode.map3 buildUser
-- decode the fields then assemble with `buildUser`
(field "email" string)
(field "preferences" preferenceDecoder)
(dateDecoder)
buildUser email preferences { deletedAt, lastLoginAt } =
{ email = email
, preferences = preferences
, deletedAt = deletedAt
, lastLoginAt = lastLoginAt
}
-- Then we update the original snippet to mention the new fields
handle jsonString =
case decodeString userDecoder jsonString of
Ok user ->
store user
        Err jsonError ->
            handleError jsonError
store { email, preferences, deletedAt, lastLoginAt } =
-- do stuff
3. "To statically type this is very tedious and the type might not be understandable anymore."
While there are obviously more lines of code, note that the type of `User` stays clean. The messy reality of the JSON rules is captured and compartmentalized inside individual decoder functions, which are very testable out of the box too. The rest of the system can use this clean `User` type with abandon, with the compiler ensuring there are no stray threads.
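For reference, the resulting `User` type could be as plain as this (assuming `isoDate` decodes into `Time.Posix`):

type alias User =
    { email : String
    , preferences : List String
    , deletedAt : Maybe Time.Posix
    , lastLoginAt : Maybe Time.Posix
    }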
UPDATE: a followup post, Re: REPL

Stop. That's not a good place to start being concerned about whether a value is valid.
If you have to push something out fast, I'd recommend just going ahead and pretty printing the given value in a best-effort manner; no error handling. Then find time to refactor.
But refactor to what? Handle the validation concern MUCH earlier in the system: upon receiving the email string (from the db? from an api response? from user input?), parse it into a proper data structure, e.g. `EmailAddress`. If the parse fails, you'll find that error handling is very natural there (db query error, api response invalid, user input error). If parsing succeeds, pass this `EmailAddress` value around your entire system instead of the original email string.
Once your entire system is refactored to deal with `EmailAddress` instead of email address strings, you'll find your original problem (what if the email I want to pretty print is invalid?) simply does not exist anymore. An even stronger guarantee can be had if your language can enforce that an `EmailAddress` value can only ever be created via that parsing function, and that the `EmailAddress` value cannot be mutated.
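In Elm, that enforcement falls out of an opaque module: expose the type but not its constructor, so parsing is the only way in. A sketch (the validation is deliberately naive):

module EmailAddress exposing (EmailAddress, parse, toString)

type EmailAddress
    = EmailAddress String

parse : String -> Result String EmailAddress
parse raw =
    if String.contains "@" raw then
        Ok (EmailAddress raw)

    else
        Err "invalid email address"

toString : EmailAddress -> String
toString (EmailAddress raw) =
    raw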
Whenever you feel a strain in your options for handling an error condition, ask yourself if you're trying to deal with it at the wrong level; can you deal with it earlier? Chances are, you need to Parse, Don't Validate.
Consider an Elm app with a `LoadThing` message which fires off a network call, `getThing`:
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
case msg of
LoadThing num ->
( model
, Task.attempt OnThing (getThing num) )
OnThing (Err err) ->
( { model
| alert = Just (httpErrorString err)
}
, Cmd.none )
OnThing (Ok thing) ->
( { model
| thing = RemoteData.Success thing
}
, Cmd.none )
Did you notice a bug?
A very common bug here: we've forgotten to set the loading status before firing off `getThing`. And even after we fix that, we might realise days later that we've also forgotten to unset the loading state upon getting an error. Whack-a-mole. As the number of APIs grows, and changes happen over time, preventing such bugs only becomes harder and harder.
Wait, there's a request to update the api; now we should update `model.category` from the api response too. Merge PR & deploy. Oops, we forgot to set `category = Loading` 😩 Again 😖
Is our constant vigilance the only protection?
As a code reviewer, I’d prefer the answer to be: no.
Well, since we're trying to coordinate the state changes of the request and response activities for each API, let's unify them into a sum type:
type RequestResponse param response
= Request param
| Response (Result Http.Error response)
type ApiMsg
= ThingApi (RequestResponse Int Thing)
-- other APIs ...
-- each API defines a `RequestResponse` with their own `param` and `response` types
then we can nest all API related Msg,
type Msg
- = LoadThing Int
- | OnThing (Result Http.Error Thing)
+ = OnApiMsg ApiMsg
-- other button click etc Msg still remain
and delegate all API related state updates to a new updateWithApiMsg
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
case msg of
OnApiMsg apiMsg ->
updateWithApiMsg apiMsg model
-- other button click etc Msg still handled here
updateWithApiMsg : ApiMsg -> Model -> ( Model, Cmd Msg )
updateWithApiMsg siteApi model =
case siteApi of
ThingApi requestResponse ->
case requestResponse of
Request num ->
( model
, requestCmd ThingApi (getThing num) )
Response (Err err) ->
( { model
| alert = Just (httpErrorString err)
}
, Cmd.none )
Response (Ok thing) ->
( { model
| thing = RemoteData.Success thing
}
, Cmd.none )
-- other APIs ...
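The `requestCmd` helper used above isn't spelled out in this post; a sketch of what it could look like (assuming `import Task` and `import Http`): run the task, then wrap the result back into our `ApiMsg` shape.

requestCmd : (RequestResponse param response -> ApiMsg) -> Task Http.Error response -> Cmd Msg
requestCmd toApiMsg task =
    Task.attempt (\result -> OnApiMsg (toApiMsg (Response result))) task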
We’ve done a bunch of busy work but everything still compiles; we still have our bug?? Perfect. Now, we’re ready to make this bug a type error.
Recall that what we're trying to achieve is to make sure we don't forget to update the same set of model attributes at every stage of the API call (request, success, error). And each API will have its own set of model attributes.
All branches in a `case` must produce the same type of values. This way, no matter which branch we take, the result is always a consistent shape.
Let’s rearrange our Elm code to take advantage of this
updateWithApiMsg : ApiMsg -> Model -> ( Model, Cmd Msg )
updateWithApiMsg siteApi model =
case siteApi of
ThingApi requestResponse ->
let
-- NOTE: `updated` is only a subset of our `Model` record type
                -- with only the fields that need to be updated for `ThingApi`
( updated, cmd ) =
case requestResponse of
Request num ->
( {}
, requestCmd ThingApi (getThing num)
)
Response (Err err) ->
( { alert = Just (httpErrorString err) }
, Cmd.none
)
Response (Ok thing) ->
( { thing = RemoteData.Success thing }
, Cmd.none
)
in
( { model | alert = updated.alert, thing = updated.thing }, cmd )
-- other APIs ...
        -- each `updated` record will be a different subset of the `Model` record type
Now, we have a compiler error!
The 2nd branch is a tuple of type:
( { alert : Maybe String, thing : RemoteData.RemoteData Http.Error a }
, Cmd msg
)
But all the previous branches result in:
( {}, Cmd Msg )
Hint: All branches in a `case` must produce the same type of values. This way,
no matter which branch we take, the result is always a consistent shape. Read
<https://elm-lang.org/0.19.1/custom-types> to learn how to “mix” types.
Elm does not allow returning different types for different branches of a `case` or `if` expression, but currently:
- the `Request` branch returns an empty record `{}`
- the `Response (Err ...)` branch returns a `{ alert : Maybe Alert }` record
- the `Response (Ok ...)` branch returns a `{ thing : RemoteData.RemoteData Http.Error a }` record

(Even returning `( model, sendRequestCmd )` directly won't work either!) The only way to compile is to return the same subset of fields in all branches of our `case requestResponse of` – which is exactly what we wanted!
updateWithApiMsg : ApiMsg -> Model -> ( Model, Cmd Msg )
updateWithApiMsg siteApi model =
case siteApi of
ThingApi requestResponse ->
let
-- NOTE: `updated` is only a subset of our `Model` record type
                -- with only the fields that need to be updated for `ThingApi`
( updated, cmd ) =
case requestResponse of
Request num ->
( { alert = model.alert -- aka no change
, thing = RemoteData.Loading
}
, requestCmd ThingApi (getThing num)
)
Response (Err err) ->
( { alert = Just (httpErrorString err)
, thing = RemoteData.Failure err
}
, Cmd.none
)
Response (Ok thing) ->
( { alert = Just "Thing loaded successfully"
, thing = RemoteData.Success thing
}
, Cmd.none
)
in
( { model | alert = updated.alert, thing = updated.thing }, cmd )
-- other APIs ...
        -- each `updated` record will be a different subset of the `Model` record type
Now, each branch is required to return the same fields, and each API can have its own field set. The Elm compiler can be our constant vigilance instead. 🎉
After learning the basic Elm syntax, we could be reading the following "return values" like this:
userFromJWT : String -> Maybe User
-- returns a User value or nothing
split : String -> List String
-- returns a list of String
With help from our prior programming experience, we treat the return values as `*User` and `[]String`, and jump directly to thinking in terms of "How do I use a `var u *User` value? `u.name`?" and "How do I use a `var s String`? `String.length(s)`?"
We don't actually "see" the words `Maybe` or `List`; our attention is focused on `User` and `String`.
If we proceed with these instincts alone, we'll fumble when we see unfamiliar types. We'd ask the wrong questions:
view : Model -> Html Msg
-- returns an h-t-m-l message... but how do I return such a message?
userEmail : Json.Decode.Decoder String
-- returns a json decode decoder string... but how do I return such a string?
There are no answers to these questions, because there's a misunderstanding of the grammar. To avoid being misled by my prior programming experience, I do this one weird trick instead:
Just focus on the first word
Html Msg
^^^^
-- and read "Html" type
Json.Decode.Decoder String
^^^^^^^^^^^^^^^^^^^
-- and read "Json.Decode.Decoder" type
HelloWorld (Html msg) (Json.Decode.Decoder String)
^^^^^^^^^^
-- and read "HelloWorld" type
Then look at the documentation for that word; look for functions that return that word.
Here are some functions I found with a text search for `-> Html` on the `Html` docs page:
text : String -> Html msg
div : List (Attribute msg) -> List (Html msg) -> Html msg
span : List (Attribute msg) -> List (Html msg) -> Html msg
-- note: unfamiliar types like `Attribute` usually refer to a type defined in the same module, i.e. `Html.Attribute`. You may have to search around for it
Calling any of these functions will give us a value of the `Html` type; just provide the necessary input argument values.
Here are some functions I found returning `Decoder` on the docs page of `Json.Decode` (there wasn't a page on `Json.Decode.Decoder` alone):
string : Decoder String
int : Decoder Int
list : Decoder a -> Decoder (List a)
field : String -> Decoder a -> Decoder a
oneOf : List (Decoder a) -> Decoder a
Same idea: calling any of these functions will give us a value of the `Json.Decode.Decoder` type; just provide the necessary input argument values.
The `Html` type
In an Elm Browser program, we are required to provide a `view` function that returns the `Html` type. We can deduce that Elm takes this return value and renders the corresponding DOM nodes in the browser.
view : model -> Html msg
view model =
Html.h1 [ Html.Attributes.class "greeting" ] [ Html.text "Hello" ]
will render
<h1 class="greeting">Hello</h1>
The `Html` module does not export the internal details of this type for us to look at (aka it's an opaque type). The module only exports a bunch of functions for us to call to obtain various `Html` values. And really, that's all we can know about it.
What about the `msg` part?
That's the type parameter of the `Html` type. We can't always have an effective mental model of what role a type parameter plays. Sometimes we can deduce it easily, e.g. `Maybe Int` or `List String`. But other times, it's not as obvious:
string : Parser (String -> a) a
What we can practically do with them is to make sure the type parameters / associated data types align, e.g.
map : (a -> b) -> a -> b
map func arg =
...
To "line up" the type parameters, I'll make sure the `a` types are the same. And similarly:
h1 : List (Attribute msg) -> List (Html msg) -> Html msg
When we call `Html.h1`, we can give it any list of attributes and any list of inner html elements – as long as their `msg` types are the same.
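For example, a minimal sketch (assuming `import Html`, `import Html.Events`, and a made-up `Msg` type) where the attribute list and child list agree on `msg`:

type Msg
    = Clicked

view : Html Msg
view =
    Html.h1
        [ Html.Events.onClick Clicked ] -- each attribute here is an `Attribute Msg`
        [ Html.text "Hello" ] -- and each child is an `Html Msg`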
Now, if we go back up and look at a Browser program signature, e.g. Browser.sandbox
sandbox :
{ init : model
, view : model -> Html msg
, update : msg -> model -> model
}
the `msg` must refer to the same type everywhere, regardless of whether it is an `Int` or a `Msg` custom type.
update — a way to update your state based on messages (https://guide.elm-lang.org/architecture/)
This means the `msg` in `Html msg` refers to the type of value that will be handled in your `update` function. So we can conclude that:
- `Html` is the DOM node type
- `msg` is the type for "what event type does this DOM node trigger"

There's a slide that says
circle.grow(3) grow(circle, 3)
“Objects and methods” are syntax sugar for structs and procedures.
I used to agree: they are just syntax sugar, where "syntax sugar" implies something inconsequential, or communicates a lack of importance.
But recalling my stumbles learning Elm – and given that the talk is about how we wish FP were the norm, winning market share – I think we should appreciate the difference from another angle: usability. Maybe appreciate why OOP (syntax) has even slightly more users because of that usability win.
Hey it all adds up right?
Affordance describes all actions that are made physically possible by the properties of an object or an environment. A bottle screw cap affords twisting. A hinged door affords pushing or pulling. A staircase affords ascending or descending.
https://uxdesign.cc/affordance-in-user-interface-design-3b4b0b361143
So, in the code snippet above, what does the variable `circle` afford us to do? And how does a programmer find out?
NOTE: Keep in mind that the variable could be of a less straightforward type than a `Circle` or `Shape` (e.g. `UnnormalizedMeterReading`), or the programmer might be new to the codebase, or be the original author but have forgotten all the details after a hiatus.
What does an OOP programmer do? I'll just type c-i-r-c-l-e-. and after that period `.` my code editor will show a handy dropdown list of possible actions, along with documentation snippets, specifically catered to the variable:
circle.
color(c Color) : set foreground color to c
grow(n Integer) : increases diameter by n
moveTo(x, y Integer) : place the circle on specified x,y coordinate
Contextual help. Perfect. The GUI equivalent would basically be the wizard dialog and its "next" "next" "finish" buttons.
NOTE: That’s in fact exactly what the Swift Playgrounds iPad app does. You can code without even bringing up the keyboard.
On the other hand, what does an FP programmer do? Assuming I can figure out what type the variable is, I'll open the documentation in my browser, press CMD-F, and try my luck with "Find on page" using some guess words… say… resize? no?
When you design user interfaces, it’s a good idea to keep two principles in mind:
- Users don’t have the manual, and if they did, they wouldn’t read it.
- In fact, users can’t read anything, and if they could, they wouldn’t want to.
– https://www.joelonsoftware.com/2001/10/24/user-interface-design-for-programmers/
The first step is admitting you have a problem.
What's the next step? A function signature search, perhaps? A bit out of the way, but more importantly, it needs precision to yield results (fail example vs success example).
“Can any function from any module that can receive this Shape value, along with any other arguments, please stand forward? Curried functions in the current closure?”
Maybe something for the code editor community to think about.
FWIW, SQL syntax has the same problem: you type SELECT and wait there… no code editor can help you much, since nobody knows what you are trying to select FROM yet. If that syntax had just been switcherooed to FROM … SELECT …, the more usable SQL syntax could've driven SQL adoption even higher ;-)
UPDATE: See also, Data exploration through dot-driven development
The Json.Decode docs describe the type like this:
type Decoder a
A value that knows how to decode JSON values.
followed by a rather imperative description of `Json.Decode.float`:
float : Decoder Float
Decode a JSON number into an Elm `Float`.
I suspect my head was parsing it as:
- `Json.Decode.Decoder a` means a decoder that knows how to decode a given json string into a value of type `a`
- `Json.Decode.Decoder Float` knows how to decode a given json string into a `Float`
- `Json.Decode.Decoder User` is something that knows how to decode a given json string into a `User`
I wasn't aware how deeply OOP had shaped my thinking. Or maybe it's the "-er" suffix of `Decoder` plus my Go instincts. So, I kept wanting to give a string to my `Json.Decode.Decoder User` – "here, decode this into a `User`" – but it can't
Json.Decode.float.decodeString(someString) -- oh no, not a thing
To ease into the right intuition, it is better if I treat neither `Json.Decode.Decoder Float` nor `Json.Decode.Decoder User` as an "object". They don't "know how to do" anything. They hold values, and are easier to grok if perceived as dumb values like `{ kind = "Float" }` or `{ kind = "User" }`.
A `Json.Decode.Decoder a` value can't take that string. Give the string to the `Json.Decode.decodeString` function instead:
Json.Decode.decodeString Json.Decode.float someString
The code doing the work is inside the `Json.Decode.decodeString` function, not within your `Json.Decode.Decoder a` value.
Our `Json.Decode.float` decoders are just values (or flags or settings or config… whatever works for your mental model) that will be used to if-else our way inside the `Json.Decode.decodeString` function, to decide which code branch to run. I.e. a pretend implementation of `decodeString` might be:
decodeString decoder someString =
case decoder.kind of
"Float" ->
String.toFloat someString
"User" ->
...
So, any function returning a `Decoder a` is just a function that returns a flag or settings or configuration… not an "object" with a "method" that parses a string.
If you're struggling with `Json.Decode.Decoder`, I hope this mental model helps.
NOTE: It doesn't matter that, as an implementation detail, there's actually a function being carried around inside `Json.Decode.Decoder a` values; the OOP mental model still makes this hard to understand.
It's been more than a decade since Ruby on Rails included a `db:migrate` feature.
Sadly, when creating a non-Rails app, database schema evolution is still a thing to unnecessarily waste brain cycles on.
Do I want to include Rails in my non-Rails app just to manage my database schema evolution properly?
How about equivalent tools in [my choice language]'s environment? The implementations are usually ignorant (or ignore by design 🤷♂️) of the various real-world use cases that have evolved the design of Rails' database schema migration policy, e.g. naming files with an incrementing number starting from 001, or requiring a human with direct db access to salvage a failed migration.
Whenever I use a new stack, do I have to look for “db:migrate” for that stack again? Even though I didn’t change my choice of database?
How about just not evolving my database schema and code, like it's 2004? LOL – do you not version control your source code too?
After getting bitten by yet another issue (that wouldn't have caused any grief had the tool adopted the Rails way), I decided to see what it would take to implement the Rails database schema migration policy.
In Go, because of the tiny, fast, and portable binary. I'd like to use it regardless of whether I'm writing Ruby, Rust, Javascript, or Elm.
So, what does it take?
Store each version number that had been applied; not just the “current version”
Quiz time: there were a couple of PRs to be merged, X and Y. X was merged and deployed. All good. When Y was merged, deployment succeeded but the app began to crash on certain features; the migrations in Y weren't applied. Guess why?
If you figured out why, part 2 is: how taxing is it going to be on the team to ensure that doesn't happen again? Hint: a checklist, per PR, per deploy.
The migration policy should be to apply any migration that wasn't applied before. So we need to track every version number that has been applied. The chaos scenarios of imagined "incompatible migrations" either won't happen, or aren't better managed by alternative policies.
Remember that, system resources aside, the only time migrations would "pass on staging but fail on production" is when there were unexpected values in the production database (e.g. preventing an alter column). When this happens, your production deployment must fail and not proceed. And when that happens, our production database will only be one migration behind our staging database's schema migration history. A variety of approaches can be taken moving forward, e.g. undo the previous migration in staging, add the unexpected values, replicate the failure, edit the offending migration script, and redeploy.
Aside: when a new deployment fails (due to a schema migration or whatever reason), your previous deployment must still chug along fine. Otherwise, you actually have bigger problems.
UTC timestamp as version number
Why should this even be a choice?
Rollback on error
This allows for a straightforward "fix the offending migration and re-deploy" playbook. Given an uncreative but clear constraint, developers can have creative solutions for how best to roll out their multi-step migrations.
Aside: different databases have different rollback capabilities. The tool itself shouldn't make promises it can't keep about the atomicity of migrations across different database systems. Understand yours and plan accordingly, e.g. break a migration up into multiple files if necessary to ensure "save points" should something go wrong.
Up and down scripts written in your database syntax (SQL)
This is not the place for leaky abstractions. Use your database features to make changes to your database.
If that's all fine and dandy to you too, then the good news is dbmigrate exists and is runnable as a single binary or a ready-to-go Docker container.