VueNuxt.com

Prop Stability


Join me for a behind-the-scenes look at how I would approach the refactor of a newly discovered insight such as prop stability. Since we are dealing with legacy code with no prior tests, writing tests for everything is simply out of the question; that's a task all on its own. In cases like this, I lean into isolating the feature logic so that it can be tested with as few dependencies (coupling) as possible.

Let's begin

Intro

I usually start by going through all the code and taking mental notes on the things that can be improved. Then I start grouping things. This example is simple, so most of our feature logic is already colocated, which is a good thing, but that is not always the case.

const activeId = ref(0);

function isActive(id: number) {
  return id === toValue(activeId);
}

As we move down, we notice some coupling between our feature and the component. I would say this is not the best place for the business-logic details of how we cycle through our activeIds.

function onClick() {
  activeId.value = (toValue(activeId) + 1) % stack.length;
}

Finally, in our template, we see more coupling, this time to the presentational layer. This is not so bad, but I always try to reduce the amount of logic we put into the template. Lastly, I also see room for improvement in the naming of the property we are adding.

 v-bind="{ ...tech, isActive: isActive(tech.id) }"

These code smells already hint at where to focus our refactoring efforts. However, a bigger, more important issue is becoming visible.

How can we test our new logic? As things stand, I feel testing this will be difficult.

Ok, got it, I will focus on modularizing the feature, adding tests and improving the readability of our code.

These three things will be the scope of our refactor.


Tests

In Vue, the best way to isolate and modularize logic is through composables. By creating a new composable, we also have the perfect opportunity to use as much of the TDD approach as possible. I think this is the best way to minimize our chances of introducing side effects in the future, while documenting how our feature works through tests.

I can identify these as the minimal areas that our tests need to cover.

  1. State
  2. Mutating our state
  3. Logical condition
  4. Extending our data set with the logical condition

So let's begin by translating our acceptance criteria into some empty test cases:

useActiveId.test.ts
describe('useActiveId', () => {
  test('state: default case', () => {});
  test('state: initialized state', () => {});
  test('mutation: happy path', () => {});
  test('mutation: edge case', () => {});
  test('logicCondition: truth case', () => {});
  test('logicCondition: false case', () => {});
  test('Extending: Isolation case', () => {});
  test('Extending: Integration case', () => {});
});

Notice how I don't dwell on the titles for each test case. I just write enough to link them to the acceptance criteria I have in mind. As I work on each specific test case, I will refine it with a more descriptive title. I have found this approach greatly reduces the friction of getting started for me.

Next, let's create mock data and define the constants that will drive our tests.

useActiveId.test.ts
import { INCREMENT } from './useActiveId'; // added to the composable in the Mutation step below

const MOCK_DATA = [
  { id: 0, name: 'first' },
  { id: 1, name: 'second' },
  { id: 2, name: 'third' },
];

const INIT_ID = 3;
const FIRST_ID = MOCK_DATA[0].id;
const SECOND_ID = FIRST_ID + INCREMENT;
const THIRD_ID = SECOND_ID + INCREMENT;
const LAST_ID = MOCK_DATA[MOCK_DATA.length - 1].id;

Notice how our constants are driven by our main mocked data structure. This will improve the stability of our tests and the readability of the assertions in our test cases.


State

The ref holding our state is activeId. We need to remove it from the component and move it into the composable. We also need to expose this ref so that it can be mutated from outside the composable, and introduce a constant to handle the default initial value for our composable.

For tests, we only have to consider the following two cases:

  1. That it starts with the right value by default (default case).
  2. That we can initialize the composable with a custom value (initialized case).

In code, our refactor looks like this:

export const DEFAULT_INIT = 0;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  return {
    activeId,
  };
};
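Before moving on, the two state cases can already be pinned down without any framework at all. Below is a minimal sketch (useActiveIdSketch is a hypothetical stand-in that mirrors the composable's contract, minus the Vue ref wrapper):

```typescript
const DEFAULT_INIT = 0;

// Hypothetical, framework-free stand-in for useActiveId:
// same default and same initialization contract, plain value instead of a ref.
function useActiveIdSketch(initActiveId: number = DEFAULT_INIT) {
  return { activeId: initActiveId };
}

// default case: starts at DEFAULT_INIT
const defaultCase = useActiveIdSketch().activeId;
// initialized case: starts at the custom value we pass in
const initializedCase = useActiveIdSketch(3).activeId;
```

This is only meant to make the acceptance criteria tangible; the real tests should of course call the composable itself.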

Mutation

Now that we have our state in the composable, we can move on to the logic that drives the mutation. At the moment, it is coupled to the component's onClick handler, which is not ideal.

function onClick() {
  activeId.value = (toValue(activeId) + 1) % stack.length;
}

So let's extract this logic and move it into our composable. I think having a visual reference of the mutation in the onClick handler would be beneficial for future readers of the code, so I suggest creating a method that only returns the value we need instead of using a setter. This allows us to do the reassignment at the component level. We also need to add a new constant to make our increment value configurable.

If the method handled the mutation directly, I would call it something like setActiveId. But since we are returning the next id value from our data set, and handling an edge case where we cycle from the last item back to the first, I believe nextActiveId is a better name. Lastly, to make it generic and reusable, we must allow it to accept the data set we want to use as an argument, making it a pure function.

Our mutation method would need to account for the two following cases:

  1. Every time we call this method we increase the activeId value by the increment (happy path).
  2. If we are at the last item of our data set, the next id returned will be the first one from our data set (edge case).

export const DEFAULT_INIT = 0;
export const INCREMENT = 1;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  function nextActiveId<T>(data: T[]): number {
    return (toValue(activeId) + INCREMENT) % data.length;
  }

  return {
    activeId,
    nextActiveId,
  };
};
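The two mutation cases can also be exercised outside of Vue. In the sketch below, a plain object stands in for the ref, but the arithmetic is identical to nextActiveId above, and the reassignment stays visible at the call site, just as it would in the component's onClick handler:

```typescript
const INCREMENT = 1;
const activeId = { value: 0 }; // stand-in for ref(0)

// Same computation as nextActiveId in the composable:
// advance by INCREMENT and wrap past the last index with the modulo.
function nextActiveId<T>(data: T[]): number {
  return (activeId.value + INCREMENT) % data.length;
}

const data = [{ id: 0 }, { id: 1 }, { id: 2 }];

// happy path: the caller performs the reassignment, 0 -> 1
activeId.value = nextActiveId(data);
const afterFirstClick = activeId.value;

// edge case: from the last id (2) we cycle back to the first (0)
activeId.value = 2;
const afterWrap = nextActiveId(data);
```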

Awesome, things are already looking a lot better. Our decision to go after isolation of the new feature with a composable is making it really easy for us to test things. Our design has also improved the naming of our code. So let's keep going.


Logical condition

The next step is our logical condition, which is also at the component level. Yes, you guessed it, we need to move this into the composable as well. However, we should also improve its naming, because it might not fit as well in its new context.

I like using names like isActive for variables that return a boolean, but that name also conveys that we are dealing with a runtime state, which is no longer the case here. For this reason, I would rename this method to hasActiveId in our composable. In my opinion, this communicates that the function matches the logical condition its name implies, regardless of whether it runs in a server or client context.

In terms of testing, we only need to cover the truthy and falsy cases:

  1. It returns true when we pass an id that matches the activeId (true case).
  2. It returns false when we pass an id that does not match the activeId (false case).

No matter how simple a test might be (the ones above are pretty simple), I would include it, because in my opinion tests document the intent of the developer while writing the code, as well as the functional requirements of the feature. This enables us to refactor with confidence at any time.

export const DEFAULT_INIT = 0;
export const INCREMENT = 1;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  function nextActiveId<T>(data: T[]): number {
    return (toValue(activeId) + INCREMENT) % data.length;
  }

  function hasActiveId(id: number): boolean {
    return id === toValue(activeId);
  }

  return {
    activeId,
    nextActiveId,
    hasActiveId,
  };
};
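Both logical cases are easy to verify in isolation. Here is a framework-free sketch (makeHasActiveId is a hypothetical helper; a captured number stands in for the Vue ref, while the comparison itself is identical):

```typescript
// Hypothetical factory: captures the current activeId and returns
// the same strict-equality check as hasActiveId in the composable.
function makeHasActiveId(activeId: number) {
  return (id: number): boolean => id === activeId;
}

const hasActiveId = makeHasActiveId(0);

const truthyCase = hasActiveId(0); // matches the active id
const falsyCase = hasActiveId(1);  // does not match
```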

With this change, the naming in our template loop is more descriptive of our intent:

// from
v-bind="{ ...tech, isActive: isActive(tech.id) }"
// to
v-bind="{ ...tech, isActive: hasActiveId(tech.id) }"

Extending

The next step is a big one: we need to find a cleaner way to extend the data set to comply with the implementation details of our feature. A function that extends the schema of any data set is my preferred approach here.

This new function needs to:

  1. make this extension testable in isolation.
  2. minimize the coupling in the template.

With these two things in mind, I'm thinking of creating a new method that extends any data set we pass into it. This separates our data set from the single responsibility of our composable, which is to deal with the activeId. This functional approach allows us to test the function in isolation, while making our composable highly reusable.

function withIsActive<T>(data: T[]) {
  return data.map((item) => {
    return {
      ...item,
      isActive: hasActiveId(item.id),
    };
  });
}

My idea here is to receive any data set as an argument and map through it. We spread the incoming properties first, then include our isActive property, using the output of the hasActiveId method for each item in the loop. The modified objects are then returned from the method to the template.

For testing, we only need to consider two cases:

  1. That every item in the data set is extended with an isActive property with a boolean value (isolation case).
  2. That the isActive property is truthy for the current item as we iterate through our whole data set (integration case).

export const DEFAULT_INIT = 0;
export const INCREMENT = 1;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  function nextActiveId<T>(data: T[]): number {
    return (toValue(activeId) + INCREMENT) % data.length;
  }

  function hasActiveId(id: number): boolean {
    return id === toValue(activeId);
  }

  function withIsActive<T>(data: T[]) {
    return data.map((item) => {
      return {
        ...item,
        isActive: hasActiveId(item.id),
      };
    });
  }

  return {
    activeId,
    nextActiveId,
    hasActiveId,
    withIsActive,
  };
};
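The mapping itself can also be sanity-checked as a pure function. In this sketch, withIsActiveSketch is a hypothetical standalone version that takes the active id as an argument instead of closing over the ref (I use a simple `extends { id: number }` constraint here; the composable's own typing is addressed in the Types section):

```typescript
// Hypothetical pure version of the extension step: every item keeps its
// own properties and gains an isActive boolean derived from the active id.
function withIsActiveSketch<T extends { id: number }>(data: T[], activeId: number) {
  return data.map((item) => ({ ...item, isActive: item.id === activeId }));
}

const extended = withIsActiveSketch(
  [
    { id: 0, name: 'first' },
    { id: 1, name: 'second' },
  ],
  0,
);
```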

Types

At this point in our refactoring process, we start to notice some red squiggly lines in our IDE; TypeScript is not happy. The types for our composable are pretty straightforward. The only one that complicates things a bit is our withIsActive method. We made the choice to make it generic because we do not want to fix it to any specific shape. The problem is that when we assign the value of isActive, we need to pass item.id into the hasActiveId method.

TS will complain about this because id does not exist in our generic type T.

So let's not bang our heads against the type wall here. We just need to constrain our type enough to give useful feedback to any consumer of our method about what might break its core functionality. This method would be pretty much useless if the data set we pass in has no id property. A type like the one below protects us against this case and improves the DX with hints in case of trouble, which is enough in my view.

useActiveId.types.ts
export type WithId<T> = {
  id: T & number;
  [x: string]: T;
};

This type defines the interface of an object that has an id key typed as the intersection of our generic type T and number.

We need to use an intersection here, otherwise TS will complain that number cannot be assigned to the type of T.

This is our only requirement and thus our main constraint, hence why we hard-code it. We then open the type up to accept any string key with the generic type T. This way we can customize T from the outside, which is better than fixing it to a fallback of unknown.
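To make the constraint concrete, here is a small, self-contained usage sketch of the type (the item literal is just illustrative data):

```typescript
type WithId<T> = {
  id: T & number;
  [x: string]: T;
};

// With T = string | number, the intersection collapses id to number,
// while the index signature accepts both string and number values.
const item: WithId<string | number> = { id: 0, name: 'vue' };
```

Passing an object without an id, or with a non-numeric id, would now be flagged at the call site instead of failing silently.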

Now we can fix the squiggly lines in our tests:

useActiveId.test.ts
test('withIsActive: Isolation case', () => {
  const { withIsActive } = useActiveId(0);
  withIsActive<string | number>(MOCK_DATA).forEach((item) => {
    expect(item).toHaveProperty('isActive');
    expect(item.isActive).toBeTypeOf('boolean');
  });
});
test('withIsActive: Integration case', () => {
  const { activeId, nextActiveId, withIsActive } = useActiveId(0);

  MOCK_DATA.map((item) => item.id).forEach((id) => {
    expect(withIsActive<string | number>(MOCK_DATA)[id].isActive).toBeTruthy();
    activeId.value = nextActiveId(MOCK_DATA);
  });
});

You can see that we are now instantiating the generic of withIsActive with string | number. This tells TS that our data set is an array of objects with at least an id of type number, and any other string key with a value of type number or string.


Encapsulation

We have worked hard to make our List and ListItem components clean, performant, and reusable, but there is one last thing we can do to make them better. In a more complex example, there might be a lot of configuration to consider. Also, if we use this component in multiple places in our app, you can see how keeping all of them in sync can quickly lead to headaches or repetitive work. We can improve all of this by encapsulating this reusable component, and all of the configuration our feature requires, into a configured component, another best practice I really like.

It starts by framing our reusable component from the perspective of our feature, which most of the time is also the perspective of the user.

The user does not care that we have a RefactoredList or StableList, what they see and care about is the TechStackList.

So we begin by creating a new component with this name, and by removing all coupling to our data set from the RefactoredList component.

<script setup lang="ts">
const stack = [
  { id: 0, name: 'vue' },
  { id: 1, name: 'nuxt' },
  { id: 2, name: 'pinia' },
  { id: 3, name: 'tailwind' },
  { id: 4, name: 'typescript' },
  { id: 5, name: 'vitest' },
];
</script>

<template>
  <RefactoredList :data="stack" />
</template>

Great, this gives us a better place for our data set (if we had to fetch data, this would make even more sense). Best of all, our reusable component is finally truly generic; there is zero coupling. If we need to reuse our feature component in multiple parts of the app, we can just move a single thing around without having to worry about keeping things in sync.

At first glance, you might say that this wrapper component is really not necessary. With this simple example, I could agree with you. However, one of the reasons I really like this best practice is that it gives us a nice package for our feature and sets the boundaries for our testing strategy:

  • The reusable component is best documented visually in Storybook and can show generic data.
  • The configured component can be documented functionally by component tests in Cypress, making assertions about our feature data set in the DOM.

If you consider this, then in my opinion it's definitely worth the effort.

Final code

We did it! Our code is so much cleaner now and our tests give us confidence moving forward. Thank you for sticking with me through the whole process. I know it wasn't quick but we should be proud of what we accomplished. Take it all in, this is the final state of our feature code:

import { ref, toValue } from 'vue';
import type { WithId } from './useActiveId.types';

export const DEFAULT_INIT = 0;
export const INCREMENT = 1;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  function nextActiveId<T>(data: T[]): number {
    return (toValue(activeId) + INCREMENT) % data.length;
  }

  function hasActiveId(id: number): boolean {
    return id === toValue(activeId);
  }

  function withIsActive<T>(data: WithId<T>[]) {
    return data.map((item) => {
      return {
        ...item,
        isActive: hasActiveId(item.id),
      };
    });
  }

  return {
    activeId,
    nextActiveId,
    hasActiveId,
    withIsActive,
  };
};

In the playground's terminal, you can press ctrl+c and then run npm run test to run the tests with Vitest.


Copyright © 2024