How We Migrate Our Applications to Micro Frontend Architecture at Gcore

At Gcore, tech evolution and adaptability are our bread and butter. Recently, we embarked on a new project: migrating our largely Angular-based applications to a Micro Frontend Architecture. We opted for Module Federation as our strategy for this transition, given our extensive use of Angular.

Our Goals

As we started our migration journey, we established clear-cut goals. Our ambitions were not limited to modernizing our technology stack, but also included tangibly improving both the user experience and our development process.

  • Reduce loading time: Our first priority was to enhance our applications’ performance by reducing their loading time. Faster load times translate directly into improved user satisfaction and engagement.
  • Reuse modules: We aimed to establish a more efficient development process by reusing modules across different applications. Not only does this minimize redundancy, but it also accelerates the development cycle and enhances maintainability.
  • Abandon subdomains in favor of pathnames: We wanted to move away from using subdomains (not completely, of course), opting instead for pathnames. This shift gives us finer control over routing and delivers a more seamless user experience.
  • Optimize widget script initialization: Lastly, we had a widget script that initializes with every application. We decided this needed to change. Rather than have it load with each app individually, wasting precious time, we wanted this process to occur just once during the loading of our shell application.

These objectives guided our migration to the Micro Frontend Architecture. Our story is not just about a technological upgrade, but also about the pursuit of a more efficient, user-friendly digital environment.

Module Federation

Before we delve deeper into our journey, let’s shed some light on the crucial tool we employed: Module Federation. A feature introduced in Webpack 5, Module Federation allows for separate builds to form various “micro frontends,” which can work together seamlessly.

Module Federation enables different JavaScript applications to dynamically run code from another build, essentially sharing libraries or components between them. This architecture fosters code reuse, optimizes load times, and significantly boosts application scalability.
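To make the idea concrete, here’s a minimal, generic sketch of the two sides involved. The names, paths, and URL are placeholders, and our real setup (shown later) goes through ngx-build-plus and a manifest rather than hard-coded remotes:

// Remote’s webpack config: expose a module under a well-known entry file.
const { ModuleFederationPlugin } = require('webpack').container;

new ModuleFederationPlugin({
  name: 'mfe1App',
  filename: 'remoteEntry.js',
  exposes: { './Module': './src/app/mfe1App/mfe1App.module.ts' },
});

// Host’s webpack config: point at that entry file, so the exposed code is
// fetched and executed at runtime instead of being bundled into the host.
new ModuleFederationPlugin({
  remotes: { mfe1App: 'mfe1App@http://localhost:4001/remoteEntry.js' },
});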

Now, with a better understanding of Module Federation, we can explore how it played a pivotal role in our migration process.

ngx-build-plus

In the Angular ecosystem, ngx-build-plus has been a game-changer for implementing Module Federation. It’s an extension for the Angular CLI that allows us to tweak the build configuration without ejecting the entire Webpack config.

We can define shared dependencies, making sure they’re only included once in the final bundle. Below is an example of a configuration where we’ve shared Angular libraries, rxjs, and some custom Gcore libraries:

shared: share({
  '@angular/core': { singleton: true, requiredVersion: '^14.0.0' },
  '@angular/common': { singleton: true, requiredVersion: '^14.0.0' },
  '@angular/router': { singleton: true, requiredVersion: '^14.0.0' },
  rxjs: { singleton: true, requiredVersion: '>=7.1.0' },
  '@gcore/my-messages': {
    singleton: true,
    strictVersion: false,
    packageName: '@gcore/my-messages',
  },
  '@gcore/my-modules': {
    singleton: true,
    strictVersion: true,
    requiredVersion: '^1.0.0',
    packageName: '@gcore/my-modules',
  },
  '@gcore/ui-kit': { singleton: true, requiredVersion: '^10.2.0' },
}),
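For context, here’s a rough sketch of how that shared map can sit inside the extra webpack config that ngx-build-plus picks up. The file layout is an assumption, and the share helper here is the one from @angular-architects/module-federation/webpack:

// webpack.config.js, referenced from angular.json via ngx-build-plus’s
// extraWebpackConfig option (a sketch, not our exact file).
const { share } = require('@angular-architects/module-federation/webpack');
const webpack = require('webpack');

module.exports = {
  output: { publicPath: 'auto' },
  plugins: [
    new webpack.container.ModuleFederationPlugin({
      // name / filename / exposes omitted here; see the remote config later in this article
      shared: share({
        '@angular/core': { singleton: true, requiredVersion: '^14.0.0' },
        // ...the rest of the shared map shown above
      }),
    }),
  ],
};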

So there you have it, folks! Get ngx-build-plus, set up Module Federation, configure your shared dependencies, and voilà, you’re a micro frontend maestro. Congratulations!

Oh, wait


Communication Between Applications

As our applications’ complexity grew, so did the need for efficient communication between them. Initially, we had a widget script loaded on each application page, and communication between the application and widget was orchestrated via the window object. While this worked, we realized we could optimize this even further.

Enter @gcore/my-messages, our very own knight in shining armor. It’s a shareable service that works much like a message bus, except it’s not a bus: it’s a service, powered by rxjs.

But before we get carried away with metaphors, let’s clarify one thing: this service is blissfully unaware of widgets and applications. It only deals with message interfaces and the logic of sending them; the interfaces are essentially conventions. This keeps the service lean, efficient, and unprejudiced, making it a perfect mediator for our applications’ chatter.
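To illustrate the shape of such a service, here’s a simplified sketch, not the actual @gcore/my-messages implementation; at its core it’s just a typed rxjs stream:

import { Injectable } from '@angular/core';
import { Observable, Subject } from 'rxjs';

// The message interfaces are the contract; WMessage here is a simplified stand-in.
export interface WMessage {
  type: string;
  data?: unknown;
}

@Injectable()
export class MyMessagesService {
  private readonly messagesSubject = new Subject<WMessage>();

  // Anyone (shell, micro frontend, or widget) can listen to this stream.
  public readonly messages$: Observable<WMessage> = this.messagesSubject.asObservable();

  // The service doesn't know or care who the sender and receivers are.
  public sendMessage(message: WMessage): void {
    this.messagesSubject.next(message);
  }
}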

There’s more.

Where Am I? Statics’ Lack of Self-Awareness

Statics are blissfully unaware of where they’re served from, and this lack of self-awareness can cause some real headaches.

To solve this existential crisis, we created a mechanism that could inform each micro frontend app about its own origin. Various solutions could have been adopted, but we decided to build my-modules.

Think of @gcore/my-modules as the travel guide. It’s injected into the shell application and carries all the essential information about the micro frontend apps. This module, crafted to be environment-aware, is configured during the shell’s CI/CD process. As such, it’s dynamic yet reliable, and it’s filled during shell initialization. This means you can configure it as you wish.

Through Module Federation, my-modules is shared, allowing other apps to access this essential information if and when they need it. Note that every time you add a new micro frontend application which should be served through your shell, you should update my-modules and configure it properly. No more lost applications, everyone knows where they are.
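As a rough illustration of what such a registry might look like (the interface and method names below are assumptions, not the actual @gcore/my-modules API), it boils down to a map that the shell fills in at startup:

import { Injectable } from '@angular/core';

export interface MicroFrontendInfo {
  remoteName: string;   // e.g. 'mfe1App'
  remoteEntry: string;  // where its remoteEntry.js is served from
  routePath: string;    // the pathname the shell mounts it under
}

@Injectable()
export class MyModulesService {
  private readonly modules = new Map<string, MicroFrontendInfo>();

  // Called by the shell during initialization, using environment-specific
  // configuration produced by the shell's CI/CD pipeline.
  public register(info: MicroFrontendInfo): void {
    this.modules.set(info.remoteName, info);
  }

  // Lets any micro frontend answer the question "where am I served from?"
  public get(remoteName: string): MicroFrontendInfo | undefined {
    return this.modules.get(remoteName);
  }
}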

Local Development: Module Federation App as Standalone

Now, let’s talk about something you’ve not seen yet – @gcore/my-panel. You may not have encountered it in the Module Federation webpack config, but it’s been there all along, working tirelessly behind the scenes.

The role of @gcore/my-panel is to help us initialize the widget. It processes widget messages, relaying them from the widget to the application and back again. And that’s not all; @gcore/my-panel serves another significant role during local development by enabling us to run our micro frontend application as a standalone application.

So, how does it work? Well, in a micro frontend application, just as in the shell application, you initialize it during the app’s initialization. Here’s how we do that in our app.init.ts:

export async function initApp(
  appConfigService: AppConfigService,
  myModulesService: MyModulesService,
  myPanelService: MyPanelService,
): Promise<unknown> {
  await appConfigService.loadConfig();
  fillMyModulesService(myModulesService, appConfigService);
  return myPanelService.init(appConfigService.config.widgetUrl);
}

This is how we’ve managed to integrate @gcore/my-panel and utilize it effectively in our applications, making it an indispensable part of our migration to a micro frontend architecture.

If you look closely, you will see another key operation taking place in our initApp function: we fill up our myModulesService with settings from our appConfigService. This step is essential in making sure our widgets are properly equipped with the necessary configuration to function optimally in our applications. In your MF application, at the app.module level, you can provide APP_INITIALIZER:

export function initApp(myPanelService: MyPanelService): () => Promise<void> {
  return async (): Promise<void> => {
    await myPanelService.init();
  };
}
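For completeness, here’s how such a factory is typically wired up in the MF application’s app.module.ts. This is a sketch of standard Angular APP_INITIALIZER wiring rather than our exact module; initApp is the factory shown above and MyPanelService is the app’s own service:

import { APP_INITIALIZER, NgModule } from '@angular/core';

@NgModule({
  // ...declarations, imports, bootstrap, etc.
  providers: [
    {
      provide: APP_INITIALIZER,
      useFactory: initApp,   // the factory from the snippet above
      deps: [MyPanelService],
      multi: true,
    },
  ],
})
export class AppModule {}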

You might be wondering, “Where’s the extensive configuration we usually see in initApp from app.init.ts?” You’re not alone! The approach has indeed changed; let’s dissect this.

Micro Frontend Application Initialization

When we serve our application on a domain like localhost:4001, it behaves just like a standard Angular application, thanks to the magic of myPanelService.init(). This function allows developers to work with their application in a familiar environment. Let’s consider this application as Mfe1App hosted on localhost:4001.

However, things get interesting when we attempt to load our micro frontend application into our shell application. Webpack visits localhost:4001/remoteEntry.js to fetch the micro frontend module. This is defined in our shell’s app-routing.module.ts:

{
  path: 'mfe1App',
  loadChildren: () =>
    loadRemoteModule({
      type: 'manifest',
      remoteName: 'mfe1App',
      exposedModule: './Module',
    }).then((m) => m.Mfe1AppModule),
},

And also in our mfe.manifest.json:

{ 
  "mfe1App": "<http://localhost:4001/remoteEntry.js>" 
}
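For this manifest-based loading to work, the shell has to fetch the manifest before Angular bootstraps. Assuming loadRemoteModule above comes from @angular-architects/module-federation, the shell’s main.ts typically looks roughly like this (the manifest path is an assumption):

// main.ts of the shell (a sketch)
import { initFederation } from '@angular-architects/module-federation';

initFederation('/assets/mfe.manifest.json')   // load the remote map first
  .catch((err) => console.error(err))
  .then(() => import('./bootstrap'))          // bootstrap.ts holds the usual platformBrowserDynamic() call
  .catch((err) => console.error(err));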

Our webpack configuration of Mfe1App only exposes one module:

new webpack.container.ModuleFederationPlugin({
  name: 'mfe1App',
  filename: 'remoteEntry.js',
  exposes: {
    './Module': './src/app/mfe1App/mfe1App.module.ts',
  },
  library: {
    type: 'module',
  },
}),

This configuration exposes the mfe1App module, with mfe1App.module.ts as its entry point and remoteEntry.js as the file used to load it. The library type is set to module to indicate that the exposed code uses ES modules. And this is why our Mfe1App’s initApp is so succinct: we’re initializing everything within this module. For that, we use guards.

Consider initMfe1App.guard.ts:

// imports
@Injectable()
export class InitMfe1AppGuard implements CanActivate {
  constructor(
    private myMessagesService: MyMessagesService,
    private configService: AppConfigService,
    private authService: AuthService,
    private widgetService: WidgetService,
    private mfe1AppService: Mfe1AppService,
    //...
    @Optional() private myModulesService: MyModulesService,
  ) {}

  public canActivate(
    route: ActivatedRouteSnapshot,
    state: RouterStateSnapshot,
  ): Observable<boolean | UrlTree> | Promise<boolean | UrlTree> | boolean | UrlTree {
    this.mfe1AppService.createPrimaryUrlFromRouteSnapshot(route);
    if (this.widgetService.loaded) {
      return true;
    }
    return this.myMessagesService.messages$.pipe(
      filter((message: WMessage): message is WMessageWidgetLoaded | WGlobalConfigChanged => {
        if (
          checkMyMessageType(message, W_MESSAGE_TYPE.GLOBAL_CONFIG_CHANGED) ||
          checkMyMessageType(message, W_MESSAGE_TYPE.WIDGET_LOADED)
        ) {
          return true;
        } else {
          this.myMessagesService.sendMessage({ type: W_MESSAGE_TYPE.GLOBAL_CONFIG_REQUEST });
          return false;
        }
      }),
      take(1),
      tap((message) => this.widgetService.load(message.data)),
      //...
      mapTo(true),
      timeout(1000 * 10),
    );
  }
}

This guard replaces the APP_INITIALIZER token you are used to, providing a new home for all initialization logic.
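Wiring-wise, the guard simply sits on the routes of the exposed module, so it runs whenever the shell activates the remote. Here’s a simplified sketch with assumed file and route names:

// mfe1App.module.ts (simplified sketch)
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { InitMfe1AppGuard } from './initMfe1App.guard';

const routes: Routes = [
  {
    path: '',
    canActivate: [InitMfe1AppGuard], // runs the initialization logic shown above
    loadChildren: () => import('./pages/pages.module').then((m) => m.PagesModule),
  },
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  providers: [InitMfe1AppGuard],
})
export class Mfe1AppModule {}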

You did it! You can now start using your micro frontend applications. In your MF app, APP_INITIALIZER handles standalone initialization, while the init guard handles initialization when the app is loaded as a federated module. This new approach streamlines the initialization process, offering developers a more familiar, Angular-like experience. The more things change, the more they stay the same, right?

But what about when something goes wrong?

providedIn: 'root' Is Not So Friendly Anymore

As you embark on your micro frontend journey, you might experience some turbulence, especially when your application doesn’t start smoothly due to injection conflicts. This can occur because most of your providers were provided in 'root', a popular technique widely used in the Angular realm.

While this approach is still generally good practice, it can become less suitable for your micro frontend apps in certain cases. Specifically, if some of your services depend on other services and configurations that are now initialized in the app init guard, you should provide them at the micro frontend application level.

That being said, providedIn: 'root' can still be a viable choice for simple, non-configurable, genuinely global services. However, you should harness your analytical prowess and provide services only where they’re truly required.

Perhaps it’s time for a little restructuring; consider making some of those global helper services local and injecting them directly into components where they’re needed. This shift can enhance the modularity and maintainability of your application, making it more robust and easier to navigate.
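In practice, that restructuring can look like this: the service drops providedIn: 'root' and is instead listed in the micro frontend module’s providers, so it’s only instantiated after the init guard has prepared the configuration it depends on. The names below are purely illustrative:

import { Injectable } from '@angular/core';
import { AppConfigService } from './app-config.service'; // the app's own config service; path is illustrative

@Injectable() // deliberately no providedIn: 'root'
export class ReportsService {
  // Safe to rely on config here: by the time this service is instantiated,
  // the init guard has already loaded the configuration.
  constructor(private readonly configService: AppConfigService) {}
}

// ...and in mfe1App.module.ts:
// providers: [ReportsService],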

Conclusion

The journey to a micro frontend architecture at Gcore was a worthwhile one, yet laden with unique challenges. Throughout the process, we’ve been building a stronger and more flexible foundation that empowers teams to concentrate on creating the best applications possible.

In a world of micro frontends, teams only need to adopt changes from shared libraries when it directly benefits their applications. This results in fewer interruptions and more time for innovation. But the resulting freedom calls for a clear and agreed-upon integration strategy to maintain coherence across different applications and to manage the frequency of updates.

Our experience is a testament that the transition to a micro frontend architecture is not just about overcoming technical hurdles. It’s a leap towards a more modular, efficient, and scalable way of building frontends.

It’s important to note that while the micro frontend architecture is gaining popularity, it isn’t a one-size-fits-all solution. You should consider the specific needs of your situation, like we did at Gcore, weighing the pros and cons before making the jump. Just because it’s the new trend doesn’t necessarily mean it’s the right fit for your project or organization.

Good luck!
