Over the past few months we built a pair of prototype applications to explore two product ideas we had been discussing internally. Our aim was to turn an initial description of each concept into something people could actually interact with and respond to.
During that process we ended up experimenting with two development environments: Loveable and Mendix. The intention at the start was not to compare the platforms; the focus was simply on getting prototypes built quickly enough to show them to others and gather feedback. Looking back, using both tools on the same ideas provided a useful way to understand how each environment approaches the process of building an application.
The two applications we built were quite different. One was a fitness tracker, designed to record exercises and log sets, reps, weight, and perceived difficulty; it responded to a query we had received about the feasibility of developing a custom fitness app. The other was an AI compliance tracker, which walks an organisation through a set of questions intended to check whether its use of AI aligns with the requirements of the EU AI Act and comparable frameworks elsewhere in the world used to enforce AI standards and restrictions.
Both prototypes were built with the same audience in mind. We wanted something that could be shown to potential customers and peers in the technology and compliance space in order to gather feedback. The main question we were trying to answer was whether the problems we were looking at were ones the market actually recognised, and whether the approaches we were testing felt relevant to the people dealing with them.
None of us had significant prior experience with either platform, so the exercise also became a way of learning how both environments work in practice.
Functional Prototypes
The first thing that stood out when working with Loveable was the speed at which a prototype could be assembled. For both applications we had something that could be demonstrated within about a day.
In the fitness tracker the first feature we implemented allowed a user to record an exercise session and track sets, reps, weight, and perceived difficulty. In the compliance tracker the starting point was a sequence of questions designed to check alignment with the EU AI Act.
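To make the data involved concrete, the kind of record the session-logging feature captures can be sketched in plain Python. This is illustrative only; names like ExerciseSet and Session are our own, not anything generated by Loveable:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExerciseSet:
    """One logged set within a workout: the fields the prototype tracked."""
    exercise: str
    reps: int
    weight_kg: float
    perceived_difficulty: int  # e.g. a 1-10 effort rating

@dataclass
class Session:
    """A workout session grouping the sets recorded in one visit."""
    date: str
    sets: List[ExerciseSet] = field(default_factory=list)

    def log_set(self, s: ExerciseSet) -> None:
        self.sets.append(s)

# Example: record a short session with two sets
session = Session(date="2024-05-01")
session.log_set(ExerciseSet("squat", reps=5, weight_kg=80.0, perceived_difficulty=7))
session.log_set(ExerciseSet("squat", reps=5, weight_kg=85.0, perceived_difficulty=8))
```

Even a sketch at this level was useful when deciding what the logging screens needed to ask for.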
Compared with other low-code or no-code environments we had tried previously, the time required to reach a demoable prototype was noticeably lower. Once we had something working it became much easier to begin showing the idea to people and collecting feedback.
Working Style
Most of the early work in Loveable involved prompting the platform to generate functionality based on descriptions of what we wanted the application to do. Once a working structure existed we then began editing and refining the pages that had been produced.
This style of development is often referred to as “vibe coding”. The process tends to begin with prompts and short iterations rather than detailed planning of the application structure. For early experiments this approach worked well, particularly when the goal was to test whether an idea made sense in practice.
UX and Interface Customisation
The most consistent difficulty we encountered came from the user interfaces generated by default.
When people began interacting with the prototypes, feedback often centred on the clarity of the layouts. Some users found the navigation and page structure slightly confusing, even though the underlying functionality worked as intended.
Initially this could be difficult to address because the system focuses primarily on generating functionality that matches the prompt. Adjusting individual elements of the interface sometimes required several iterations.
During the time we were working with the platform we noticed new features appearing that allow more targeted changes to specific parts of a page. These controls make it easier to refine layouts and respond to feedback from users. The level of interface control is still behind what is available in more structured development environments, though the direction suggests the platform is responding to feedback from people building prototypes in this way.
In practice, Loveable proved most useful during the early stages of exploration, where the priority is producing a working concept quickly and gathering reactions from potential users.
Data Structure Driven Design
Building the same ideas in Mendix begins in a different place. The platform generally expects the developer to define the domain model before moving into page design or workflow logic.
For the fitness tracker this meant creating entities for exercises, workout sessions, and recorded sets. For the compliance tracker it involved modelling questions, responses, and the relationships between them.
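Mendix expresses this as a visual domain model rather than code, but the compliance-tracker entities and their relationships correspond roughly to the following sketch. The names here (Question, Response, Assessment) are our own illustrative choices, not the exact entities in our model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    """A compliance question; follow_up_ids models branching between questions."""
    id: str
    text: str
    follow_up_ids: List[str] = field(default_factory=list)

@dataclass
class Response:
    """An organisation's answer, linked back to the question it answers."""
    question_id: str
    answer: str

@dataclass
class Assessment:
    """One run through the questionnaire for a given organisation."""
    organisation: str
    responses: List[Response] = field(default_factory=list)

    def answer(self, question: Question, text: str) -> None:
        self.responses.append(Response(question_id=question.id, answer=text))

# Example: start an assessment and answer the first question
q1 = Question(id="q1", text="Do you deploy AI systems classified as high-risk?",
              follow_up_ids=["q2"])
assessment = Assessment(organisation="Example Ltd")
assessment.answer(q1, "Yes")
```

Defining these relationships up front is precisely the work Mendix asks for before page design begins.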
This makes for a steeper starting point than Loveable, because the developer needs some understanding of data structures and application logic. Once the model is in place, though, the rest of the application builds out in a more structured way.
Maia
Mendix includes an AI assistant called Maia, which provides guidance while building the application. In our experience this was helpful when defining the domain model and working through some of the logic required for the application to function.
Maia can suggest elements of the data structure and provide recommendations for implementing workflows or page components. In some cases it can generate parts of the model directly, which reduces the time spent setting up the basic structure of the application.
The assistant does not remove the need to understand how the application is put together, though it does help smooth some of the more technical steps.
Page Structures and Visual Building Interface
The Mendix visual builder provides a drag-and-drop interface for constructing pages and navigation flows. The experience is more structured than the equivalent process in Loveable because each page is closely connected to the underlying data model.
For the fitness tracker this made it straightforward to create pages for recording workouts and reviewing previous sessions. In the compliance application it allowed us to organise the questionnaire flow in a way that linked each response to the appropriate data entities.
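The questionnaire flow itself reduces to walking a chain of questions and storing each answer against the matching question. A minimal self-contained sketch of that traversal, again with invented names rather than anything Mendix generates:

```python
# Each question may name the next one in the chain; None ends the flow.
questions = {
    "q1": {"text": "Do you use AI in production?", "next": "q2"},
    "q2": {"text": "Is any system classified as high-risk?", "next": None},
}

def run_flow(answers):
    """Walk the chain from q1, pairing each question id with its supplied answer."""
    responses = {}
    current = "q1"
    while current is not None:
        responses[current] = answers[current]
        current = questions[current]["next"]
    return responses

responses = run_flow({"q1": "Yes", "q2": "No"})
```

In Mendix the equivalent linkage is handled by associations between pages and entities, which is what made the flow easy to keep consistent as it grew.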
This additional structure means the early stages of the build take slightly longer. As the application grows it becomes easier to extend and maintain because the relationships between pages, data, and logic are clearly defined.
Live Feedback, Bugs, Enhancements, and Work Scheduling Built Into the Platform
Another area where Mendix differs from Loveable is the way development work is tracked.
During testing we were able to record bugs, enhancement requests, and feedback directly within the Mendix ecosystem. These items could be assigned to developers and scheduled within a development timeline. As the prototypes evolved this created a single place to keep track of ongoing work.
This made it easier to coordinate changes and improvements as feedback from early demonstrations began to come in.
Working with both environments was instructive in showing how each supports different parts of the prototyping process.
Loveable made it possible to move from an idea to a working demonstration very quickly, which helped start conversations with potential users and gather feedback early. Mendix required more structure at the beginning of the build, particularly around defining the data model and application logic, but provided a clearer framework for continuing development once the shape of the application was established.
Using the same two ideas in both environments was not something we originally set out to do. In retrospect it provided a useful way to see how different development approaches support the early stages of exploring a product idea.