Anyone who has seriously tried to train their own LoRA model in 2023 or 2024 - be it with kohya_ss, Axolotl or another PEFT-based toolchain - knows that there is often a deep chasm between theory and practice. On paper, it sounds simple: load a base model, prepare your own training data, adjust the parameters and off you go. In reality, it often ends in a jungle of Python versions, CUDA errors, inconsistent libraries and incompatible storage formats. You switch between safetensors, ckpt, GGUF or, more recently, MLX, without always knowing which format is compatible with which environment - and why. Even small changes in the setup can cause entire training runs to crash, and if you want to use a model in a different environment, you are often faced with the next round of conversions.
It is precisely in this situation that one begins to understand the true meaning of the term "low-rank adaptation": not just because the models are adapted via low-rank matrices, but because you become humble yourself - in the face of the complexity of these systems. And yet it is precisely this method that is today the key to adapting large language models efficiently, in a resource-saving and domain-specific manner.
Now Claris is entering the stage with FileMaker 2025 - an environment that was previously known for completely different things: database solutions, business processes, clearly structured workflows.
And suddenly a script step appears there that is simply called "Fine-Tune Model". A command that mentions the word "LoRA" in the same breath as "FileMaker". Anyone who has spent the last few years running classic LoRA trainings will inevitably rub their eyes. Because the question is obvious: can this really work - and if so, at what level?
This curiosity is justified. After all, whereas people used to spend hours fiddling around in the command line, FileMaker now offers the prospect of "one-click" training - directly in an environment that already contains the data you want to train with. A paradigm shift: away from the experimental laboratory and towards a productive toolbox. But of course skepticism remains appropriate. Because as charming as this idea is, the real question is: what is happening technically? Is it a genuine, fully-fledged LoRA fine-tuning or an abstracted, simplified version? And how do the results differ qualitatively from training carried out using traditional methods?
Before judging this, it is worth taking a brief look back - at the principle itself, at the idea behind LoRA, which has allowed us to rethink large models in a small space.
Markus Schall has published a separate article on his website about the new AI functions in FileMaker 2025. The article at hand is about LoRA fine-tuning a language model directly from FileMaker. In the next article, we will describe how a language model can be trained in practice with FileMaker.
What is LoRA anyway? - A brief review of the principle
LoRA stands for Low-Rank Adaptation, and this name describes the method precisely. It is a technique in which a large neural network is not completely retrained, but only adapted in certain weight matrices in compressed form. Specifically, some layers of the model are provided with small, additional matrices that form so-called "adapters". These adapters learn new, task-specific patterns during the fine-tuning process without changing the original model weights. This has two major advantages: firstly, it saves memory and computing power, and secondly, the base model remains unchanged - meaning you can create, combine and, if necessary, remove several fine-tunes on the same base model.
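The core of the method fits in a few lines. The following sketch is purely illustrative - plain NumPy rather than any real training code, with freely chosen dimensions - but it shows how a frozen weight matrix W is combined with two small trainable matrices A and B:

```python
import numpy as np

# Frozen weight matrix of a single layer (dimensions chosen for illustration).
d = 4096
W = np.random.randn(d, d).astype(np.float32)

# LoRA adapter: two small matrices of rank r; only these are trained.
r, alpha = 16, 32
A = np.random.randn(r, d).astype(np.float32) * 0.01  # trainable
B = np.zeros((d, r), dtype=np.float32)               # trainable, starts at zero

def forward(x):
    # Original path plus the low-rank correction, scaled by alpha / r.
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

print(W.size)           # 16,777,216 frozen parameters
print(A.size + B.size)  # 131,072 trainable parameters - well under 1 % here
```

Because B starts at zero, the adapter initially changes nothing; training then moves only A and B while W stays frozen.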
This idea was originally born out of necessity. Complete fine-tuning was simply too expensive. Re-training a model with several billion parameters requires not only powerful hardware, but also huge amounts of data and precise control. LoRA, on the other hand, came up with a pragmatic solution: instead of changing the entire network, only a handful of additional weights are optimized - usually one to two percent of the total volume. This suddenly made fine-tuning a realistic option for individual users, start-ups and research groups.
LoRA is basically a symbol of the change that AI development has undergone. Whereas in the past, training was done from scratch, today we speak of adaptation: adapting existing knowledge instead of forcing new knowledge. It is the machine equivalent of what we might call experience in human learning - the model learns to find its way in a new environment without losing its identity.
Another advantage of LoRA is its modularity. Once trained, a LoRA adapter can be loaded or unloaded like an add-on module. This creates specialized variants of a base model - for example, a chat model that specializes in medical texts, one for legal language, or one that reflects the style of a particular company. In practice, the process has become established for both text and image models, while the underlying principles remain the same: small, differentiated adaptations instead of large, global interventions.
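What this modularity looks like in code can be sketched with the Hugging Face PEFT library discussed below; the model name and adapter paths are placeholders, and the exact API may shift between PEFT versions:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# The base model is loaded once and never modified.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach a trained adapter like an add-on module (placeholder paths).
model = PeftModel.from_pretrained(base, "./lora-medical", adapter_name="medical")

# Further specializations can sit side by side on the same base model...
model.load_adapter("./lora-legal", adapter_name="legal")

# ...and be switched at runtime.
model.set_adapter("legal")

# Temporarily disabling all adapters restores the original behavior.
with model.disable_adapter():
    pass  # inference here uses only the unchanged base weights
```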
But even if the process itself is elegant, its implementation remains challenging. The training environment, data preparation and the right hyperparameters determine success or failure. This is precisely where the crucial difference between the classic open source toolchains such as Axolotl, LLaMA-Factory or kohya_ss and the new, integrated solution in FileMaker 2025 becomes apparent. Both use the same mathematical idea - but they embed it in completely different technical and conceptual contexts.
And this is precisely where our comparison comes in: in trying to understand two worlds that speak the same language but think very differently.
The classic way - LoRA training with kohya_ss and PEFT
Anyone who has taken the classic route to LoRA training knows the ritual: first installing Python, then the right version of PyTorch, then the matching NVIDIA drivers - and at the end there is always the same uncertainty as to whether everything will work together. kohya_ss, originally designed for training visual models, has in recent years become a kind of universal solution for anyone wanting to create LoRA adapters, whether for images or text. Under the hood, the system uses the same principles as the PEFT library from Hugging Face - only wrapped in a more convenient graphical interface.
The process always follows the same pattern. You start with a base model, such as a Llama or Mistral derivative. The training data is then prepared - usually in the form of JSONL files with role structures ("user" and "assistant"), but sometimes also as simple question-answer lists. The parameters must then be defined: learning rate, LoRA rank, adapter layers, batch size, optimizer, target directories. This phase alone separates the impatient from the patient, as each of these settings can make the difference between success and failure.
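In a PEFT-based setup, these decisions end up in a configuration object. A minimal sketch, assuming the Hugging Face peft and transformers libraries - the concrete values are illustrative, not recommendations:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Every one of these values is a decision the user has to get right.
config = LoraConfig(
    r=16,                                 # LoRA rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # which weight matrices get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically around 0.1-2 % of all weights
```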
What follows is the actual training phase - often accompanied by a feeling between hope and skepticism. While the GPU calculates for hours, you curiously observe the loss curve, and yet you never really know whether the result at the end is really better than before. Sometimes the training ends with an error message, sometimes with a file that can no longer be loaded later. And if it succeeds, the next challenge awaits: converting the finished model into a format that can be used in other environments. An adapter that is available as safetensors often has to be converted to GGUF or MLX - depending on the target platform. Occasionally, tensors are lost in the process, and if you are unlucky, you are back to square one.
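The conversion step usually begins by merging the adapter back into the base weights, so that downstream tools see an ordinary checkpoint. A hedged sketch with PEFT - the paths are placeholders, and the name of llama.cpp's conversion script has changed between versions:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "./my-lora-adapter")  # placeholder path

# Fold the LoRA weights into the base model and save a plain
# Hugging Face checkpoint (safetensors by default in recent versions).
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")

# From here, a llama.cpp conversion script (e.g. convert_hf_to_gguf.py,
# depending on the version) can turn ./merged-model into a GGUF file.
```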
Despite all these hurdles, the classic route has a certain appeal. It is honest, transparent and you can feel what is happening in the background at every step. You can see the weights, you can change parameters individually, you have full control. And that was precisely the charm of this world for a long time: it rewarded those who fought their way through the jungle. Anyone who successfully trained their own LoRA model for the first time felt as if they had climbed a summit.
But at some point, the question arises as to whether this effort is still appropriate. After all, the goal remains the same - to create a model that adapts to a specific language, style or field of activity. The method is sound, but getting there is often difficult. And so there is a growing desire for an environment in which all this is no longer a weekend project, but a working tool.
FileMaker 2025 - LoRA fine-tuning via script
With FileMaker 2025, Claris has now dared to take precisely this step - and, it may be said, has done so with a certain elegance. For the first time, a classic business database gains a command that goes by the name of "Fine-Tune Model". Behind this simple expression lies a remarkable idea: LoRA training, previously a topic for specialists, is integrated directly into the everyday workflow.
Technically, this is done using the so-called AI Model Server, which runs locally on Apple Silicon systems and is based on Apple's MLX framework. This system takes care of all the calculation steps - from loading the base model to creating the adapter layers. The user only has to specify which data is to be trained on, and can do this in two ways: either via an existing FileMaker table - for example, a collection of customer inquiries, support dialogs or text fragments - or via an external JSONL file in chat format. This eliminates the need for time-consuming data preparation outside the system; you work directly with the records that are already available in the company.
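The exact schema FileMaker expects is not spelled out here; the following sketch simply assumes the widely used chat-style JSONL convention with "user" and "assistant" roles - one training example per line:

```python
import json

# Whether FileMaker expects exactly this top-level "messages" key is an
# assumption - the general shape is the established chat-JSONL convention.
examples = [
    {"messages": [
        {"role": "user", "content": "How do I reset a customer's password?"},
        {"role": "assistant", "content": "Open the customer record and use "
                                         "the password reset action there."},
    ]},
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```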
The parameter selection has also been significantly streamlined. Instead of twenty parameters, there are only a few, but decisive ones - max_steps, learning_rate, batch_size, and lora_layers. The remaining values are sensibly preset in the engine. This reduction is not a disadvantage, but an expression of a clear design philosophy: FileMaker is not intended to be a research platform, but a tool that delivers reproducible, stable results.
The fine-tuning itself then runs like any other script command: The user calls "Fine-Tune Model", passes the model name and location, and FileMaker passes the rest to the AI Model Server. The training takes place completely locally - without the cloud, without a third-party API, without data protection risk. The result is a new model with the prefix fm-mlx- that can be used directly within the FileMaker environment for text generation, classification or dialog functions.
Significantly simpler LoRA training process with FileMaker
Anyone who has had the classic LoRA experience will probably be taken aback by the first run: no terminal, no flood of logs, no cryptic error messages. Instead, a clean progress bar and a reproducible result. Of course, you can criticize the fact that you have less control - no access to exotic optimizers, no experiments with QLoRA or layer freezing - but that is precisely the point. Claris is not aimed at researchers, but at users who want to work productively with their own data.
This fundamentally changes the character of LoRA training. An experimental process becomes a plannable process. In future, companies will be able to adapt their own internal language models without having to operate the infrastructure themselves. The data remains in-house, the processes are documented and the results can be versioned and automated like any other FileMaker component.
Of course, skepticism is also allowed here. The AI Model Server is still tied to Apple Silicon, and there is still a lack of in-depth parameter access. But the path is clear: where it used to take weeks to set up, it now only takes minutes. And where you used to laboriously switch between storage formats, a script command now suffices.
In doing so, FileMaker has achieved something that rarely happens in the AI scene: it has not tried to do "more", but "less" - and in a way that underlines the actual strength of the platform. Structure instead of chaos, integration instead of fragmentation.
Practical comparison - MLX-LoRA vs. PEFT-LoRA
If you put the two approaches side by side, you will notice at first that they essentially do the same thing - adapt an existing language model with the help of additional adapter weights. But the way to achieve this could hardly be more different. While the open source world sees LoRA as a flexible modular system, Claris sees it as part of a clearly defined workflow. Some experiment with each component, others integrate them seamlessly into a closed environment.
The classic PEFT approach (Parameter-Efficient Fine-Tuning) - for example via Axolotl, LLaMA-Factory or kohya_ss - allows every detail of the training process to be controlled. You can specifically define which layers are adapted, which learning rates are used, how gradients are handled, how memory is saved or batch sizes are varied. This freedom is powerful, but it requires expertise and sensitivity. Even small errors in the configuration lead to unusable models or non-converging runs. The benefit lies in its scientific nature: if you want to understand why a model behaves the way it does, this is the best place to start.
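To make this freedom concrete: a classic run exposes dozens of levers beyond the LoRA configuration itself. The sketch below shows a handful of them via the transformers TrainingArguments class; all values are illustrative:

```python
from transformers import TrainingArguments

# A few of the many adjusting screws of a classic PEFT run
# (values are illustrative, not recommendations).
args = TrainingArguments(
    output_dir="./runs/lora-experiment",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,  # trade memory for effective batch size
    gradient_checkpointing=True,    # save VRAM at the cost of speed
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    logging_steps=10,
)
```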
FileMaker 2025 takes a different approach: there, LoRA is not understood as a research tool, but as an operational function - part of a system that processes information instead of researching it. The new script command abstracts many technical details without distorting the basic idea. The fine-tuning runs in the background on the AI Model Server, controlled by a few simple parameters. Everything that previously lived in YAML files or shell commands is poured into a familiar FileMaker script. The result is less spectacular, but more stable - a reproducible process that can be documented, automated and integrated into company logic.
You could describe the difference like this: The classic way is like wrenching on an engine, where every gasket is visible and every adjustment is manual. FileMaker, on the other hand, has covered the engine and put a start button next to it. The result may be less exciting for hobbyists, but it starts reliably.
As far as the results are concerned, the quality in both cases depends on the same factors: the quality of the data, the appropriateness of the learning rate and the basic model. Differences arise more from the nature of the environment than from the method itself. FileMaker, by its very nature, works in closed data sets - typically application or company-specific corpora. This means cleaner but smaller data sets. In the open source world, on the other hand, large, mixed data sets are usually used, often from a wide variety of sources. This can lead to more robust results on the one hand, but more inconsistent results on the other.
The result is clear: FileMaker delivers a stable, usable model in less time, while PEFT-based training offers more potential, but also more uncertainty. So if you want a reproducible process that can be integrated into everyday working life, FileMaker is an unexpectedly mature solution. On the other hand, those who want to experiment, understand and go beyond the limits of the standard parameters are better off in the open source world.
Quality differences - what really counts
Despite all the discussions about frameworks, formats and commands, one thing must not be overlooked: The quality of a LoRA fine-tuning is not determined by the tool, but by what you feed it. A cleanly structured data set that contains clearly formulated prompts and realistic answers has a greater impact on the end result than any learning rate or batch size. This is true for both FileMaker training and PEFT-based runs.
Nevertheless, it is worth taking a look at the differences that have an indirect influence on quality. In the classic environment, you usually work with larger amounts of data, which entails a certain variance. Models that are trained on such data sets tend to respond more broadly but less precisely. They often develop a certain "language breadth", which is impressive in generic applications, but can lead to arbitrariness in specialized environments. FileMaker, on the other hand, promotes the opposite: here, data is specifically selected and curated, often directly from a table that reflects the real business context. This creates a natural focus - the model does not learn everything, but what is relevant.
Encapsulated process ensures better stability
Another point is reproducibility. Classic LoRA training usually runs in environments that change quickly due to version updates, GPU drivers or library changes. A training that works today may fail tomorrow. FileMaker breaks with this uncertainty by encapsulating the entire process. The AI Model Server uses a clearly defined MLX runtime that does not depend on the user or the internet connection. This leads to less flexibility, but also to more stability - and that is precisely what is crucial in productive scenarios.
The evaluation of the results also differs. In the open source world, quality is often measured with quantitative metrics - perplexity, accuracy, BLEU score. FileMaker, on the other hand, works more quietly: the result is evident in everyday life when a system suddenly responds more precisely to internal questions or when an automatically generated text sounds more natural. These are qualitative, experience-based differences - the way a model reacts to familiar terms, how it picks up on company-specific tonality or how it "hallucinates" less with technical terms.
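For the quantitative side, perplexity is the most common of these metrics - it is simply the exponential of the average per-token loss. A minimal sketch:

```python
import math

def perplexity(token_losses):
    """Exponential of the mean negative log-likelihood per token -
    the lower, the better the model fits the evaluation text."""
    return math.exp(sum(token_losses) / len(token_losses))

# Cross-entropy losses per token, as a training loop would report them.
print(perplexity([2.1, 1.8, 2.4, 1.9]))  # ~7.8
```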
Finally, the time factor should not be underestimated. PEFT training with Axolotl or kohya_ss can easily take many hours or even days, including preparation and post-processing. FileMaker training, on the other hand, can be triggered in minutes and carried out in parallel with other tasks. This speed changes the way you work with AI systems: A technical project becomes an everyday process.
The result shows that the qualitative difference lies less in model performance than in availability and usability. FileMaker LoRAs are often smaller, more focused, more stable - and this is precisely what makes them valuable for real work processes. PEFT LoRAs, on the other hand, can be deeper, more adaptable and, at the limit, more powerful when properly trained. It's like comparing a precision machine with a universal laboratory: Both have their uses, but they serve different purposes.
And perhaps this is precisely the lesson of this new development - that quality is not just about numbers, but about reliability, clarity and the ability to bring knowledge into an organized framework. FileMaker 2025 shows that even in a world overflowing with experiments, sometimes the prudent, integrated solution produces the better results.
Portability and sustainability - between worlds
If you look at the landscape of model formats today, you are almost reminded of the early computer years, when each system spoke its own language. What used to be disk formats are now tensor formats: GGUF, safetensors, ckpt, MLX. Every framework, every engine seems to maintain its own logic. And just as you used to need adapter cables when switching from Windows to Mac, today you need conversion scripts - sometimes from MLX to GGUF, sometimes vice versa.
FileMaker 2025 takes a deliberate stance here. The new AI Model Server uses MLX exclusively as its backend - the framework that Apple developed for its own silicon. MLX is still young, but conceptually strong: it allows training, inference and LoRA fine-tuning in a consistent memory format, optimized for the unified memory architecture of the M-series chips. Claris' decision to adopt this system is therefore no coincidence. It follows the philosophy of creating a stable, controlled environment that can be operated completely locally.
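How FileMaker drives MLX internally is not public, but the open mlx-lm package gives an impression of what the underlying format handling looks like. A sketch under that assumption - the API shown is mlx-lm's, not FileMaker's, and may change as the library evolves:

```python
# Requires the open-source mlx-lm package and an Apple Silicon Mac.
from mlx_lm import convert, load, generate

# Convert a Hugging Face checkpoint into the MLX weight format,
# optionally quantizing it to fit the M-series memory budget.
convert("mistralai/Mistral-7B-Instruct-v0.2",
        mlx_path="./mistral-mlx", quantize=True)

model, tokenizer = load("./mistral-mlx")
print(generate(model, tokenizer, prompt="Hello", max_tokens=20))
```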
This has consequences for portability. A LoRA model that is trained in FileMaker automatically has the prefix fm-mlx- and can be used directly in the MLX runtime. However, if you want to use it in another environment - for example in LM Studio, Ollama or llama.cpp - you have to take the detour via a conversion. This is technically possible, but not yet trivial. Although there are initial tools that can transfer MLX models to GGUF, there is still no standardized bridge. The reason lies less in the mathematics than in the organization: MLX is Apple-centric, while GGUF is community-driven. Both systems are developing rapidly, but independently of each other.
In practice, this means that anyone working with FileMaker initially remains within a closed but stable ecosystem. For many use cases, this is not a disadvantage - on the contrary. The certainty that a model is trained, stored and used in the same environment has advantages that go far beyond technical convenience. It concerns issues of traceability, data sovereignty and longevity. While open source frameworks often live in short innovation cycles, FileMaker traditionally stands for consistency. Models that are trained today will still be executable in the same form in two or three years' time - and that is a value that can hardly be overestimated in the corporate context.
Nevertheless, the desire for interchangeability will remain. It is conceivable - and almost inevitable in the long term - that Claris will offer export functions in the future, for example to GGUF or ONNX. This would allow models to be used outside the FileMaker world without losing their core. It is equally likely that MLX itself will grow more strongly into the open source world and that the barriers between Apple and non-Apple environments will slowly disappear.
For now, however, FileMaker stands on a clearly defined foundation: stability over diversity, simplicity over overload. It's a decision that not everyone will like, but one that makes sense in the long term. Because in a world where everything is possible at the same time, what works reliably will once again carry weight.
Conclusion - From experiment to tool
In the end, the realization remains that FileMaker 2025 has not simply introduced a new function with its LoRA command, but has set a signal. A signal that AI training is no longer a specialist privilege, but can become part of normal business processes. The integration of LoRA into a system that has stood for stability, traceability and user-friendliness for decades marks a turning point - not in research, but in practice.
Classic LoRA training, whether with kohya_ss or PEFT, will retain its place. It remains the realm of developers, researchers and hobbyists - those who want to understand how models behave in detail, who want to look at each weight matrix individually. This openness has its value, it is the basis for progress. But the price for this is effort, uncertainty and a certain fragility.
FileMaker, on the other hand, chooses the other path: it reduces complexity to the essentials and transforms a complicated process into a repeatable routine. Fine-tuning becomes a script command, the model becomes part of a database, the AI becomes one tool among many. This does not make the technology smaller, but more tangible. It loses its experimental character and gains suitability for everyday use.
The qualitative difference is not in the computing power or the range of parameters, but in the approach. While many AI platforms flood the user with options, Claris takes a quieter path - the path of integration. Everything happens where the data is located anyway. This is not a technological trick, but an expression of a philosophy: processes belong together, not next to each other.
Perhaps this is the real progress - that the constant search for new possibilities has finally turned into a tool that can be understood, operated and controlled. FileMaker 2025 puts LoRA where it belongs: in the hands of those who work with data, not just in the labs of those who research it.
And so the circle closes: from the chaotic terminal window to the first experimental fine tunes to the script command that does the same - only clean, structured and comprehensible. A quiet but significant change. Because sometimes the world doesn't change because of what is newly invented, but because of what finally just works.
In the next article, we will describe how a language model can be trained in practice using an example script with FileMaker.
Frequently asked questions
- What exactly is LoRA and what is it used for when training language models?
LoRA stands for Low-Rank Adaptation. It is a process in which only a small proportion of the model parameters are adjusted in order to adapt a large language model to specific tasks or writing styles. Instead of changing billions of weights, additional, small matrices ("adapters") are trained. This saves memory, time and computing power. The base model remains unchanged, which makes LoRA models particularly flexible and resource-efficient.
- What is the difference between a FileMaker LoRA training and a classic PEFT training with Axolotl or kohya_ss?
In essence, not that much - both use the same mathematical idea. The difference lies in the environment. PEFT training is carried out in open frameworks with many adjusting screws, usually via Python libraries. FileMaker, on the other hand, integrates the process into its AI Model Server. The training runs locally via MLX on Apple Silicon systems and is controlled via script. The focus here is on stability and integration rather than freedom of research.
- What is the AI Model Server in FileMaker 2025?
The AI Model Server is a local component that provides, trains and executes text models - entirely on Apple Silicon hardware. It forms the technical foundation for all AI functions in FileMaker, including text generation, embeddings and fine-tuning. This allows a company to use AI models without transferring data to external clouds.
- How does a LoRA training run in FileMaker 2025 actually work?
The user calls the new Fine-Tune Model command in a script. The input is either a table in the FileMaker database (e.g. with prompts and responses) or an external JSONL file with a chat structure. The training then starts locally via the AI Model Server. After completion, a new model with the prefix fm-mlx-... is generated, which can be used immediately in scripts or layouts.
- Which parameters can be set for FileMaker training?
FileMaker allows a few specific but decisive parameters:
- max_steps - Number of training steps
- learning_rate - Learning rate
- batch_size - Size of the training batches
- lora_layers - Number of adapter layers
This keeps the training clear without the risk of incorrect configurations.
- What are the advantages of training via FileMaker compared to traditional tools?
The biggest advantage lies in the integration. You work directly with the data that is already available in the system and save on setup, environment variables, package installations or GPU configurations. In addition, everything remains local and reproducible. This is a decisive argument for companies - data protection, traceability and simple maintenance.
- Is a FileMaker LoRA of lower quality than a PEFT LoRA?
Not fundamentally. The underlying method is identical. Differences arise due to data set size, parameter selection and evaluation. FileMaker relies on stable defaults and structured data sets, while PEFT setups offer more experimental leeway. In many cases, FileMaker even achieves more consistent results because fewer variables are prone to errors.
- Can FileMaker also be used to train larger base models, e.g. Llama 3 or Mistral?
Yes, as long as the base model is in MLX format and is supported by the AI Model Server. FileMaker is optimized for text-based models that run locally on Apple Silicon chips. However, very large models are limited by RAM and GPU capacity - models up to around 8-14 billion parameters are usually suitable.
- Can I use a model trained with FileMaker outside of FileMaker?
Currently only with restrictions. The model is available in MLX format and is intended directly for the AI Model Server. Initial conversion tools are available for exporting to other formats (e.g. GGUF, ONNX), but they are still experimental. Claris could officially support this function in the future.
- What are the hardware requirements for training in FileMaker?
A Mac with an Apple Silicon chip (M1, M2, M3 or newer) is required. The training uses the GPU and the unified memory of the chip. Intel Macs are not supported. For larger data sets, at least 16 GB of RAM is recommended, preferably 32 GB or more.
- What about data protection and security in FileMaker training?
The training takes place entirely locally. No data is transferred to third parties and no cloud API is used. For companies that work with confidential or personal data, this is a decisive advantage over external AI services.
- Can I run several models simultaneously in FileMaker?
The AI Model Server currently supports one model at a time. However, you can create any number of fine-tunes and load or unload them as required. This limitation serves the stability and predictability of the system.
- How big is the difference in training effort between FileMaker and classic LoRA?
It is considerable. While a classic PEFT setup often requires hours or days of preparation - installation, dependencies, test runs - FileMaker is ready to go in just a few minutes. The training process itself is also faster because MLX works very efficiently on Apple Silicon. This saves time and nerves, even if you lose some control.
- What types of text data are best suited for training?
Structured, dialog-like data is ideal: customer inquiries, support discussions, internal knowledge databases, FAQs or specialist texts. It is important that the data is clearly formulated and has a recognizable pattern. LoRA does not learn "content", but linguistic and contextual structures - quality beats quantity.
- How can the quality of a FileMaker LoRA model be evaluated?
Not with abstract metrics, but in practical use. You check whether the model responds consistently to internal questions, whether it uses technical terms correctly and whether the tonality corresponds to the desired style. FileMaker allows simple comparison tests, for example using scripts that send prompts to different models and save the answers.
- Is it possible to delete or overwrite a FileMaker LoRA model?
Yes, fine-tuned models can be managed, deleted or replaced in the Admin Console of the AI Model Server. As the base models remain unchanged, the risk is minimal. You can retrain at any time without losing the starting point.
- How does FileMaker compare to cloud fine-tuning with OpenAI or Anthropic?
FileMaker offers local control, while cloud services usually train on the server side and return results via API. The disadvantages of the cloud: high costs, limited data protection and no direct access to the model. FileMaker achieves the opposite - full data sovereignty, no dependence on third parties, but limited to Apple hardware.
- How stable is MLX as a platform for LoRA training?
MLX is still young, but technically mature. It was developed by Apple specifically for neural networks on M-series chips and offers amazingly high performance with low energy consumption. In conjunction with FileMaker, it looks like a solid basis for local AI applications, even if there is currently less community support than with PyTorch.
- Will FileMaker also support export to open formats in the future?
That is likely. Claris has emphasized several times in recent years that it wants to support open standards in the long term. An export to GGUF or ONNX would be the logical next step in order to integrate FileMaker-trained models into external environments (e.g. LM Studio or Ollama). This has not yet been officially announced, but is technically feasible.
- Is the switch to FileMaker LoRA worthwhile for experienced PEFT users?
That depends on the goal. If you want to do in-depth research, compare metrics or test your own architectures, it's better to stick with Axolotl or LLaMA-Factory. If, on the other hand, you need stable, repeatable training in a controlled environment - for internal assistants, technical language or process automation, for example - FileMaker is a remarkably elegant solution.

Markus Schall has been developing individual databases, interfaces and business applications based on Claris FileMaker since 1994. He is a Claris partner, FMM Award winner 2011 and developer of the ERP software gFM-Business. He is also a book author and founder of M. Schall Publishers.
