Introducing VirtualLab, an AI Platform
VirtualLab is an innovative Artificial Intelligence (AI) platform designed to revolutionise the way we conduct science experiments through virtualisation.
Science practicals, which normally require specialised equipment and in-person participation, are essential in secondary school education. Imagine being able to perform some of the practicals we once did in the laboratory right from our laptops, in the comfort of our rooms.
This is the challenge that VirtualLab is solving.
Developed at Swansea University, where I completed my MSc in Computer Science, the project takes a novel approach. Without delving too deeply into the technical details, VirtualLab serves as a comprehensive ‘wrapper’ that orchestrates a range of open-source software packages, unifying them into a single, cohesive ecosystem.
Essentially, VirtualLab does not operate in isolation; it leverages multiple projects that solve smaller components of the bigger problem of making virtual experimentation possible.
For instance, one of the smaller projects used was SALOME, which generates the ‘mesh’ version of a physical object. A mesh is a digital representation of the geometry of the physical object under test, capturing its dimensions and shape.
According to the Virtual Lab website, “VirtualLab is not a simulation code itself but rather a ‘wrapper’ which joins together several different software packages into a single platform and carries out some pre/post-processing tasks to achieve this.”
All the components of VirtualLab, along with the several packages it employs for experiment simulation, are open-source.
Open-source projects are publicly accessible, allowing anyone to view, modify, or use them. Currently, VirtualLab can perform the following virtual experiments: Mechanical – Tensile Testing; Thermal – Laser Flash Analysis; and Multi-Physics – Heat by Induction to Verify Extremes.
Identified Problems that Impacted VirtualLab
VirtualLab is a project within the Computer Science department at Swansea University, initiated by my supervisor and developed by a group of postgraduate students (including me) in the department. One of the initial challenges faced was funding, which means that every resource allocated to the project must be used judiciously.
This leads to some bottlenecks in its development. The first problem revolves around how the project could best utilise the free resources that Docker offers to open-source projects like VirtualLab.
Through its sponsored open-source program, Docker provided an opportunity for platforms like VirtualLab to build and host container images, including their dependencies, which were crucial for the platform’s functionality. By abstracting the underlying infrastructure, Docker’s program offered stability, improved visibility, and essential team support for open-source projects.
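To give a flavour of what such a container image involves, here is a minimal sketch of a Docker recipe; the base image, package names, and entry point are illustrative assumptions, not VirtualLab’s actual recipe.

```dockerfile
# Hypothetical Docker recipe for a platform like VirtualLab.
# Base image and dependencies are illustrative assumptions.
FROM ubuntu:22.04

# Install the runtime dependencies the platform wraps.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Copy the platform's source into the image.
COPY . /opt/virtuallab
WORKDIR /opt/virtuallab

# Hypothetical entry point for running the platform.
CMD ["python3", "-m", "virtuallab"]
```

Once a recipe like this is built and pushed to Docker Hub, users can pull a ready-made environment instead of installing each dependency by hand.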
However, a key requirement of this program was that the Docker configuration files (or Docker Recipes) be stored in a GitHub repository. Since VirtualLab’s source code is hosted on GitLab, this created a mismatch that hindered seamless integration and posed a unique challenge: finding a technique for closing the gap between two separate platforms to support effective automated builds.
The second challenge was the lack of a solid set of software testing procedures. Without comprehensive testing measures, it was a struggle to maintain the quality and reliability of the platform during development. Feature branches, those containing the latest updates and enhancements, were the most susceptible. Due to inadequate testing procedures, developers faced delays and errors when merging changes into the main branch of the repository. This greatly hampered development and contradicted the goal of rapid progress typical of open-source software development.
Solutions, Outcomes, and Suggestions for Accelerating the AI Platform’s Development
The first problem was a tricky one, and there were a few solutions that could be deployed to solve it. One option was to transfer the entire project codebase to GitHub, which would allow the Docker Recipes to communicate directly with Docker Hub, where the project’s images are built.
However, this solution could have disrupted ongoing development with the open community and would have required extensive documentation updates – something we were not eager to undertake. Ultimately, we decided to implement Repository Mirroring, a technique that copies the contents and history of a project from one repository to another. With this setup, the VirtualLab team was able to continue development on GitLab while periodically synchronising the Docker Recipes to GitHub.
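The mirroring idea can be sketched with plain git. The two bare repositories below are local stand-ins for the GitLab (source) and GitHub (mirror) remotes; in practice GitLab offers built-in push mirroring (Settings → Repository) that does this automatically on a schedule.

```shell
# Sketch of repository mirroring with plain git; local paths stand in
# for the real GitLab and GitHub remote URLs.
set -e
rm -rf /tmp/gitlab-src.git /tmp/github-mirror.git /tmp/work
git init --bare /tmp/gitlab-src.git      # stand-in for the GitLab remote
git init --bare /tmp/github-mirror.git   # stand-in for the GitHub remote

# Commit a stand-in Docker Recipe to the source repository.
git clone /tmp/gitlab-src.git /tmp/work
cd /tmp/work
git checkout -b main
echo "FROM ubuntu:22.04" > Dockerfile
git add Dockerfile
git -c user.name=demo -c user.email=demo@example.com commit -m "Add Docker recipe"
git push origin main

# Mirror every branch and tag to the second remote.
git remote add mirror /tmp/github-mirror.git
git push --mirror mirror
```

After the `--mirror` push, the second repository holds exactly the same branches and history as the first, which is all Docker Hub needs to pick up the recipes from GitHub.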
For the second problem, we established a continuous integration pipeline using CircleCI. This pipeline configuration ensured that every pull or merge request – essentially, any attempt to add to the codebase – triggered a series of automated tests. CircleCI was configured to work with the GitLab repository.
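A pipeline of this kind can be sketched as a CircleCI configuration. The job name, container image, and test command below are illustrative assumptions, not VirtualLab’s actual configuration.

```yaml
# .circleci/config.yml – illustrative sketch, not the project's real config
version: 2.1

jobs:
  test:
    docker:
      - image: cimg/python:3.10   # assumed build environment
    # Split the test suite across containers so branches are tested quickly.
    parallelism: 2
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: pip install -r requirements.txt
      - run:
          name: Run the test suite
          command: pytest tests/

workflows:
  test-every-branch:
    jobs:
      - test   # no branch filter: every feature branch is tested on push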
A core part of this process was feature branch testing, which enabled developers to isolate specific features or fixes within their branches. This approach ensured that all submitted code could be tested independently from the rest of the codebase, significantly reducing the risk of releasing defective code. As each branch underwent a series of rigorous tests, only well-vetted modifications were incorporated into the main project, keeping the platform stable.
We also incorporated parallel development through the feature branch testing framework. This allowed multiple developers to work on separate features or bug fixes at the same time without disrupting each other’s progress. By running tests concurrently across various branches with CircleCI, the platform could efficiently manage multiple tasks without conflicts or delays. This parallel strategy not only accelerated the software development cycle but also enabled faster testing and further iteration, thus streamlining the entire workflow.
Additionally, the continuous integration process significantly improved the efficiency of code reviews. Each pull request linked to a specific feature branch made code reviews more manageable and focused. Reviewers could easily assess the effect of the requested modifications without being overwhelmed by irrelevant code. This enabled a thorough evaluation of every contribution, maintaining the high quality of the codebase and therefore reducing the likelihood of introducing errors.
Alongside these enhancements, release management also improved markedly. The implementation of automated testing and continuous integration allowed features or bug fixes to be integrated into the main codebase only after they had been validated with the appropriate tests and met the established acceptance criteria. This approach led to more predictable releases, giving the team confidence that new code would not introduce bugs. Once changes were merged, integration with Docker Hub meant that the latest version of VirtualLab could be automatically built into containers and ready for deployment.
Furthermore, we identified several areas for additional improvement. One such area was test result synchronization. If CircleCI could sync test results directly with GitLab, it would enable developers to monitor the health of the codebase more effectively without needing to leave the GitLab interface. Moreover, implementing linting would help ensure that the code adheres to a style convention, minimizing errors caused by manual mistakes.
Expanding the use of unit testing and incorporating code coverage would provide deeper insights into the software’s quality, ensuring that all parts of the code were adequately tested before being merged into the main branch.
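The suggested improvements could be sketched as additional CircleCI jobs. The tool choices here (flake8 for linting, pytest-cov for coverage) are assumptions about what a Python-based project might use, not VirtualLab’s actual setup; `store_test_results` is CircleCI’s standard mechanism for surfacing per-test results.

```yaml
# Illustrative additions to the CI config (tool choices are assumptions).
jobs:
  lint:
    docker:
      - image: cimg/python:3.10
    steps:
      - checkout
      - run:
          name: Check style conventions
          command: pip install flake8 && flake8 .

  coverage:
    docker:
      - image: cimg/python:3.10
    steps:
      - checkout
      - run:
          name: Run tests with coverage and emit JUnit results
          command: |
            pip install pytest pytest-cov
            pytest --cov=. --junitxml=test-results/junit.xml tests/
      # Surface per-test results in the CircleCI UI and to linked tooling.
      - store_test_results:
          path: test-results
```

Publishing results in JUnit format is also a common stepping stone towards syncing test outcomes back into the GitLab interface.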
My strategic implementation of continuous integration (CI) within VirtualLab greatly transformed the platform’s development process, bringing it in line with other open-source software (OSS) projects known for their reliability.
VirtualLab’s main weakness, its susceptibility to defects from community-driven pull requests, was addressed by actively verifying the platform’s stability and robustness. Under this proactive approach, all pull or merge requests to the VirtualLab project underwent extensive testing and were approved only after passing this series of tests.
This guaranteed that potential issues were identified early in the workflow, preventing defects from entering the platform’s core codebase. Consequently, the risk of software breakages from community contributions was effectively mitigated, enhancing VirtualLab’s overall robustness.