1 - Codespaces
Setting up a Development Environment in GitHub Codespaces
Install the GitHub Codespaces Extension
Note: When using our DevContainer, the GitHub Codespaces extension is pre-installed.
- Start VSCode
- Go to Extensions
- Search for “GitHub Codespaces”
- Click Install
Alternatively, create a new codespace via the GitHub web interface:
Select a big enough machine type for Yocto/BitBake, e.g. 16 CPUs. You need at least 50 GB of disk space.
Building Leda in a GitHub Codespace
After successfully obtaining and connecting to a codespace, you can build Leda either with kas or manually:
Private Repositories
When using GitHub Codespaces with submodules and private repositories,
a separate tool for git authentication is required (see VSCode issue #109050), as the authentication token provided to the GitHub Codespaces virtual machine only allows access to the main repository.
Git Credential Manager:
https://aka.ms/gcm
Installation:
curl -LO https://raw.githubusercontent.com/GitCredentialManager/git-credential-manager/main/src/linux/Packaging.Linux/install-from-source.sh &&
sh ./install-from-source.sh &&
git-credential-manager-core configure
1.1 - Advanced topics
Git Authentication
For private repositories, we need to separately authenticate against the submodule repositories, as
GitHub Codespaces will only inject a token with access rights to the current repository.
- Change to the user's home directory
- Install Git Credential Manager:
curl -LO https://raw.githubusercontent.com/GitCredentialManager/git-credential-manager/main/src/linux/Packaging.Linux/install-from-source.sh &&
sh ./install-from-source.sh &&
git-credential-manager-core configure
- Configure a credential store type, e.g. git config --global credential.credentialStore plaintext
- Verify with git config --global -l; it should show git-credential-manager-core as the credential helper.
Update the submodules
Run git submodule update --recursive
See VSCode Issue #109050 for details.
Set up skopeo
Skopeo is needed to download various files during the build:
sudo mkdir -p /run/containers/1000
sudo chmod a+w /run/containers/1000
skopeo login ghcr.io --authfile ~/auth.json --username <your GitHub User>
Enter your token when asked for the password.
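To verify that the login succeeded, you can try inspecting an image you have access to; the image reference below is only a placeholder:
skopeo inspect --authfile ~/auth.json docker://ghcr.io/<org>/<image>:<tag>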
2 - GitHub Runner
Create a new GitHub Runner for this repo
Start by creating a new Azure VM:
- Ubuntu Server Latest, currently 20.04
- Size Standard D16ds v5
- The admin user should be called “runner”
Once the VM is ready:
- Stop the VM
- Go to “Disks” and resize the OS disk to 512 GB
- Start the VM again
Run the script to set up the runner
Log on to the VM as runner. Either copy scripts/PrepVMasGHRunner.sh from this repo onto the VM, or create a new script: copy the content of PrepVMasGHRunner.sh into the new file, save it, and make it executable.
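For example (prep.sh is only a suggested file name, matching the call below):
nano prep.sh     # paste the content of scripts/PrepVMasGHRunner.sh, then save
chmod +x prep.sh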
Call it with the token and the next available number; see below for how to obtain these items:
./prep.sh "ASYVOMU........DTCFCMBA" 3
In the Azure portal, go to the VM's “Networking” section and delete the rule opening port 22.
Congratulations, you are done!
How to get the token and the number to call the script
In the repo, go to “Settings” -> “Actions”, where you can see the currently provisioned runners.
Pick the next number and pass it to the script.
To get the token, press the green “New self-hosted runner” button on that page. The token is contained in the configuration command that GitHub displays.
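The registration command GitHub displays looks roughly like this (a sketch; only the value after --token is needed by the prep script):
./config.sh --url https://github.com/<org>/<repo> --token ASYVOMU........DTCFCMBA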
3 - VSCode DevContainer
Preparation
- Obtain the Docker Engine for your distribution and add your non-privileged user to the docker group (sudo usermod -aG docker $USER)
- Install Visual Studio Code
Visual Studio Code: Development Containers
- Open Visual Studio Code
- Open the Command Palette (F1) and select Clone repository in Container Volume
- Select eclipse-leda/meta-leda and the main branch
- Adapt proxy configurations if necessary (.devcontainer/proxy.sh)
For a clean remote build machine, you may want to set up a development environment on GitHub Codespaces instead.
Building Leda in a VSCode DevContainer:
After successfully setting up your DevContainer you can build Leda either with kas or manually:
Authentication
The build process requires an online connection, and you must be authenticated to access private repositories.
- Create a GitHub Personal Access Token (PAT) at https://github.com/settings/tokens and grant the read:packages permission
- Use Configure SSO and authorize your PAT for the organization
- On the build host, authenticate to ghcr.io:
skopeo login ghcr.io --authfile ~/auth.json --username <username>
and enter the PAT as the password
- You may need to create the folder where skopeo stores authentication information beforehand:
sudo mkdir -p /run/containers/1000
sudo chmod a+w /run/containers/1000
- Start the BitBake build process
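For example, using the image target from the build section below:
DISTRO=leda bitbake sdv-image-all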
4 - Building with kas/manually
After setting up your VSCode DevContainer or GitHub Codespace, you can proceed with the actual build process. Here you have two choices: either using the kas build system or setting up the build manually.
Building with kas
This is the easiest way to build Leda semi-automatically:
- Open the VSCode terminal and run:
cd /workspaces/meta-leda-fork/
kas build
- Note: you can alter the build options by modifying the .config.yaml file in the root of the repository
Building manually
You can also build Leda manually if more customization of the build process is required.
- export LEDA_WORKDIR=/workspaces/meta-leda-fork/
- cd ${LEDA_WORKDIR}
- Clone the Poky repository with the required release, e.g. kirkstone, and pull updates if necessary:
git clone git://git.yoctoproject.org/poky
cd poky
git checkout -t origin/kirkstone -b kirkstone
git config pull.rebase false
git pull
- Prepare the build environment:
source oe-init-build-env
- Dry-run a build of the Linux kernel recipe using BitBake:
bitbake --dry-run linux-yocto
- Check out the meta-layer dependencies for Leda:
cd $LEDA_WORKDIR
git clone -b kirkstone https://github.com/rauc/meta-rauc.git meta-rauc
git clone -b kirkstone https://github.com/rauc/meta-rauc-community.git meta-rauc-community
git clone -b kirkstone https://git.yoctoproject.org/meta-virtualization meta-virtualization
git clone -b kirkstone https://git.openembedded.org/meta-openembedded meta-openembedded
- Change to the poky/build directory (generated automatically by the oe-init-build-env script)
- Add all the necessary meta-layers:
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-rauc
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-rauc-community/meta-rauc-qemux86
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-openembedded/meta-oe
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-openembedded/meta-filesystems
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-openembedded/meta-python
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-openembedded/meta-networking
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-virtualization
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-leda-components
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-leda-bsp
bitbake-layers add-layer ${LEDA_WORKDIR}/meta-leda-distro
- Dry run:
DISTRO=leda bitbake --dry-run sdv-image-all
- Real build:
DISTRO=leda bitbake sdv-image-all
- You can also build one of the target recipes this way:
DISTRO=leda bitbake kanto-container-management
- Note: in this case you can set the target architecture and other build options in the build/local.conf file
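A minimal sketch of such settings in build/local.conf (example values, not Leda defaults):
# target machine/architecture
MACHINE = "qemux86-64"
# build parallelism
BB_NUMBER_THREADS = "16"
PARALLEL_MAKE = "-j 16"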
5 - Restricted Internet
Developers working in a corporate environment may face challenges when building Leda-based images, pulling SDV containers, etc.,
usually due to a restrictive corporate proxy. Thus the objective of this page is to collect helpful guides for mitigating such problems.
HTTP(S) proxy
First, you might need to configure your HTTP(S) or SOCKS proxy so that the BitBake shell uses it for do_fetch recipe tasks. By default, the http_proxy and https_proxy environment variables are part of the BB_ENV_PASSTHROUGH list and are passed directly from the current environment to BitBake. If you are still facing proxy issues during do_fetch tasks, check the Working Behind a Network Proxy page on the Yocto Project Wiki.
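For example (proxy.example.com:3128 is a placeholder for your corporate proxy):
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"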
GOPROXY
GOPROXY is a Go-specific mechanism for fetching dependencies at build time. Moreover, gomod-type BitBake recipes pull their external dependencies during the do_compile task instead of the do_fetch task, which leads to further issues. The simplest workaround is to set up a local (caching) goproxy container on the build host and make BitBake use it. The following steps assume that the build host has Docker installed and working, with access to the Docker Hub registry.
Hosting a local goproxy server
Start by setting up the goproxy container in host networking mode.
docker run -d --env HTTP_PROXY="http://<PROXY_IP>:<PROXY_PORT>" --env HTTPS_PROXY="http://<PROXY_IP>:<PROXY_PORT>" -v cacheDir:/go --network host goproxy/goproxy
NOTE: Don’t forget to substitute <PROXY_IP> and <PROXY_PORT> with the appropriate address of your HTTP(S) proxy.
This will start a local caching goproxy on port 8081, with a volume named cacheDir for caching the downloaded Go packages. The goproxy container can be configured further to provide access to private Go-package registries. For more information on its configuration, take a look at goproxyio/goproxy on GitHub.
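To check that the proxy is up and reachable, you can query the Go module proxy protocol's list endpoint for a public module (golang.org/x/text is just an example):
curl "http://127.0.0.1:8081/golang.org/x/text/@v/list"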
Using the local goproxy server for BitBake builds
Since the main objective of BitBake/kas is to facilitate reproducible builds, only certain variables from the host environment are used for the build. Go, however, looks at the GOPROXY environment variable to decide which proxy to use. That’s why you should first export the GOPROXY variable in the terminal from which you will later run the build:
export GOPROXY="http://127.0.0.1:8081"
To make BitBake use the value of the variable you just exported for the build, add it to BitBake’s “environment passthrough” list:
export BB_ENV_PASSTHROUGH_ADDITIONS="${BB_ENV_PASSTHROUGH_ADDITIONS} GOPROXY"
Kas
If you are using kas as a top-level build tool, to set the value of the GOPROXY variable for builds, all you need to do is add it to the env section of your kas config yaml. For example:
header:
  version: 12
machine: qemux86-64
env:
  GOPROXY: "http://127.0.0.1:8081"
Kas will handle exporting the variable and adding it to BitBake’s passthrough list automatically from there.
Airgapped container installation
Sometimes devices might not have internet access on first boot, so the SDV containers needed for provisioning and updating an SDV image will not be available.
Build-Time
The meta-leda layer provides an optional distro feature that pre-downloads and injects a minimal set of SDV container images into Kanto’s local container registry on first boot.
IMPORTANT: This will lead to a significant increase in image size, since all containers are downloaded as self-contained tarballs and therefore “layer reuse” is not possible.
To enable this distro feature, add the following to your local.conf:
DISTRO_FEATURES += " airgap-containers"
PREINSTALLED_CTR_IMAGES_DIR = "/path/to/container/images"
IMAGE_INSTALL += "packagegroup-sdv-airgap-containers"
If you are using the sdv-image-data image recipe, packagegroup-sdv-airgap-containers will be installed automatically when the distro feature is enabled. Therefore, all you need to add to your local.conf is:
DISTRO_FEATURES += " airgap-containers"
PREINSTALLED_CTR_IMAGES_DIR = "/data/var/containers/images"
Note: Here we have assumed that the partition where sdv-image-data is installed is mounted as /data on the rootfs.
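After first boot, you can verify that the images were injected by listing the kanto-cm containerd namespace on the device (assuming containerd's ctr tool is available there):
ctr --namespace kanto-cm images ls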
Manual
If you do not wish to use the airgap-containers distro feature, you can manually download and inject the container images into the kanto namespace with ctr.
- Start on a machine with internet access and docker/ctr installed. Pull the container image into your machine’s local registry:
ctr -n kanto-cm image pull <REGISTRY>/<IMAGE>:<TAG> --platform linux/<ARCH>
For example, if you would like to download the Kuksa Databroker container for an arm64 device, you would substitute:
<REGISTRY>/<IMAGE>:<TAG> -> ghcr.io/eclipse/kuksa.val/databroker:0.3.0
<ARCH> -> arm64
After the pull was successful, export the image as a tarball:
ctr -n kanto-cm images export <tarball_name>.tar <REGISTRY>/<IMAGE>:<TAG> --platform linux/<ARCH>
<REGISTRY>/<IMAGE>:<TAG> and <ARCH> should be the same as in the pull command, while <tarball_name> can be any name you would like.
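Putting the example values from above together, the full sequence for the Kuksa Databroker image on arm64 would be (kuksa-databroker.tar is an arbitrary file name):
ctr -n kanto-cm image pull ghcr.io/eclipse/kuksa.val/databroker:0.3.0 --platform linux/arm64
ctr -n kanto-cm images export kuksa-databroker.tar ghcr.io/eclipse/kuksa.val/databroker:0.3.0 --platform linux/arm64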
- Transfer the exported <tarball_name>.tar to a folder of your choosing on your device, e.g. /data/var/containers/images
- Obtain a terminal connection to the device and go to the directory where you transferred the container image tarball.
- Import the image into the kanto-cm registry by running:
ctr --namespace kanto-cm image import <tarball_name>.tar
Note: If you see a message from ctr that the “image might be filtered out”, it means you might have pulled an image for an architecture that does not match that of your device.
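To double-check which platform to pull for, you can inspect the device’s architecture, e.g.:
uname -m
# aarch64 -> use --platform linux/arm64; x86_64 -> use --platform linux/amd64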