Hi,
we need a way to provide upstream library sources (i.e. the "contrib"-directory) in an environment without internet access.
In order to make this task reasonably comfortable, I am missing three features in the prepare tool:
1. Similar to "make -C ./build ...", "make prepare -C ./build" will download all upstream sources required by the corresponding build.conf file.
2. "make prepare ... --download-script download.sh" won't download anything, but instead creates a script file which can be transferred to a PC connected to the internet. There you can execute the script in order to download the sources. The download directory can then be transferred into the offline environment.
3. "make prepare ... --unpack [/path/to/download_dir]" unpacks the downloaded upstream libraries to the local contrib directory.
What do you think? Is something like this possible? Or do you have any other ideas/suggestions?
Regards, Roman
Hi Roman,
welcome to the mailing list!
> we need a way to provide upstream library sources (i.e. the "contrib"-directory) in an environment without internet access.
> In order to make this task reasonably comfortable, I am missing three features in the prepare tool:
I wonder if regular Unix tools could already save your day. How about the following workflow?
1. Clone the Genode repository on a machine with internet access:
git clone https://github.com/genodelabs/genode.git
2. Prepare all packages using Genode's ports tools:
  cd genode
  ./tool/ports/list | xargs -ixxx ./tool/ports/prepare_port xxx
As an alternative to using the output of the 'list' tool, you may maintain a file with the list of ports you actually need (a sketch follows below).
3. Archive the './contrib' directory:
tar cfz contrib.tgz contrib
4. Transfer the 'contrib.tgz' archive to your machine that lacks internet access.
5. Extract the 'contrib.tgz' archive within your genode directory.
Alternatively, you may extract the 'contrib.tgz' archive to a location of your choice and create a symlink 'genode/contrib' pointing to that location.
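For the alternative mentioned in step 2, a minimal sketch could look as follows. The file name 'ports.list' is an assumption; it is expected to hold one port name per line:

  cd genode
  # prepare only the ports named in ports.list (hypothetical file)
  xargs -I {} ./tool/ports/prepare_port {} < ports.list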
Would that work for you?
Best regards, Norman
Hi Norman
>> we need a way to provide upstream library sources (i.e. the "contrib"-directory) in an environment without internet access.
>
> I wonder if regular Unix tools could already save your day. How about the following workflow?
>
> 1. Clone the Genode repository on a machine with internet access:
>
> 2. Prepare all packages using Genode's ports tools:
>
>      cd genode
>      ./tool/ports/list | xargs -ixxx ./tool/ports/prepare_port xxx
>
> As an alternative to using the output of the 'list' tool, you may maintain a file with the list of ports you actually need.
Thanks for mentioning ./tool/ports/list. I hadn't thought of that. The workflow you suggest will most likely meet our needs.
Just out of curiosity: isn't there yet (and won't there be in the near future) a way to create such a port list with additional version information? That way, it would not be necessary to reproduce the working directory of the offline computer on the online computer in order to download the correct versions of the ports.
Regards, Roman
Hi Roman,
> Just out of curiosity: isn't there yet (and won't there be in the near future) a way to create such a port list with additional version information? That way, it would not be necessary to reproduce the working directory of the offline computer on the online computer in order to download the correct versions of the ports.
there is more to the preparation procedure than downloading archives from the right URLs. E.g., for some ports, the source code is obtained directly from their respective Git or Subversion repositories. Even for archives, the mere information about their URLs is not sufficient in the general case. On the server side, someone might replace an archive with a different version but keep the same file name. The port tool would detect such a change by comparing the SHA checksum of the downloaded archive with a known-good value (as contained in the port-description file).
All the version information (including SHA checksums of archives, or Git revisions) of the ports is contained in the port-description files along with their hash files. You can obtain a list of all those files via
  cd genode
  find -mindepth 4 -name "*.port" -or -name "*.hash"
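For illustration, a port-description file is a small makefile fragment that declares what to download and the corresponding known-good checksum. Roughly like this (version, URL, and checksum below are placeholders, not copied from any actual port file):

  LICENSE   := ZLIB
  VERSION   := 1.2.8
  DOWNLOADS := zlib.archive

  URL(zlib) := http://zlib.net/zlib-1.2.8.tar.gz
  SHA(zlib) := <known-good checksum>
  DIR(zlib) := src/lib/zlib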
The information contained in those files generally suffices for downloading the 3rd-party code. However, in addition to downloading and extracting code, the port tool also applies patches (depending on the actual port) as part of the preparation procedure. Those patches are located within the Genode source tree and have the suffix '.patch'.
Given this information, it is possible to create a tar archive with the exact subset of the Genode source tree needed to exercise the prepare-port procedure. The archive must contain the following:
* The port-description files along with their hash files (as printed by the command above)
* The port tools (located at 'genode/tool/ports')
* All patches ('find -mindepth 4 -name "*.patch"')
The following command assembles such an archive:
  cd genode
  tar cfz genode_3rd.tgz \
      `find -mindepth 4 -name "*.port" \
                        -or -name "*.hash" \
                        -or -name "*.patch"` \
      tool/ports
After transferring the resulting 'genode_3rd.tgz' archive to your online computer, you will be able to perform the preparation of all ports using the 'tool/ports/...' tools contained in the archive. The resulting 'contrib/' directory can then be transferred to your offline computer.
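For example, the steps on the online computer could look like this (directory names are assumptions, and it is assumed that the 'list' tool finds the port files within the extracted skeleton tree):

  # on the online computer (hypothetical paths)
  mkdir genode_3rd && tar xfz genode_3rd.tgz -C genode_3rd
  cd genode_3rd
  ./tool/ports/list | xargs -I {} ./tool/ports/prepare_port {}
  tar cfz contrib.tgz contrib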
That said, I guess the intention behind your question is to keep your custom code (which you refer to as "working directory") separate from the upstream Genode repository. I think the best way to achieve that would be to host your custom code in an entirely different repository and to keep the Genode source tree clean of your custom code. To integrate your code with Genode, you can create a symlink 'genode/repos/romans_code' pointing to the location of your custom repository (a sketch follows the list below). This approach has three benefits:
* It keeps your code well separated from Genode's upstream code.
* It allows you to use a revision control system of your choice to manage your code. I.e., if you prefer Mercurial over Git, you can use your preferred tool for your code.
* You will never need to transfer your private repository (or parts thereof) to the online computer in order to prepare 3rd-party software. Information always flows from the online computer to the offline computer but not vice versa.
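A minimal sketch of this layout, assuming your custom repository resides at '~/src/romans_code' (the path is made up):

  # keep the custom code outside of the Genode source tree
  ln -s ~/src/romans_code genode/repos/romans_code

To make the build system consider the repository, it must also be listed in the REPOSITORIES declaration of your build directory's etc/build.conf.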
Best regards, Norman
Hi Norman
> [...] I guess the intention behind your question is to keep your custom code (which you refer to as "working directory") separate from the upstream Genode repository. I think the best way to achieve that would be to host your custom code in an entirely different repository and to keep the Genode source tree clean of your custom code.
That's definitely the way to go! With "working directory" I was referring to the currently checked-out branch of the upstream Genode Git repository.
> [...] it is possible to create a tar archive with the exact subset of the Genode source tree needed to exercise the prepare-port procedure. The archive must contain the following:
>
> - The port-description files along with their hash files (as printed by the command above)
> - The port tools (located at 'genode/tool/ports')
> - All patches ('find -mindepth 4 -name "*.patch"')
>
> The following command assembles such an archive:
>
>   cd genode
>   tar cfz genode_3rd.tgz \
>       `find -mindepth 4 -name "*.port" \
>                         -or -name "*.hash" \
>                         -or -name "*.patch"` \
>       tool/ports
>
> After transferring the resulting 'genode_3rd.tgz' archive to your online computer, you will be able to perform the preparation of all ports using the 'tool/ports/...' tools contained in the archive. The resulting 'contrib/' directory can then be transferred to your offline computer.
Thanks very much for this elaborate explanation! I'm not yet sure whether I actually require (or want) so much flexibility to get the required ports. But anyway, it's good to know that it could be achieved.
Cheers, Roman