Setup
Install uv. Clone this repository with submodules:
git clone --recurse-submodules <repo-path>
Install elan and lake: see the Lean Manual. Then execute
cd <repo-path>
uv sync
uv build builds a wheel of Pantograph in dist/, which can then be installed. For example, a downstream project could include the following line in its pyproject.toml:
pantograph = { file = "path/to/wheel/dist/pantograph-0.3.0-cp312-cp312-manylinux_2_40_x86_64.whl" }
All interactions with Lean pass through the Server class. Create an instance of Pantograph using
from pantograph import Server
server = Server()
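As a quick sanity check, a minimal proof interaction might look like the sketch below. It assumes the synchronous goal_start and goal_tactic methods, where goal_start creates a goal state from an expression and goal_tactic applies a tactic to one of its goals; consult the Server class for the exact signatures.
from pantograph import Server

server = Server()
# Sketch only: the method names and signatures used here are assumptions.
state0 = server.goal_start("forall (p : Prop), p -> p")
state1 = server.goal_tactic(state0, goal_id=0, tactic="intro p")
print(state1)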
Lean Dependencies
The server created from Server() is sufficient for basic theorem proving tasks that rely only on Lean’s Init library. Users who need non-builtin libraries such as Aesop or Mathlib4 should pass a list of module names via the imports parameter, e.g. imports=["Mathlib"]. Due to inherent restrictions in Lean, a module that was not imported when the server started cannot be imported afterwards; attempting to do so triggers initializer exceptions. It may become possible to circumvent this if Lean relaxes this constraint.
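For example, a server intended for Mathlib work might be created as in the sketch below. The project_path argument points at a Lean project that provides Mathlib (see the next paragraph); the path shown is a placeholder.
from pantograph import Server

# All imports must be listed when the server is created; modules cannot be
# imported after startup. The project path below is a placeholder.
server = Server(imports=["Mathlib"], project_path="./path-to-lean-repo/")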
To use external Lean dependencies such as Mathlib4, Pantograph relies on an existing Lean repository. Instructions for creating this repository can be found here.
After creating this initial Lean repository, execute in the repository
lake build
to build all files in the repository. This step is necessary after any file in the repository is modified.
Then, feed the repository’s path to the server
server = Server(project_path="./path-to-lean-repo/")
For a complete example, see examples/.
Server Parameters
The server constructor accepts some additional parameters; a usage sketch follows the list.
- core_options: These options are passed to Lean’s kernel. For example, set_option pp.all true in Lean corresponds to passing pp.all=true to core_options.
- options: These options are given to Pantograph itself. See below.
- timeout: This controls the maximum wait time for the server instance. If the server instance does not respond within this limit, it is terminated. In some cases it is necessary to increase this if loading a Lean project takes too long.
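For illustration, these parameters might be combined as in the following sketch. It assumes core_options accepts a list of option=value strings; the values shown are only examples.
from pantograph import Server

server = Server(
    core_options=["pp.all=true"],     # passed to Lean's kernel, like `set_option pp.all true`
    options={"automaticMode": True},  # Pantograph's own options (see Options below)
    timeout=120,                      # maximum wait time before the server instance is terminated
)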
A special note about running in Jupyter: Use the asynchronous version of each function.
from pantograph import Server

server = await Server.create()
# `sketch` is a string of Lean source containing `sorry` placeholders.
unit, = await server.load_sorry_async(sketch)
print(unit.goal_state)
Options
- automaticMode: Set to false to disable automatic goal continuation.
- timeout: Set to a positive integer to set the tactic execution timeout.
- printDependentMVars: Set to true to explicitly store goal inter-dependencies.
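As a sketch, assuming the options parameter accepts a plain dictionary with these keys:
from pantograph import Server

server = Server(options={
    "automaticMode": False,       # disable automatic goal continuation
    "timeout": 30,                # tactic execution timeout
    "printDependentMVars": True,  # explicitly store goal inter-dependencies
})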