Personal Pools

Launch this tutorial in a Jupyter Notebook on Binder.

A Personal HTCondor Pool is an HTCondor pool that has a single owner, who is:

- The pool’s administrator.
- The only submitter allowed to submit jobs to the pool.
- The owner of all resources managed by the pool.

The HTCondor Python bindings provide a submodule, htcondor.personal, which allows you to manage personal pools from Python. Personal pools are useful for:

- Utilizing local computational resources (e.g., all of the cores on a lab server).
- Creating an isolated testing/development environment for HTCondor workflows.
- Serving as an entrypoint to other computational resources, like annexes or flocked pools (not yet implemented).

We can start a personal pool by instantiating a PersonalPool. This object represents the personal pool and lets us manage its “lifecycle”: starting it up and shutting it down. We can also use the PersonalPool to interact with the HTCondor pool once it has been started.

Each Personal Pool must have a unique “local directory”, corresponding to the HTCondor configuration parameter LOCAL_DIR. For this tutorial, we’ll put it in the current working directory so that it’s easy to find.

Advanced users can configure the personal pool using the PersonalPool constructor. See the documentation for details on the available options.
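For instance, per the documentation, the constructor accepts a config mapping whose entries become ordinary HTCondor configuration macros in the pool’s configuration. A minimal sketch (the particular macro values here are illustrative, and the commented-out call assumes the same PersonalPool import used below):

```python
from pathlib import Path

# Illustrative configuration overrides; each key is a standard HTCondor
# configuration macro (see the HTCondor manual for the full set).
overrides = {
    "NUM_CPUS": "4",               # advertise only 4 CPUs to the pool
    "NEGOTIATOR_INTERVAL": "10",   # negotiate more often than the default
}

# pool = PersonalPool(
#     local_dir=Path.cwd() / "configured-personal-condor",
#     config=overrides,
# )
```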

[1]:
import htcondor
from htcondor.personal import PersonalPool
from pathlib import Path
[2]:
pool = PersonalPool(local_dir = Path.cwd() / "personal-condor")
pool
[2]:
PersonalPool(local_dir=./personal-condor, state=INITIALIZED)

To tell the personal pool to start running, call the start() method:

[3]:
pool.start()
[3]:
PersonalPool(local_dir=./personal-condor, state=READY)

start() doesn’t return until the personal pool is READY, which means that it can accept commands (e.g., job submission).

Schedd and Collector objects for the personal pool are available as properties on the PersonalPool:

[4]:
pool.schedd
[4]:
<htcondor.htcondor.Schedd at 0x7f130c5bf8b0>
[5]:
pool.collector
[5]:
<htcondor.htcondor.Collector at 0x7f1308ca72f0>

For example, we can submit jobs using pool.schedd:

[6]:
# each job sleeps for $(ProcID) seconds (0s, 1s, ..., 9s)
sub = htcondor.Submit(
    executable = "/bin/sleep",
    arguments = "$(ProcID)s",
)

schedd = pool.schedd
with schedd.transaction() as txn:
    cluster_id = sub.queue(txn, 10)  # queue 10 jobs in a single cluster

print(f"ClusterID is {cluster_id}")
ClusterID is 1

And we can query for the state of those jobs:

[7]:
for ad in pool.schedd.query(
    constraint = f"ClusterID == {cluster_id}",
    projection = ["ClusterID", "ProcID", "JobStatus"]
):
    print(repr(ad))
[ ClusterID = 1; ProcID = 0; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 1; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 2; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 3; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 4; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 5; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 6; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 7; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 8; JobStatus = 1; ServerTime = 1606229321 ]
[ ClusterID = 1; ProcID = 9; JobStatus = 1; ServerTime = 1606229321 ]
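The JobStatus attribute is an integer code; our freshly submitted jobs are all 1, meaning Idle. A small mapping of the standard codes (as documented in the HTCondor manual) makes query output easier to read:

```python
# Standard HTCondor JobStatus codes, per the HTCondor manual.
JOB_STATUS = {
    1: "Idle",
    2: "Running",
    3: "Removed",
    4: "Completed",
    5: "Held",
    6: "Transferring Output",
    7: "Suspended",
}

print(JOB_STATUS[1])  # → Idle
```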

We can use the collector to query the state of the pool:

[8]:
# print the first 3 ads returned by the daemons in the pool
for ad in pool.collector.query()[:3]:
    print(ad)

    [
        AuthenticatedIdentity = "condor@family";
        EffectiveQuota = 0.0;
        GroupSortKey = 0.0;
        ResourcesUsed = 1;
        PriorityFactor = 1.000000000000000E+03;
        NegotiatorName = "jovyan@4726328b203e";
        Name = "<none>";
        AccumulatedUsage = 0.0;
        ConfigQuota = 0.0;
        LastHeardFrom = 1606229321;
        SubtreeQuota = 0.0;
        DaemonStartTime = 1606229311;
        LastUsageTime = 0;
        SurplusPolicy = "byquota";
        TargetType = "none";
        AuthenticationMethod = "FAMILY";
        LastUpdate = 1606229321;
        WeightedAccumulatedUsage = 0.0;
        Priority = 5.000000000000000E+02;
        MyType = "Accounting";
        IsAccountingGroup = true;
        BeginUsageTime = 0;
        AccountingGroup = "<none>";
        UpdateSequenceNumber = 6;
        DaemonLastReconfigTime = 1606229311;
        WeightedResourcesUsed = 3.200000000000000E+01;
        Requested = 0.0
    ]

    [
        UpdateSequenceNumber = 1;
        TargetType = "none";
        AuthenticationMethod = "FAMILY";
        Name = "jovyan@4726328b203e";
        AccountingGroup = "<none>";
        WeightedUnchargedTime = 0.0;
        DaemonStartTime = 1606229311;
        WeightedResourcesUsed = 3.200000000000000E+01;
        LastHeardFrom = 1606229321;
        Priority = 5.000000000000000E+02;
        LastUpdate = 1606229321;
        SubmitterLimit = 3.200000000000000E+01;
        MyType = "Accounting";
        PriorityFactor = 1.000000000000000E+03;
        IsAccountingGroup = false;
        Ceiling = -1;
        ResourcesUsed = 1;
        DaemonLastReconfigTime = 1606229311;
        AuthenticatedIdentity = "condor@family";
        NegotiatorName = "jovyan@4726328b203e";
        UnchargedTime = 0;
        SubmitterShare = 1.000000000000000E+00
    ]

    [
        UpdatesLost_Collector = 0;
        MaxJobsRunningMPI = 0;
        UpdatesInitial_Collector = 1;
        UpdatesTotal_Collector = 1;
        RecentUpdatesTotal_Collector = 1;
        ActiveQueryWorkersPeak = 1;
        PendingQueriesPeak = 0;
        MonitorSelfAge = 1;
        MyType = "Collector";
        CondorVersion = "$CondorVersion: 8.9.9 Oct 25 2020 BuildID: Debian-8.9.9-1.2 PackageID: 8.9.9-1.2 Debian-8.9.9-1.2 $";
        ActiveQueryWorkers = 1;
        MaxJobsRunningPVMD = 0;
        PendingQueries = 0;
        RecentUpdatesLostMax = 0;
        RecentForkQueriesFromCOLLECTOR = 1;
        UpdateInterval = 21600;
        DetectedMemory = 507368;
        RecentUpdatesTotal = 1;
        CurrentJobsRunningVanilla = 0;
        CurrentJobsRunningMPI = 0;
        UpdatesLost = 0;
        MachineAdsPeak = 0;
        DetectedCpus = 32;
        CurrentJobsRunningVM = 0;
        UpdatesLostMax = 0;
        StatsLastUpdateTime = 1606229311;
        CurrentJobsRunningLinda = 0;
        StatsLifetime = 0;
        ForkQueriesFromCOLLECTOR = 1;
        MonitorSelfTime = 1606229310;
        MaxJobsRunningAll = 0;
        CondorPlatform = "$CondorPlatform: X86_64-Ubuntu_20.04 $";
        RecentStatsLifetime = 0;
        MaxJobsRunningVM = 0;
        MaxJobsRunningJava = 0;
        MachineAds = 0;
        UpdatesInitial = 1;
        UpdatesTotal = 1;
        MaxJobsRunningGrid = 0;
        MaxJobsRunningPVM = 0;
        MaxJobsRunningStandard = 0;
        RecentUpdatesLost_Collector = 0;
        MaxJobsRunningUnknown = 0;
        AddressV1 = "{[ p=\"primary\"; a=\"172.17.0.2\"; port=37880; n=\"Internet\"; alias=\"4726328b203e\"; spid=\"collector\"; noUDP=true; ], [ p=\"IPv4\"; a=\"172.17.0.2\"; port=37880; n=\"Internet\"; alias=\"4726328b203e\"; spid=\"collector\"; noUDP=true; ]}";
        CurrentJobsRunningPipe = 0;
        MonitorSelfRegisteredSocketCount = 2;
        MonitorSelfImageSize = 16116;
        CurrentJobsRunningStandard = 0;
        CurrentJobsRunningScheduler = 0;
        Name = "My Pool - 127.0.0.1@4726328b203e";
        CurrentJobsRunningAll = 0;
        SubmitterAdsPeak = 0;
        RecentUpdatesInitial = 1;
        HostsTotal = 0;
        CurrentJobsRunningLocal = 0;
        UpdatesLostRatio = 0.0;
        MonitorSelfSecuritySessions = 2;
        CollectorIpAddr = "<172.17.0.2:37880?addrs=172.17.0.2-37880&alias=4726328b203e&noUDP&sock=collector>";
        HostsClaimed = 0;
        MyCurrentTime = 1606229310;
        MaxJobsRunningParallel = 0;
        MaxJobsRunningScheduler = 0;
        RunningJobs = 0;
        CurrentJobsRunningGrid = 0;
        MaxJobsRunningPipe = 0;
        MyAddress = "<172.17.0.2:37880?addrs=172.17.0.2-37880&alias=4726328b203e&noUDP&sock=collector>";
        Machine = "4726328b203e";
        MaxJobsRunningVanilla = 0;
        RecentUpdatesLost = 0;
        MaxJobsRunningLocal = 0;
        RecentUpdatesLostRatio = 0.0;
        IdleJobs = 0;
        CurrentJobsRunningPVMD = 0;
        DaemonCoreDutyCycle = 3.399809967200684E-03;
        DroppedQueries = 0;
        LastHeardFrom = 1606229311;
        SubmitterAds = 0;
        TargetType = "";
        MonitorSelfResidentSetSize = 6188;
        RecentUpdatesInitial_Collector = 1;
        CurrentJobsRunningParallel = 0;
        RecentDaemonCoreDutyCycle = 3.399809967200684E-03;
        CurrentJobsRunningJava = 0;
        MonitorSelfCPUUsage = 6.500000000000000E+01;
        HostsOwner = 0;
        MaxJobsRunningLinda = 0;
        CondorAdmin = "root@4726328b203e";
        CurrentJobsRunningPVM = 0;
        HostsUnclaimed = 0;
        CurrentJobsRunningUnknown = 0;
        RecentDroppedQueries = 0
    ]

When you’re done using the personal pool, you can stop() it:

[9]:
pool.stop()
[9]:
PersonalPool(local_dir=./personal-condor, state=STOPPED)

stop(), like start(), will not return until the personal pool has actually stopped running. The personal pool will also be stopped automatically if the PersonalPool object is garbage-collected, or when the Python interpreter exits.

To prevent the pool from being automatically stopped in these situations, call the detach() method. The corresponding attach() method can be used to “re-connect” to a detached personal pool.
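A sketch of that pattern, assuming the detach() and attach() methods described in the documentation (the calls are shown as comments because they require a running HTCondor installation):

```python
from pathlib import Path

# The local directory identifies the pool, so a detached pool can be
# re-attached later using the same path.
local_dir = Path.cwd() / "personal-condor"

# pool = PersonalPool(local_dir=local_dir).start()
# pool.detach()  # the pool keeps running even after this object goes away
#
# ...later, possibly from a different process...
#
# pool = PersonalPool.attach(local_dir=local_dir)  # reconnect to the pool
# pool.stop()                                      # now shut it down for real
```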

When working with a personal pool in a script, you may want to use it as a context manager. The pool will automatically be started when the context is entered and stopped when it is exited:

[10]:
with PersonalPool(local_dir = Path.cwd() / "another-personal-condor") as pool:  # note: no need to call start()
    print(pool.get_config_val("LOCAL_DIR"))
/home/jovyan/tutorials/another-personal-condor