Weekly Update
Let’s start.
- Status :: Waiting on the merge of the new updates and bug fixes for `new_update_upstream_version` (see p1049). Also working on the old issues i200 and i321 of the conda-forge auto-tick-bot.
- Abstract :: During the week we had some problems related to the `dump.json` process of p1027, plus some unused statements, both solved by the bug-fix PR p1049, as we can see in this CircleCI test. Because the sequential form of `update_upstream_version` took so long to complete, the respective jobs started to stack up in our CircleCI workflow. Then, once the new `versions` folder was successfully deployed to `cf-graph-countyfair`, we stopped running the new code so that the `pool` version could be merged, as explained in last week's post (a rough sketch of the idea follows).
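For context, the `pool` idea is simply to fan the per-feedstock version fetch out over an executor instead of looping over the nodes one by one. The sketch below is my own illustration, not the p1049 code: it reuses the `executor` helper from `.utils` that also shows up later in this post, and the `fetch_version` callable stands in for whatever function actually retrieves one node's new version.

```python
import logging

import networkx as nx
import tqdm

from .utils import executor

logger = logging.getLogger("conda-forge-tick.update_upstream_versions")


def update_upstream_versions_pool(gx: nx.DiGraph, fetch_version) -> dict:
    """Fetch new upstream versions in parallel instead of sequentially."""
    new_versions = {}
    with executor(kind="dask", max_workers=20) as pool:
        # submit one fetch job per node up front so the pool stays saturated
        futures = {
            pool.submit(fetch_version, node, attrs): node
            for node, attrs in gx.nodes.items()
        }
        for future, node in tqdm.tqdm(futures.items()):
            try:
                new_versions[node] = future.result()
            except Exception as e:
                # record the failure instead of letting one node stall the job
                logger.debug(f"Version fetch failed for {node}: {e}")
                new_versions[node] = False
    return new_versions
```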
After the updated `pool` version is merged and working with CircleCI, I can move on to merging some alterations to `make_graph` so that it inserts the new `versions/node.json` files into the `graph`; that part is currently also outdated with respect to the new updates:

```python
def update_nodes_with_new_versions(gx):
    """Updates every node with its new version (when available)."""
    try:
        with open("new_version.json") as file:
            ver_attrs = json.load(file)
    except FileNotFoundError as e:
        logger.debug(f"Process interrupted with exception: {e}")
        return
    # Update the graph according to ver_attrs
    for node, vertrs in ver_attrs.items():
        with gx.nodes[f"{node}"]["payload"] as attrs:
            if vertrs["bad"]:
                attrs["bad"] = vertrs.get("bad")
            elif vertrs["archived"]:
                attrs["archived"] = vertrs.get("archived")
            # Update with the new information
            attrs["new_version"] = vertrs.get("new_version")
            attrs["new_version_attempts"] = vertrs.get("new_version_attempts")
            attrs["new_version_errors"] = vertrs.get("new_version_errors")
```

So I have to insert an `open('versions/node.json')` entry to load every new version and bad object into the `graph`.
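To make that concrete, here is a minimal sketch of what the loading step could look like, assuming one `versions/<node>.json` file per feedstock with the same fields as above; the glob pattern and the helper name are my own placeholders, not the merged code.

```python
import glob
import json
import os


def load_new_versions(versions_dir: str = "versions") -> dict:
    """Collect every per-node version payload from the versions folder."""
    ver_attrs = {}
    for path in glob.glob(os.path.join(versions_dir, "*.json")):
        # versions/<node>.json -> <node>
        node = os.path.splitext(os.path.basename(path))[0]
        with open(path) as file:
            ver_attrs[node] = json.load(file)
    return ver_attrs
```

`update_nodes_with_new_versions` could then consume this dict in place of the single `new_version.json` read.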
Now, while we wait for the new alterations to be merged, I have started to work on some old issues of `cf-scripts`, such as i200. One of them requires some alterations to our migration phase, so during the next weekly meeting I will ask for some reviews, merge the alterations, and talk about the migrations and ideas for solving the issues.
- Next Steps :: Wait for the alterations to be merged and discuss the i200 and i321 issues. The i200 issue is the most interesting one and the one I want to tackle right now; I have already created a routine for it:

```python
import json
import logging
import subprocess

import networkx as nx
import tqdm

from .utils import executor, load_graph

# conda-forge logger
logger = logging.getLogger("conda-forge-tick._bad_issues_request")


def hub_create_issue(name: str, maintainer: str, bad_str: str) -> None:
    # TODO: maybe add a way to create the issue at the feedstock page,
    # not inside cf-scripts
    title = f"Bad feedstock error on Conda-forge ({name})"
    label1 = "conda-forge"
    label2 = "bad"
    body = (
        "A bad error occurred when trying to update your feedstock information, "
        "please check any possible alteration made.\n"
        "If the project was discontinued, please let us know.\n"
    )
    body += (
        f"You are receiving this error as an attempt to solve the current bad behavior "
        f"with the actual version of your feedstock; the problem raised {bad_str} as an exception "
        f"while retrieving a new version, please look for further details at…"
    )
    assignee = f"{maintainer}"
    # prepare the subprocess run command; an argument list avoids shell quoting
    command = [
        "gh", "issue", "create",
        "--title", title,
        "--body", body,
        "--label", label1,
        "--label", label2,
        "--assignee", assignee,
    ]
    try:
        # try GitHub CLI issue creation
        subprocess.run(command, check=True)
    except Exception as ee:
        logger.info(f"Issue creation failed for {name}: {ee}")


def hub(graph: nx.DiGraph) -> None:
    # Open the data file with the feedstock "bad" information;
    # bad_data is a dict mapping each node feedstock to its bad status
    with open("bad.json") as file:
        bad_data = json.load(file)
    # collect the nodes that have a bad entry and need an issue
    revision = []
    for node, node_attrs in graph.nodes.items():
        if node in bad_data:
            revision.append((node, node_attrs))
    with executor(kind="dask", max_workers=20) as pool:
        for node, node_attrs in tqdm.tqdm(revision):
            with node_attrs["payload"] as attrs:
                # TODO: this will not work yet, we need to send a new issue to
                # every maintainer, not stack them into a single assignee
                # get the maintainers list
                maintainers = attrs.get("extra", {}).get("recipe-maintainers", [])
                # get the bad occurrence for this node
                bad = bad_data[f"{node}"]
                pool.submit(hub_create_issue, node, maintainers, bad)


if __name__ == "__main__":
    # load the graph
    gx = load_graph()
    # load the feedstock info and try the issue request
    try:
        hub(gx)
    except Exception as e:
        # do not crash the whole run if the issue routine fails
        logger.error(f"Bad-issue routine failed: {e}")
```
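One note on the routine above: I build the `gh` command as an argument list and hand it straight to `subprocess.run` with `check=True`, so titles and bodies containing spaces or quotes need no shell escaping, and a failed issue creation raises instead of passing silently. The TODO still stands, though: `recipe-maintainers` is a list, so it has to be unpacked into one issue per maintainer rather than stuffed into a single `--assignee`.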