Proposal: use NVDA Add-ons organization repo to host an enhanced version of get.php but in JSON format


 

Hi all,

After thinking about Alberto’s suggestion to include NVDA compatibility checks when downloading add-ons via Add-on Updater, I figured it would be helpful to “upgrade” get.php, found in the add-on files repo, using a different format: JSON. In short, I propose that a dedicated JSON file be created to house the information found in get.php, enhanced with metadata from the proposed add-ons store (see Noelia’s earlier message about the metadata and store proposal). As this builds on a proposed idea, this proposal itself should be viewed as a temporary solution until the add-ons store is developed.

The basic infrastructure is the NVDA Add-ons organization GitHub Pages repo (https://github.com/nvdaaddons/nvdaaddons.github.io, which powers the nvdaaddons.github.io website). Based on notes from Stack Overflow, you can access JSON files stored in GitHub repos as though you were using an API (you can’t host PHP files because they are dynamic scripts, whereas JSON (JavaScript Object Notation) is a static data format). To isolate the contents of the current website from add-on metadata, a separate folder should be created to house one or more JSON files.

The format of the JSON file should be based on add-ons store metadata proposal with slight modifications:

  • At least two dictionaries must be created, one to record active add-ons and another to list legacy add-ons.
  • Each dictionary will be composed of further dictionaries, one per add-on, with the key being the add-on’s internal identifier (the “name” field in the manifest).
  • For simplicity, the following add-on manifest fields should be present: summary, author, minimum NVDA version, last tested NVDA version. It should also record the update key as recorded in get.php. Optional fields can include the minimum Windows version required (major.minor.build) and the SHA-256 value of the latest version’s installer.
  • In the legacy add-ons dictionary, each add-on must record the manifest fields above, plus a reason for legacy status, such as features being included in NVDA itself or a declaration from the add-on author.

 

For example, Windows 10 App Essentials can be represented this way:

Level 0: “active” (a dictionary)

Level 1 (add-on identifier): “wintenApps” (case-sensitive)

Level 2: manifest dictionary consisting of:

“summary”: “Windows 10 App Essentials”,

“author”: “Joseph Lee joseph.lee22590@...”,

“minimumNVDAVersion”: “2020.4”, (or perhaps a tuple consisting of year, major, minor)

“lastTestedNVDAVersion”: “2021.1”, (same considerations as above)

“updateKey”: “w10” (case-sensitive)

 

On the other hand, Screen Curtain can be represented as follows:

Level 0: “legacy” (another dictionary)

Level 1: “screenCurtain”

Level 2: add-on manifest fields and:

“legacyReason”: “Features included in NVDA 2019.3”
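Putting the two examples together, the whole file might look like this. A minimal sketch in Python: the field names follow the proposal above, while the Screen Curtain version values are placeholders made up for illustration.

```python
import json

# Sketch of the proposed two-dictionary structure ("active" and "legacy").
# Field names follow the proposal; Screen Curtain version values are
# placeholders, not real manifest data.
metadata = {
    "active": {
        "wintenApps": {
            "summary": "Windows 10 App Essentials",
            "author": "Joseph Lee",
            "minimumNVDAVersion": "2020.4",
            "lastTestedNVDAVersion": "2021.1",
            "updateKey": "w10",
        },
    },
    "legacy": {
        "screenCurtain": {
            "summary": "Screen Curtain",
            "author": "Joseph Lee",
            "minimumNVDAVersion": "2018.4",
            "lastTestedNVDAVersion": "2019.2",
            "legacyReason": "Features included in NVDA 2019.3",
        },
    },
}

# Serialize with indentation so the hosted file stays human-readable
# and easy to diff in pull requests.
print(json.dumps(metadata, indent=2))
```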

 

The biggest benefit of storing add-on manifest data like this online is that it helps not only Add-on Updater: any human or program wishing to look up compatibility information can do so without downloading add-ons just to read their manifests, or visiting the community add-ons website and selecting individual add-ons. The biggest beneficiary is Add-on Updater, as it simplifies the add-on significantly and allows extra checks to be performed when parsing add-on installer file names, such as excluding updates not compatible with the current NVDA release. This last point matters because excluding updates incompatible with the current NVDA API version saves bandwidth (no need to download an add-on just to learn that it isn’t compatible).
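The compatibility check described above can be sketched in a few lines. This is only an illustration: the function names are mine, and NVDA’s real compatibility logic also accounts for the oldest API version the running release still supports.

```python
def parse_version(version: str) -> tuple:
    # "2020.4" -> (2020, 4); tuples compare element by element.
    return tuple(int(part) for part in version.split("."))

def is_compatible(entry: dict, current: str, back_compat_to: str) -> bool:
    # An add-on update is worth downloading only if it runs on an NVDA at
    # least as old as `current` and was tested against an API version no
    # older than `back_compat_to`; anything else is skipped before
    # download, saving the bandwidth mentioned above.
    minimum = parse_version(entry["minimumNVDAVersion"])
    last_tested = parse_version(entry["lastTestedNVDAVersion"])
    return (minimum <= parse_version(current)
            and last_tested >= parse_version(back_compat_to))

entry = {"minimumNVDAVersion": "2020.4", "lastTestedNVDAVersion": "2021.1"}
print(is_compatible(entry, "2021.1", "2021.1"))  # True
```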

Another benefit, mostly for Add-on Updater, is that update checks for newly approved/registered add-ons on the community add-ons website become available immediately, without waiting for an Add-on Updater release that includes the new add-on information. Several Add-on Updater releases have been dedicated to exactly that task (adding new add-on info/update keys), and releases can take weeks to appear.

The biggest drawback is maintenance. To maintain this JSON structure effectively, a person or two must monitor changes to the add-on files repo periodically (perhaps every week or two). This gets messy when add-on authors publish updates compatible with a just-released beta 1 of a backwards-incompatible NVDA version.

Another drawback is representing dynamic links. As noted by José Manuel, several add-ons are hosted behind dynamic links (Windows 10 App Essentials development snapshots and Braille Extender are good examples). I think the simplest solution is a dictionary key that lists the add-ons and update channels hosted behind dynamic links.
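One possible shape for such a key, sketched in Python (the key name, channel names, and per-channel layout are all illustrative, not part of the proposal):

```python
# Hypothetical "dynamicLinks" entry: add-ons whose installers sit behind
# dynamically generated URLs, listed per update channel, so clients know
# to resolve the link at request time rather than caching a fixed URL.
dynamic_links = {
    "wintenApps": ["dev"],                 # development snapshots
    "BrailleExtender": ["stable", "dev"],
}

def uses_dynamic_link(addon: str, channel: str) -> bool:
    return channel in dynamic_links.get(addon, [])

print(uses_dynamic_link("wintenApps", "dev"))  # True
```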

 

To make lives easier, I think the proposed JSON structure/file should be touched only if:

  1. Add-ons are added/renamed/removed/moved to legacy status/back to active status.
  2. The following manifest information is changed: summary, author, minimum NVDA version, last tested NVDA version.

 

Questions:

  1. Doesn’t this JSON structure sort of resemble the NVDA Store add-on? In a way, yes, but the overall purpose of the proposed JSON structure is to store just the bare minimum information needed for add-ons such as Add-on Updater to work when fetching add-on updates.
  2. Why not use a personal repo for all this JSON work if it benefits only one add-on? I propose hosting it on the NVDA Add-ons organization so people with write access to the GitHub Pages repo can submit changes, instead of delegating this to one individual. If it is better to host it on my own GitHub Pages repo, I will do so.
  3. How does this proposal relate to the proposed add-ons store? It could be considered a stepping stone toward the store, and as such, a temporary solution.

 

Comments are appreciated.

Cheers,

Joseph


José Manuel Delicado Alcolea
 

Hi Joseph,

This might be considered as a starting point: https://github.com/nvda-es/advancedAddonFiles

The generated JSON doesn't exactly match the proposed structure, but it can be easily adapted. This application has been running in production on the nvda.es website since I introduced it on this list a few months ago.

Here is the JSON output: https://nvda.es/files/get.php?addonslist

And here is a possible use of get.php without parameters, instead of displaying an error message: https://nvda.es/files/get.php

It would require a server with PHP instead of a static GitHub repository, but there are no further requirements. SQLite is used as the database backend, meaning the full database is stored in a single file: there is no need to install a database server, create user accounts and databases, grant privileges, etc.

Although it has been designed to give freedom for authors to update their add-ons, it may be used in more restricted environments. Feel free to read the source code and test it on your server if you wish.

Regards.


On 04/06/2021 at 12:47, Joseph Lee wrote:


--

José Manuel Delicado Alcolea
Web management and development team



Asociación Comunidad Hispanohablante de NVDA
- Tel.: (+34) 910 05 33 25 ext. 2001
- jm.delicado@...
- www.NVDA.es
- @nvda_es



Luke Davis
 

First, I will say that I am quite in favor of this. Further comments and questions below.

Joseph Lee wrote:

> The biggest drawback is maintenance. In order to effectively maintain this JSON structure, a person or two must monitor changes made to add-on files repo

I am reasonably sure I could script most or all of that. Automation is kind of my thing.

> periodically (maybe once a week or two). This gets a bit messy when add-on authors publish updates compatible with just released beta 1 of backwards incompatible NVDA version.

As long as the file can handle the same key more than once, provided that the compat versions are different, you could get around that.

Putting it in SQL terms, you would have a primary key on (key, minVer, maxVer).
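Since JSON objects cannot repeat a key, one way to honor that composite (key, minVer, maxVer) primary key is a list of records per add-on. A sketch, with made-up add-on and field values:

```python
from typing import Optional

# Each add-on maps to a list of records, one per compatibility range;
# together (add-on key, min version, max version) act as the primary key.
records = {
    "exampleAddon": [
        {"minimumNVDAVersion": "2020.4", "lastTestedNVDAVersion": "2021.1",
         "updateKey": "example-stable"},
        {"minimumNVDAVersion": "2021.2", "lastTestedNVDAVersion": "2021.2",
         "updateKey": "example-beta"},
    ],
}

def pick_record(addon: str, current: tuple) -> Optional[dict]:
    """Return the record whose compatibility range covers `current`."""
    for rec in records.get(addon, []):
        low = tuple(int(p) for p in rec["minimumNVDAVersion"].split("."))
        high = tuple(int(p) for p in rec["lastTestedNVDAVersion"].split("."))
        if low <= current <= high:
            return rec
    return None

print(pick_record("exampleAddon", (2021, 2))["updateKey"])  # example-beta
```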

Speaking of SQL, wouldn't it be easier to back this file with a DBM?

I believe I already have fully developed SQL configurations for an add-ons database, from when Derek and I were working on an add-on store system before NV Access decided to go another way.

Just a thought, not trying to complicate the idea if you already have it worked out.

> Questions:
>
> 2. Why not use a personal repo for all JSON work if it benefits only one add-on? I propose hosting this on NVDA Add-ons organization so people with write access to the GitHub pages repo can submit changes instead of delegating this to an individual.

I agree about using one of the group repos; this needs to be supported by more than one person.
My question is though: why does it need to be under a pages repo, and not its own repo under nvdaaddons?

> Comments are appreciated.

Frankly, I was never sure why something like this wasn't done before.

My initial thought, when you sent your earlier message, was that you could have get.php give you an RSS feed of basic add-on data which it already has in its array. That would at least have let Updater handle things more dynamically.
I was going to suggest that once I finished the quick update to devHelper, but you got this out first.

Luke


 

Hi,

I see.

Cheers,

Joseph

 

From: nvda-addons@nvda-addons.groups.io <nvda-addons@nvda-addons.groups.io> On Behalf Of José Manuel Delicado Alcolea via groups.io
Sent: Friday, June 4, 2021 4:12 AM
To: nvda-addons@nvda-addons.groups.io
Subject: Re: [nvda-addons] Proposal: use NVDA Add-ons organization repo to host an enhanced version of get.php but in JSON format

 



 

Hi,

It might be possible to use a dedicated repo for JSON files, except the URL can get messy (you can in fact access any file in any public repo using a URL of a specific format). Another option is to ask NV Access to host the JSON file on the add-on files repo, but then we must wait for folks to approve pull requests; hosting on that repo increases credibility and allows using the same IP address to obtain both the JSON and get.php files.

Cheers,

Joseph

-----Original Message-----
From: nvda-addons@nvda-addons.groups.io <nvda-addons@nvda-addons.groups.io>
On Behalf Of Luke Davis
Sent: Friday, June 4, 2021 4:24 AM
To: nvda-addons@nvda-addons.groups.io
Subject: Re: [nvda-addons] Proposal: use NVDA Add-ons organization repo to
host an enhanced version of get.php but in JSON format



Doug Lee
 

This is not an objection but a curiosity: What makes JSON a better choice than XML, in the opinions of those who are now defining this spec?

I don't have a strong preference either way, but I find JSON less verbose, while XML is easier to read and far easier to search with code, provided of course that you can use a library that supports XPath.

On Fri, Jun 04, 2021 at 04:43:38AM -0700, Joseph Lee wrote:



--
Doug Lee dgl@dlee.org http://www.dlee.org
Level Access doug.lee@LevelAccess.com http://www.LevelAccess.com
"While they were saying among themselves it cannot be done, it was
done." --Helen Keller


Luke Davis
 

Doug Lee wrote:

> This is not an objection but a curiosity: What makes JSON a better choice than XML, in the opinions of those who are now defining this spec?

Personally, having worked with both in Python, I have always found JSON to have a nice direct mapping to Python dictionaries.

I am assuming that Joseph intends to consume the JSON file as a whole in Add-on Updater, and work with its contents as some sort of dictionary or object tree.

Dealing with XML would just be more work, IMO.

Also, I've noticed that JSON is becoming the standard interchange for all sorts of APIs that used to prefer XML.
(Brokerages, for example.)
It's more compact.
Less human friendly, to be sure, but if you're doing computer generation and computer consumption, do you really need human friendly?
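The direct mapping is essentially one call in Python: objects become dicts and arrays become lists, with no schema or tree walking needed.

```python
import json

# JSON text parses straight into native Python structures.
payload = json.loads('{"active": {"wintenApps": {"updateKey": "w10"}}}')
print(payload["active"]["wintenApps"]["updateKey"])  # w10
```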

Luke


Luke Davis
 

(Only of interest to completists and people who are bored.)

Regarding my question about why not use a repo instead of a pages site.
For some fun (because who really needs sleep anyway), I decided to find out.

1. Storing a json file in a gist.
https://gist.github.com/XLTechie/c6bd352149f1d1d76d5c811d20a6dd08/raw/addon_info.json
Works fine, but must be retrieved raw. That URL will survive pushes of an updated file.
The result is returned as type text/plain, which may be okay, but is probably not best.

2. Storing in a repo.
Works, but must be retrieved raw.
https://github.com/XLTechie/misc/raw/master/addon_info.json
Also returns as text/plain.

3. Pages, direct URL.
https://xltechie.github.io/misc/addon_info.json
Retrieves perfectly, as an application/json.
Saves to the correct filename if you're saving instead of consuming programmatically.

4. Pages, URL without filename (save in a subdirectory as index.json).
https://xltechie.github.io/misc/addon-info/
Retrieves as a raw file, application/json.
The filename presents as index.html, at least while using wget, but I suspect this would be fine if it were in a pipe or a urllib retrieval.

Conclusion: my suggestion of using a plain repo is less ideal than using a pages site.
Gists work too, but aren't good because, as far as I know, you can't give multiple users permissions to them like you can with repos.

The best option seems to be the pages option with a direct filename.
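A client such as Add-on Updater could consume the pages-hosted file with nothing but the standard library. A sketch only: the function names are mine, and the fetch naturally requires network access.

```python
import json
import urllib.request

def parse_addon_metadata(text: str) -> dict:
    # GitHub Pages serves the file as application/json, so the body is
    # plain JSON text and needs no further processing.
    return json.loads(text)

def fetch_addon_metadata(url: str) -> dict:
    # e.g. url = "https://xltechie.github.io/misc/addon_info.json"
    with urllib.request.urlopen(url) as response:
        return parse_addon_metadata(response.read().decode("utf-8"))
```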

To me, for reasons of flexibility of permissions, and speed of deployment, the nvdaaddons organization seems best.

Luke


Joseph Lee wrote:

It might be possible to use a dedicated repo for JSON files, except the URL can get messy (you can in fact access any file on any public repo using a URL of a specific format). Another option is to ask NV Access to host the JSON file on the add-on files repo, but then we must wait for folks to approve pull requests; hosting on that repo increases credibility and allows using the same IP address to obtain both the JSON and get.php files.
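
The "URL of a specific format" Joseph alludes to can be sketched as below. The owner/repo/path values are placeholders for illustration, not a real deployment.

```python
# Sketch of building the raw-file URL for any file in a public GitHub repo.
# The argument values used below are hypothetical examples.
def raw_github_url(owner, repo, branch, path):
    """Build the raw.githubusercontent.com URL for a file in a public repo."""
    return f"https://raw.githubusercontent.com/{owner}/{repo}/{branch}/{path}"

url = raw_github_url("nvdaaddons", "nvdaaddons.github.io", "master", "addondata.json")
print(url)
```

As Luke's tests note, files fetched this way come back as text/plain, which is part of why the Pages URL is preferable.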


James Scholes
 

Definitely in favour of a GitHub-hosted, static JSON structure. If it is stored with indentation to make it more human-readable, the potential exists for add-on authors to just file a PR to add a new add-on, update something, etc.

Regards,

James Scholes





 

Hi all,
Keep the comments coming, and I'm interested in hearing from resident NV Access people. In the meantime I'll create a test JSON file on my personal GitHub Pages repo (josephsl.github.io) along with a try version of Add-on Updater that could take advantage of the proposed JSON infrastructure.
Cheers,
Joseph



Oleksandr Gryshchenko
 

Hi all,

It also seems to me that JSON is a more versatile format.
But when it comes to conciseness and user-friendliness, you could also consider, for example, the YAML format.
If the server will use Python, it has simple tools for working with YAML.

This is just my opinion, just to supplement all the previous suggestions.

Good luck!
Oleksandr


James Scholes
 

As far as I know, there are some security concerns with YAML parsing. E.g. the documentation for the popular yaml library for Python leads with this:

Warning: It is not safe to call yaml.load with any data received from an untrusted source! yaml.load is as powerful as pickle.load and so may call any Python function. Check the yaml.safe_load function though.
IMHO, the fact that the default load function is considered unsafe is vaguely ridiculous. Plus, given that Add-On Updater would need to bundle an additional dependency to parse YAML anyway, JSON is a better choice across the board.

Regards,

James Scholes






 

Hi,
A rudimentary JSON example was created:
https://josephsl.github.io/addondata.json

It is possible to use indentation, but I'm looking into resolving the following error:
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 10 column 3 (char 280)
Cheers,
Joseph






James Scholes
 

In the line:

"minimumWindowsVersion": "10.0.19041",

you have a comma after the value. JSON does not allow trailing commas the way Python dictionary syntax does. If you already have the data stored as a dict, you can just dump it to valid JSON with Python:

import json

with open('whatever.json', 'w', encoding='utf-8') as f:
    json.dump(yourDict, f, indent='\t')

That will do tab-based indents. If you want spaces, use an int value for the indent argument representing the number of spaces you want for each level.

Regards,

James Scholes
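
James's diagnosis is easy to verify with the standard json module; a minimal sketch, borrowing the field from Joseph's example file:

```python
# Minimal reproduction of the JSONDecodeError Joseph hit. JSON, unlike
# Python dict literals, rejects trailing commas: after the comma the
# parser expects another property name and fails.
import json

bad = '{"minimumWindowsVersion": "10.0.19041",}'   # trailing comma: invalid JSON
good = '{"minimumWindowsVersion": "10.0.19041"}'

try:
    json.loads(bad)
except json.JSONDecodeError as error:
    print("rejected:", error.msg)

print(json.loads(good))  # parses fine once the comma is removed
```

Round-tripping through json.dump, as James suggests, sidesteps the problem entirely because the serializer never emits a trailing comma.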


Doug Lee
 

So far I agree that JSON and XML are better contenders than YAML, though I did not know about the YAML security issue. It is interesting to me that the Python json module is considered capable of serializing YAML.

Other random thoughts:

It is unfortunate that a final trailing comma, allowed by Python dicts, is not allowed in JSON syntax. Without careful planning, this slightly complicates the idea of using the PR approach for allowing add-on authors to update their own JSON entries, because list position can determine whether one entry needs to modify a line of a neighboring one. XML does not have this problem.

But JSON looks like it will take much less work to implement in a Python environment, and NVDA is just such a thing anyway. Also, even in XML, entries by different authors can be too physically close together to update without human conflict resolution.

So I think I'm coming around to the JSON side.


--
Doug Lee dgl@dlee.org http://www.dlee.org
Level Access doug.lee@LevelAccess.com http://www.LevelAccess.com
"The most exciting phrase to hear in science, the one that heralds
new discoveries, is not 'Eureka!' ('I found it!') but rather 'hmm....
that's funny...'" -- Isaac Asimov


 

Hi,
Ah, I see. Thanks.
Cheers,
Joseph


Luke Davis
 

Joseph

I see in your try-build announcement, you mentioned people making pull requests against the JSON file.

Because JSON is so sensitive to formatting and correctness, I still think
a scripted solution for updating the file is in order.
Are you opposed to this?

Here are the steps that I believe a scripted solution should follow.

Setup:
1. Clone addonFiles.
2. Clone the pages repo containing the JSON.
3. Record current state of get.php.
4. Perform initial run as described below.

Scheduled operations:
1. Check for changes to get.php on some regular basis, perhaps daily. This will involve pulling the repo.
2. Comprehend the changes as a comparison to previous state.
3. For any updated key with URL the same, update the JSON's idea of the key.
4. For any updated add-on URL (presumably meaning new version):
A. Curl/wget the add-on package.
B. Unzip it.
C. Read its manifest.
D. If any of the variables do not match those in the JSON, update them.
E. Delete the add-on archive and tree.
5. For any new add-on, perform same steps as 4, but without comparisons.
6. For any add-on which no longer appears in get.php, remove it from the JSON, or move it to a "removed" dictionary.
7. Update the JSON file with previous changes, generate a commit message, and push the pages repo.
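
The comparison at the heart of steps 2 through 6 could be sketched roughly as follows. The state model ({addonName: (updateKey, url)}), the add-on names, and the URLs are all hypothetical, and a real script would still need the download/unzip/manifest steps for the "url" and "added" cases:

```python
# Sketch of classifying get.php changes between two scheduled runs.
# All data below is made up for illustration.
def diff_get_php(old, new):
    """Classify add-ons as added, removed, key-changed, or url-changed."""
    changes = {"added": [], "removed": [], "key": [], "url": []}
    for name, (key, url) in new.items():
        if name not in old:
            changes["added"].append(name)       # step 5: new add-on
        else:
            old_key, old_url = old[name]
            if url != old_url:
                changes["url"].append(name)     # step 4: new version; fetch and read manifest
            elif key != old_key:
                changes["key"].append(name)     # step 3: same file, update key only
    for name in old:
        if name not in new:
            changes["removed"].append(name)     # step 6: move to a "removed" dictionary
    return changes

old = {"clipContentsDesigner": ("ccd", "https://example.com/ccd-1.0.nvda-addon")}
new = {"clipContentsDesigner": ("ccd", "https://example.com/ccd-1.1.nvda-addon"),
       "placeMarkers": ("pm", "https://example.com/pm-1.0.nvda-addon")}
print(diff_get_php(old, new))
```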

If you are open to this, I will need write access to the repo in question.

I can deploy it on one of my corporate servers for now, and we can eventually move it wherever.

I would probably write this as a bash script for now, but eventually it should be ported to either Python or PowerShell. It might even be possible to make a GitHub Action out of it, although I haven't looked into how those differ from AppVeyor-type arrangements.

Future goals:
1. Move to NV Access server.
2. Switch to an NV Access username (nvaccessauto?).
3. Use a webhook to trigger check runs on PR merge.

Luke


 

Hi,
Excellent, and making sure the script reacts to add-on files pull requests ensures stable releases are compared against (not dev versions yet). Eventually a front-facing form should be developed to automate all this based on author input, which is closest to what we have in terms of the add-ons store idea envisioned by the community and NV Access last year. Sometimes I wish I could stay a little while longer to see the fruits of the add-ons store proposal, but at least we have talented folks who can make it all happen; such is the beauty of seeing things take shape before a leader steps down. Thank you from the bottom of my heart.
Cheers,
Joseph



James Scholes
 

I do agree that hand-editing JSON isn't a task for everyone. However, I'm not sure I fully comprehend the reasoning behind this server-side approach.

The problem statement: hand-editing a JSON file could be difficult/error-prone. Why not just provide a script in the repo itself to add/modify an add-on to/in the JSON instead?

I.e. I've written an add-on, and want it listed on the website/in add-on updater. I pull down a repo, create a new branch, and run a Python script. It asks me for all of the information required, and updates the JSON. Then I file a PR.
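
The core of such a repo-local helper might look like the sketch below. The field names follow the metadata proposal from earlier in the thread; the interactive prompts are replaced by a plain dict so the merge logic stands alone, and the add-on name and values are hypothetical.

```python
# Sketch of a helper an author could run before filing a PR: validate
# the required metadata fields and merge them into the JSON structure.
# Field names follow the proposal; everything else here is made up.
import json

REQUIRED = ("summary", "author", "minimumNVDAVersion", "lastTestedNVDAVersion", "key")

def add_addon(data, name, meta):
    """Validate meta and store it under data['active'][name]."""
    missing = [field for field in REQUIRED if field not in meta]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    data.setdefault("active", {})[name] = meta
    return data

data = {"active": {}}
add_addon(data, "exampleAddon", {
    "summary": "Example add-on", "author": "Someone",
    "minimumNVDAVersion": "2019.3", "lastTestedNVDAVersion": "2021.1",
    "key": "example",
})
print(json.dumps(data, indent="\t"))  # guaranteed-valid JSON, ready for a PR
```

Because the script serializes with json.dumps, the hand-editing pitfalls (trailing commas, single quotes) never arise.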

Server administration is difficult, and in a community like this we should operate on the principle of least privilege. I don't like the idea of the necessary tokens for repo write access lying around on some server that neither the community as a whole nor NV Access controls. There is a point to be made that filing a PR in the way I've described may also be too difficult for some people, but we are talking about add-on development here, so we should set expectations accordingly.

Sorry if I've misunderstood anything here. I just think we're overcomplicating this and honestly would have less trust in add-on updater if the JSON was being modified by some community member's private server without even the oversight of a PR.

Regards,

James Scholes






Luke Davis
 

James

First, I apologize for the length of this. You raised a number of compact points that I think deserve addressing.

Here is my thinking.

Regarding your point that it is insecure, unsafe, etc., to have the JSON modified by some community member's private server without the oversight of a PR:

While I agree that hosting this on a non-NV Access server is less than ideal, I reiterate my original point that this should be moved to an NV Access server at first opportunity.
However, as a proof of concept, and a work-out-the-bugs type of solution, I see no problem with it.

Keep in mind that this is mainly, if not exclusively, a support mechanism for Joseph's add-on.
There is nothing in any community guidelines I have ever encountered that suggests members can't use their own or someone else's servers to maintain back-end infrastructure, APIs, etc. for their add-ons.
The idea is for this to be completely under Joseph's control, as effectively an add-on infrastructure mechanism.


You said "I don't like the idea of the necessary tokens for repo write access lying around on some server that the community as a whole doesn't control".
I will just point out here that on GitHub, users are given access to repos. If I, or Joseph, or anyone else chooses to store their SSH key on a private server, this is no less secure than any other usage of the key.
In fact, it's usually more secure: corporate machines are usually quite a bit better defended than users' home machines, and those keys already exist on thousands of those.

That said, if security is considered a disproportionate concern here, it is possible for the script to open a PR against the repo where the JSON is stored. That, of course, requires more human involvement and a less efficient update process.

I will further point out that Joseph could have chosen to have this JSON generated behind the scenes from the start, and have had Add-on Updater draw the JSON file from a private webserver instead of GitHub. Had he done that, I doubt we would ever have had this discussion or concern.

As an aside, having it updated by an open source serverside script, with an open source JSON file, seems like a nice amount of transparency to me.


Regarding your statement that server administration is difficult: that is such a broad statement that I find it difficult to formulate a response. Of course server administration can be difficult, but what is the point? Those of us who maintain servers for a living deal with that difficulty on a daily basis, and the internet hasn't come to a screeching halt yet. :)
If you mean that server administration is difficult for whoever maintains the JSON maintenance scripts that I propose: fortunately, whether hosted on a server of mine, a server of Joseph's, or a server belonging to NV Access, server administration is not the responsibility of the person maintaining the scripts.


To your point that people should just clone the JSON repo, run a script, and do a PR: this adds yet another (a third, by my count) set of steps and PRs that add-on authors will have to do to update their add-ons.
We are increasingly making this process more complex and error-prone, and I for one consider it a sad direction. So much of this could be more automated as it is.
We have limited human capital to spend on add-on infrastructure and maintenance, and we should be automating away any tasks that do not absolutely require human intervention.

As it stands, including your proposed workflow, we now have:

Phase 0:
0. All of your own workflow steps necessary to update your add-on in GitHub or the software distribution platform of your choice.
Phase 1:
1. Clone addonFiles and keep it up to date.
2. Branch addonFiles.
3. Update get.php with your add-on's new information.
4. Generate a pull request to addonFiles, explaining your updates.
5. Wait for first approval.
Phase 2:
6. Clone and keep updated, the repo with the JSON.
7. Wait for the merge in addonFiles. We must wait here, because if the JSON is updated before the merge in addonFiles, Updater and the website will be desynchronized.
8. Once the merge in addonFiles is done (likely weeks), create a branch for the new JSON update.
9. Update the JSON file, either manually or by a script.
10. Issue a pull request to the repo containing the JSON.
11. Wait for another human to take time to review and apply the PR, which may require merge conflict resolution, JSON fix-up, etc.
Phase 3:
12. Update the stable branch of the add-on, so translators can pick it up.
13. Update the fork of the add-on in the nvdaaddons repo if permissions are given, or get someone who can do it to do it, so it can be picked up for translations.
14. Other steps necessary for translations (it's been a long time since I've done that).

(I note that the translations workflow could also be streamlined quite a bit; the forks to nvdaaddons are very unnecessary.)

Steps 6 through 10 above could be completely eliminated, saving time and human intervention for both the author and the maintainers. It also avoids possible errors, both in the JSON itself and in the various git processes that must be employed to get updates into it.

We are trying to encourage (I think, anyway) people to make their useful add-ons available on the community site. Giving them even more update steps to follow, in this case to make it easier for Updater to track them, seems counterproductive.

I can tell you without doubt that if I personally came to this process fresh, my first question would be: why in the world do users have to maintain the updater add-on's state file manually? Especially when all of the information necessary to construct it is already available from two variables in get.php and various others in the add-on's manifest? That cries out for centralized automation, and it is very foreign to my idea of common-sense design. You don't make humans do manually what a computer could do more easily and efficiently.


Lastly, regarding having an author run a script manually for each update.
I will note here that in your proposal, the script asks the author for the necessary details.
By my count, that's something like seven pieces of information, some of them numeric, that must be entered perfectly. This introduces yet another layer where human error is probable.
I'm not saying this couldn't be worked around by requiring manifest access, and having the script clone addonFiles to obtain get.php, but it still seems like unnecessary delegation and expansion of the process to me.
Plus, you then have to have people localize the script, if it is doing user interaction.

Luke



