Re: Next couple of questions relating to retrieving an object based on controlID, performing actions, typing in text automatically and event binding
Travis, in terms of #1: if I try using that in Notepad, purely as a test, passing as the controlID a value I have literally just pulled from the File menu, I am still getting the following error message in the log:
LookupError: No matching descendant window found
I also tried switching over from api.getForegroundObject() to api.getDesktopObject(), just in case, but it made no difference.
I also see that the controlID value is different each time, so this approach would not really be feasible as such; I would instead need to look into a form of object-tree traversal, comparing values and types, if I wanted to locate a GUI element like this.
Not the end of the world; I will work with mouse coordinates, etc., for now.
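For what it's worth, the traversal idea can be sketched in plain Python. This is only a mock (the Node class and attribute names here are my assumptions, not NVDA's), but in NVDA the same depth-first loop would start from api.getForegroundObject() and compare attributes such as role, name, or windowClassName on each child:

```python
# Hedged sketch: depth-first search of an object tree for the first
# descendant whose properties all match. Node is a stand-in class so the
# traversal logic is self-contained; in NVDA you would walk .children on
# real NVDAObjects instead.

class Node:
    def __init__(self, name, role, children=None):
        self.name = name
        self.role = role
        self.children = children or []

def find_descendant(obj, **criteria):
    """Return the first descendant matching all given attribute values."""
    for child in obj.children:
        if all(getattr(child, key, None) == value
               for key, value in criteria.items()):
            return child
        found = find_descendant(child, **criteria)
        if found is not None:
            return found
    return None

# Example tree loosely mimicking a window with a menu bar and an edit field.
root = Node("Untitled - Notepad", "window", [
    Node("Application", "menuBar", [Node("File", "menuItem")]),
    Node("Text Editor", "editableText"),
])

edit = find_descendant(root, role="editableText")
```

Matching on role plus name (or window class) should be far more stable across runs than the controlID values.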
And, in terms of #2, I was just asking for confirmation that obj.doAction() would choose a form of default activity per element, as in focusing a text field, clicking a button, etc., but this is also less relevant if I cannot figure out a way to get references to instances of the GUI elements.
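That is my understanding of it too; as a purely illustrative mock (the role names and actions below are my assumptions, not NVDA's real tables), the per-element dispatch idea looks something like:

```python
# Illustrative mock only: in NVDA itself, obj.doAction() delegates to
# whatever default action the underlying accessibility API exposes for
# that control, rather than consulting a hand-written table like this.
DEFAULT_ACTIONS = {
    "button": "press",
    "editableText": "focus",
    "checkBox": "toggle",
}

def default_action(role):
    """Return the assumed default action for a control role."""
    return DEFAULT_ACTIONS.get(role, "doDefault")

action = default_action("button")
```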
In terms of #3, I agree with you fully with regard to clipboard hijacking, so I was thinking that typing one character at a time would be more suitable.
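The per-character loop itself is simple enough to sketch with a pluggable send primitive; what that primitive would actually be is an assumption on my part (on Windows, one option might be a wrapper around SendInput with KEYEVENTF_UNICODE), so the stub below only records characters:

```python
import time

def type_text(text, send_char, delay=0.02):
    """Type text one character at a time, never touching the clipboard.

    send_char is whatever low-level keystroke primitive is available
    (assumption: e.g. a SendInput/KEYEVENTF_UNICODE wrapper on Windows).
    The small per-character delay gives the target application time to
    process each keystroke."""
    for ch in text:
        send_char(ch)
        time.sleep(delay)

# Usage with a stub that just records the characters it is given:
sent = []
type_text("hi!", sent.append, delay=0)
```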
Let me rephrase #4 somewhat: I presume there is no way to bind keystrokes to script blocks at runtime, as in, no way to manipulate the contents of __gestures at runtime? But the original question related to being able to read a form of monitoring instruction in from the text file that loads the queued command sets: in other words, to read an entry from that text file instructing me to actively notify the user if some specific element fired an event, without having hard-coded the event binding beforehand.
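For the text-file side of that, here is a minimal sketch of parsing such monitoring instructions into a rule table at load time. The line format ("monitor <eventName> <objectName> <message>") is purely my assumption for illustration:

```python
# Hedged sketch: pull "monitor" instructions out of the command-set text
# file and build an eventName -> [(objectName, message), ...] table, so
# nothing is hard-coded beforehand. The line format is an assumption.

def parse_monitor_lines(lines):
    rules = {}
    for line in lines:
        parts = line.strip().split(None, 3)
        if len(parts) == 4 and parts[0] == "monitor":
            _, event, obj_name, message = parts
            rules.setdefault(event, []).append((obj_name, message))
    return rules

rules = parse_monitor_lines([
    "monitor gainFocus OKButton The OK button got focus",
    "type hello",  # non-monitor lines are simply ignored
])
```

A global plugin could then consult the table from inside a single generic handler (e.g. event_gainFocus, assuming NVDA's usual GlobalPlugin event model) and speak the message when the named object fires, instead of binding anything per-element up front.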
Lastly, let me explain why I am working with this text-file-entries approach at the moment. The specific piece of software they are asking me to develop an add-on for is the GoldMine CRM package, but an older version, and I have no experience with it. I also don't think its general interface is all that usable out-of-the-box from an accessibility perspective, so the idea is to let the sighted training instructor work via my text file input mode to compile interaction command sets, which I can then, later on, hard-code into the actual Python code. Anyway, they are not really experienced with NVDA at all, and they will be popping round tomorrow for me to explain all sorts of things to them... 😉
The one advantage is that I have been meaning to look into working on real NVDA add-on code for quite a while, and this is both an excuse and a launching platform from my perspective, since I have been working with Python code for quite a few years now, but mainly using it to handle data transformation, command-line utilities, etc., with only a little bit of actual wx GUI experience in the past.
Thanks for all your efforts
+2782 413 4791
"...resistance is futile...but, acceptance is versatile..."
On 2021-01-13 04:21 PM, Travis Roth wrote: