Is it possible to use machine learning or neural networks in website testing automation?
Please advise a newbie who has run into this problem.
I need to adapt my test scripts to changes in the HTML layout of the site.
How it works now: there are about 50-60 web pages, and for each page there is a C# + Selenium test script that fills out a form: locating input elements by XPath, entering values, submitting the form, getting the result, and evaluating it.
The problem is that the page markup changes periodically and the input elements move around within the page, so I have to keep updating the XPath values and constantly check that the scripts are still valid.
The question is whether the technologies mentioned above (machine learning, neural networks) could recognize elements on the pages and decide what to do with them, instead of locating input elements by hard-coded XPath.
That is, the input would be a set of filling rules and values; the script would then recognize elements by their type and name and fill them in according to the rules.
That way, if an element is renamed or moved, the script would not have to be changed.
Has anyone encountered a similar task? Is it feasible in principle?
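Roughly, I imagine something like the sketch below (the rule keys and the MatchesField helper are just placeholders I made up, not code from our project): fields are found by their own attributes rather than by a fixed XPath.

```csharp
using System;
using System.Collections.Generic;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class RuleDrivenFiller
{
    static void Main()
    {
        // Rules: a field "concept" mapped to the value it should receive.
        var rules = new Dictionary<string, string>
        {
            ["email"] = "test@example.com",
            ["name"] = "John Doe"
        };

        using var driver = new ChromeDriver();
        driver.Navigate().GoToUrl("https://example.com/form"); // placeholder URL

        // Instead of one hard-coded XPath per field, scan every input
        // and classify it by its own attributes.
        foreach (var input in driver.FindElements(By.TagName("input")))
        {
            foreach (var rule in rules)
            {
                if (MatchesField(input, rule.Key))
                {
                    input.Clear();
                    input.SendKeys(rule.Value);
                    break;
                }
            }
        }
    }

    // Heuristic "recognition": does any identifying attribute of the
    // element mention the field concept?
    static bool MatchesField(IWebElement input, string concept)
    {
        foreach (var attr in new[] { "name", "id", "placeholder", "type" })
        {
            var value = input.GetAttribute(attr);
            if (value != null &&
                value.Contains(concept, StringComparison.OrdinalIgnoreCase))
                return true;
        }
        return false;
    }
}
```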
As a tool, a neural network fits anywhere. A neural network is a meat grinder: you throw vectors in and get something out, either a classification or a new vector.
But it seems to me that the weak point of this task is the practical impossibility of unsupervised learning here.
You would still have to show the network examples and explain what they mean.
Another weak point is formalizing the input and output. What do you feed in? A classical network operates on continuous quantities. And what do you have? HTML as input? XPath as output?
Maybe a neural network is overengineering here?
Collect the properties of each object (tag attributes, class names, styles, etc.) and of its environment: all objects in the same node as the current object, the chain of all parent nodes (its XPath), and the "tree" of all nested nodes.
Do this for every object.
From the number of matching and mismatching properties and node paths, you can determine quite accurately which previous object corresponds to the current one.
That is, simply by weighting these metrics you can draw a very accurate conclusion about how a particular object has changed relative to the previous HTML code.
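A rough sketch of that weighted-matching idea, in the C# stack the asker already uses; the specific properties and weights here are illustrative assumptions, not a tuned model:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A stored "fingerprint" of an element from the old version of the page.
record Fingerprint(string Tag, string[] Classes, string Id, string[] ParentChain);

static class ElementMatcher
{
    // Matching properties add weight, mismatches subtract it.
    static double Similarity(Fingerprint old, Fingerprint cand)
    {
        double score = 0;
        score += old.Tag == cand.Tag ? 3 : -3;                    // tag name
        score += old.Id != null && old.Id == cand.Id ? 5 : 0;     // stable id, if any
        score += 2 * old.Classes.Intersect(cand.Classes).Count(); // shared classes
        score -= old.Classes.Except(cand.Classes).Count();        // lost classes
        // Overlap of the parent chain, walking upward from the element.
        int depth = Math.Min(old.ParentChain.Length, cand.ParentChain.Length);
        for (int i = 0; i < depth; i++)
            score += old.ParentChain[i] == cand.ParentChain[i] ? 1 : -1;
        return score;
    }

    // The element in the changed DOM that best matches the old fingerprint.
    public static Fingerprint FindBestMatch(Fingerprint old, IEnumerable<Fingerprint> candidates)
        => candidates.OrderByDescending(c => Similarity(old, c)).First();
}
```

Run this over every tracked element after a layout change: the highest-scoring candidate is the likely new location of the old element, and a low best score can flag the element for manual review.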
You can do anything, even check whether the interface has shifted in new browser versions (though that is masochism), but it is easier to learn to write XPath properly so that this does not happen.
"search for input elements by hard-coded XPath"
What prevents you from marking the key containers with ids and classes? That would also shorten your nightmarish selectors. P.S. For testing, you can write values you know into the placeholders and then search for them in the document, after initializing your XPath.
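For example (a sketch with made-up ids and placeholder text):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

using var driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://example.com/form"); // placeholder URL

// A short, stable selector on an id you control, instead of a deep
// positional XPath.
var email = driver.FindElement(By.CssSelector("#signup-email"));
email.SendKeys("test@example.com");

// Locating a field by a known placeholder value written into the markup.
var name = driver.FindElement(By.XPath("//input[@placeholder='Your name']"));
name.SendKeys("John Doe");
```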