TRANSCRIPT
Anton Semenchenko
A Comparative Analysis of Desktop AUT Automation Tools
Agenda, part 1 (general)
1. Problem
2. Solutions 2016
Agenda, part 2 (tools and criteria)
1. Tools to be compared (15)
2. How / why we selected this list of tools?
3. Comparison criteria types (3)
4. Stakeholders oriented comparison criteria (7)
5. Mixed comparison criteria (7)
6. Tech stuff oriented comparison criteria (8)
7. How / why we selected these lists of criteria?
8. How to select proper criteria for your project
Agenda, part 3 (comparison analysis)
1. Mixed comparison criteria
2. Tech stuff oriented comparison criteria
3. Stakeholders oriented comparison criteria
4. Define our “standard” context
5. Summarized scores
6. How to calculate scores
7. How to use scores / presentation
8. 4 summarized tables
Agenda, part 4 (tools, “how to” and examples)
1. How to define the proper tool based on selected criteria
2. How to link information from the presentation to QA Automation metrics
3. How to link information from the presentation to Project Health Check
4. How to link information from the presentation to QA Automation ROI
5. A tiny overview of the tools
6. Tools overview structure
7. Example of tool usage structure
Agenda, part 5 (trends, science and “what’s next”)
1. Define a Trend! Is it possible ..?
2. Trend – an option
3. Why so?
4. What’s next
Problem
• There is an implicit leader for Web automation
• It’s not that simple when it comes to desktop apps
Tools to be compared
• TestComplete Desktop
• Unified Functional Testing (UFT)
• Ranorex
• Telerik Test Studio
• Zeenyx AscentialTest
• MS VS Coded UI
• CUIT
• AUTOIT
• Sikuli
• Jubula
• Robot Framework
• Winium
• WinAppDriver
• QTWebDriver
• PyWinAuto
How / why we selected this list of tools?
Comparison criteria types
1. Stakeholders oriented
2. Tech stuff oriented
3. Mixed
Stakeholders oriented comparison criteria
1. Approximate complexity of auto-test development
2. Approximate complexity of auto-test support
3. Approximate “entrance” level
4. Required technical skills level
5. Tests readability
6. How fast tests run
7. Ability to re-use "Business-Logic" layer in another technical context
Mixed comparison criteria
1. Supported platforms
2. Supported technologies
3. Licensing
4. Maturity
5. Record-Play system support
6. Standard actions pack
Tech stuff oriented comparison criteria
1. Programming languages support
2. Tools for mapping
3. Self-made architecture support
4. Data-Driven testing support
5. Test-Driven development support
6. Keyword-driven testing support
7. Behavior-Driven Development support
8. Continuous integration system support
How / why we selected these lists of criteria?
How to select proper criteria for your project
Mixed comparison criteria
Supported platforms – “the best” tool?
Supported platforms
Tool | Platforms | Mark
TestComplete Desktop | Windows | -
Unified Functional Testing | Windows | -
Ranorex | Windows | -
Telerik Test Studio | Windows | -
Zeenyx AscentialTest | Windows | -
MS VS Coded UI ; CUIT | Windows | -
AUTOIT | Windows | -
Sikuli | Windows, Unix-like | Good
Jubula | Windows, Unix-like | Good
Robot Framework | Windows, Unix-like | Good
Winium / WinAppDriver ; QTWebDriver | Windows / Windows ; cross-platform | - / - ; Good
PyWinAuto | Windows | -
Supported technologies – “the best” tool?
Supported technologies
Tool | Technologies | Mark
TestComplete Desktop | C/C++, WinForms, WPF, Java, Qt | -
Unified Functional Testing | WinForms, WPF, Java, SAP | -
Ranorex | WinForms, WPF, Java, Qt, SAP | -
Telerik Test Studio | WPF | Bad
Zeenyx AscentialTest | WinForms, WPF, Java | Bad
MS VS Coded UI ; CUIT | WinForms (partial), WPF | Bad
AUTOIT | OS level | Good
Sikuli | Image-recognition based | Good
Jubula | WinForms, WPF, Java | Bad
Robot Framework | Uses AutoIT (and co.) inside | Good
Winium / WinAppDriver ; QTWebDriver | WinForms, WPF / any ; Qt | Bad
PyWinAuto | Win32 API, WinForms (partial, Win32 API based) | Bad
Licensing – “the worst” tool?
Licensing
Tool | License | Mark
TestComplete Desktop | Paid | Bad
Unified Functional Testing | Paid | Bad
Ranorex | Paid | Bad
Telerik Test Studio | Paid | Bad
Zeenyx AscentialTest | Paid | Bad
MS VS Coded UI ; CUIT | Paid | Bad
AUTOIT | Free | -
Sikuli | Open source | Good
Jubula | Open source | Good
Robot Framework | Open source | Good
Winium / WinAppDriver ; QTWebDriver | Open source | Good
PyWinAuto | Open source | Good
Maturity – “the worst” tool?
Maturity
Tool | Maturity
TestComplete Desktop | Good
Unified Functional Testing | Good
Ranorex | Good
Telerik Test Studio | Good
Zeenyx AscentialTest | -
MS VS Coded UI ; CUIT | Good
AUTOIT | -
Sikuli | -
Jubula | -
Robot Framework | -
Winium / WinAppDriver ; QTWebDriver | Bad
PyWinAuto | -
Record-Play support – do we really need it?
Record-Play support
Tool | Record-Play | Mark
TestComplete Desktop | Yes | Good
Unified Functional Testing | Yes | Good
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | No | -
MS VS Coded UI ; CUIT | No | -
AUTOIT | No | -
Sikuli | No | -
Jubula | No | -
Robot Framework | No | -
Winium / WinAppDriver ; QTWebDriver | No | -
PyWinAuto | No | -
Standard actions pack – do we really need it?
Standard actions pack
Tool | STD actions | Mark
TestComplete Desktop | No | -
Unified Functional Testing | No | -
Ranorex | No | -
Telerik Test Studio | No | -
Zeenyx AscentialTest | Yes | Good
MS VS Coded UI ; CUIT | No | -
AUTOIT | No | -
Sikuli | Yes | Good
Jubula | Yes | Good
Robot Framework | No | -
Winium / WinAppDriver ; QTWebDriver | No | -
PyWinAuto | Yes / No (via SWAPY) | -
Tech stuff oriented comparison criteria
Programming languages – “the best” tool?
Programming languages support
Tool | Languages | Mark
TestComplete Desktop | Python, C#Script, JScript, C++Script, VBScript, DelphiScript | Good
Unified Functional Testing | VBScript | Bad
Ranorex | C#, VB.Net | -
Telerik Test Studio | C#, VB.Net | -
Zeenyx AscentialTest | Own DSL | Bad
MS VS Coded UI ; CUIT | C#, VB.Net | -
AUTOIT | Own Basic-like language | Bad
Sikuli | Jython, Java | -
Jubula | - | -
Robot Framework | Own DSL, Java, Python | -
Winium / WinAppDriver ; QTWebDriver | Java, JavaScript, PHP, Python, Ruby, C# | Good
PyWinAuto | CPython | -
Tools for mapping – “the best” tool?
Tools for mapping
Tool | Tools for mapping | Mark
TestComplete Desktop | Yes | Good
Unified Functional Testing | Yes | Good
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | Yes / No | Good
MS VS Coded UI ; CUIT | No | -
AUTOIT | No | -
Sikuli | Yes / No | -
Jubula | Yes | Good
Robot Framework | No | -
Winium / WinAppDriver ; QTWebDriver | No | -
PyWinAuto | No | -
Custom architecture – “the worst” tool?
Custom architecture
Tool | Custom architecture | Mark
TestComplete Desktop | Yes / No | -
Unified Functional Testing | Yes / No | -
Ranorex | Yes / No | -
Telerik Test Studio | Yes / No | -
Zeenyx AscentialTest | No | Bad
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes | Good
Jubula | No / Yes | -
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
DDT support – “the worst” tool?
DDT support
Tool | DDT support | Mark
TestComplete Desktop | Yes | Good
Unified Functional Testing | Yes | Good
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | Yes | Good
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes / No | Good
Jubula | Yes | Good
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
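What DDT support means in practice: one test body is fed by many data rows. Each tool above has its own mechanism (CSV, XLS, DB, data tables); below is a minimal, tool-agnostic sketch in Python with pytest, where authenticate is just a stub standing in for a call into the AUT.

    # Minimal data-driven test sketch with pytest (illustrative only).
    # In a real suite the rows would come from CSV / XLS / DB, as the
    # tools above support.
    import pytest

    def authenticate(login, password):
        # Stub standing in for the real call into the AUT (hypothetical).
        return login == "user1" and password == "password1"

    CREDENTIALS = [
        ("user1", "password1", True),   # valid account
        ("user1", "wrong", False),      # wrong password
        ("", "", False),                # empty input
    ]

    @pytest.mark.parametrize("login,password,expected", CREDENTIALS)
    def test_login(login, password, expected):
        assert authenticate(login, password) == expected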
TDD support – “the worst” tool?
TDD support
Tool | TDD | Mark
TestComplete Desktop | Yes / No | Good
Unified Functional Testing | Yes / No | Good
Ranorex | Yes / No | Good
Telerik Test Studio | Yes / No | Good
Zeenyx AscentialTest | No | Bad
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes / No | Good
Jubula | Yes | Good
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
Key-word driven – “the best” tool?
Key-word driven support
Tool | Key-word | Mark
TestComplete Desktop | No | Bad
Unified Functional Testing | No | Bad
Ranorex | Yes / No | -
Telerik Test Studio | Yes / No | -
Zeenyx AscentialTest | Yes | Good
MS VS Coded UI ; CUIT | Yes / No | -
AUTOIT | No | Bad
Sikuli | Yes / No | -
Jubula | No | Bad
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes / No | -
PyWinAuto | Yes / No | -
BDD support – “the worst” tool?
BDD support
Tool | BDD | Mark
TestComplete Desktop | No | Bad
Unified Functional Testing | No | Bad
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | No | Bad
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes | Good
Jubula | No | Bad
Robot Framework | Yes / No | -
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
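What BDD support means in practice: scenarios that stakeholders can read, glued to automation by step definitions. A minimal, tool-agnostic sketch using the Python behave library; the feature text sits in the comments and the step bodies are illustrative stubs, not a real AUT.

    # steps/login_steps.py - behave step definitions (sketch).
    # Matching feature file (Gherkin, readable by stakeholders):
    #   Feature: Login
    #     Scenario: Valid user signs in
    #       Given the application is started
    #       When the user logs in as "user1"
    #       Then the main window is shown
    from behave import given, when, then

    @given("the application is started")
    def step_start(context):
        context.logged_in = False               # stand-in for launching the AUT

    @when('the user logs in as "{name}"')
    def step_login(context, name):
        context.logged_in = (name == "user1")   # stand-in for the UI actions

    @then("the main window is shown")
    def step_main_window(context):
        assert context.logged_in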
CI support – “the worst” tool?
CI support
Tool | CI | Mark
TestComplete Desktop | Automated Build Studio | -
Unified Functional Testing | Jenkins plugin | -
Ranorex | Jenkins | -
Telerik Test Studio | Bamboo | -
Zeenyx AscentialTest | Test Execution Management | -
MS VS Coded UI ; CUIT | Any | Good
AUTOIT | - / Any | -
Sikuli | - / Any Java-compatible | -
Jubula | No | Bad
Robot Framework | Jenkins plugin | -
Winium / WinAppDriver ; QTWebDriver | Any | Good
PyWinAuto | Any | Good
Stakeholders oriented comparison criteria
Define our “standard” context
Approximate complexity of auto-test development
Tool | Development | Mark
TestComplete Desktop | ~3h | -
Unified Functional Testing | ~3h | -
Ranorex | ~2h | Good
Telerik Test Studio | ~2h | Good
Zeenyx AscentialTest | ~2h | Good
MS VS Coded UI ; CUIT | ~3h ; ~2h | - ; Good
AUTOIT | ~1h | Good
Sikuli | ~2h | Good
Jubula | ~2h | Good
Robot Framework | ~4h | -
Winium / WinAppDriver ; QTWebDriver | ~3h / 6h -> 2h | - / Bad -> Good
PyWinAuto | ~1h | Good
Approximate complexity of auto-test support (per year)
Tool | Support | Mark
TestComplete Desktop | ~3h | Bad
Unified Functional Testing | ~3h | Bad
Ranorex | ~2h | Good
Telerik Test Studio | ~2h | Good
Zeenyx AscentialTest | ~3h | Bad
MS VS Coded UI ; CUIT | ~2h ; ~1h | Good
AUTOIT | ~4h | Bad
Sikuli | ~5h | Bad
Jubula | ~2h | Good
Robot Framework | ~1h | Good
Winium / WinAppDriver ; QTWebDriver | ~2h / 10h -> 1h | Good / Bad -> Good
PyWinAuto | ~2h | Good
Approximate “entrance” level – “the best” tool?
Approximate “entrance” level
Tool | Level
TestComplete Desktop | High
Unified Functional Testing | High
Ranorex | -
Telerik Test Studio | -
Zeenyx AscentialTest | -
MS VS Coded UI ; CUIT | High
AUTOIT | Low
Sikuli | Low
Jubula | -
Robot Framework | High
Winium / WinAppDriver ; QTWebDriver | High ->
PyWinAuto | -
Required “technical skills” level – “the best” tool?
Required “technical skills” level
Tool | Level
TestComplete Desktop | -
Unified Functional Testing | -
Ranorex | -
Telerik Test Studio | -
Zeenyx AscentialTest | Low
MS VS Coded UI ; CUIT | High ; -
AUTOIT | Low
Sikuli | Low
Jubula | Low
Robot Framework | High
Winium / WinAppDriver ; QTWebDriver | High ->
PyWinAuto | Low
Test readability – “the worst” tool?
Test readability
Tool | Level
TestComplete Desktop | -
Unified Functional Testing | -
Ranorex | -
Telerik Test Studio | -
Zeenyx AscentialTest | High
MS VS Coded UI ; CUIT | -
AUTOIT | Low
Sikuli | High
Jubula | High
Robot Framework | -> High
Winium / WinAppDriver ; QTWebDriver | -> High
PyWinAuto | High
How fast tests run – “the best” tool?
How fast tests run
Tool | Level
TestComplete Desktop | Bad
Unified Functional Testing | Bad
Ranorex | -
Telerik Test Studio | -
Zeenyx AscentialTest | -
MS VS Coded UI ; CUIT | Good
AUTOIT | Good
Sikuli | Bad
Jubula | Bad
Robot Framework | Good
Winium / WinAppDriver ; QTWebDriver | Good
PyWinAuto | Good
Ability to re-use "Business-Logic" layer
Tool | BL re-use | Mark
TestComplete Desktop | No | Bad
Unified Functional Testing | No | Bad
Ranorex | Yes | Good
Telerik Test Studio | Yes | Good
Zeenyx AscentialTest | No | Bad
MS VS Coded UI ; CUIT | Yes | Good
AUTOIT | No | Bad
Sikuli | Yes | Good
Jubula | No | Bad
Robot Framework | Yes | Good
Winium / WinAppDriver ; QTWebDriver | Yes | Good
PyWinAuto | Yes | Good
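What re-using the "Business-Logic" layer in another technical context looks like: business-level actions are written against an abstract driver, so the same layer can run on top of different backends. A minimal Python sketch; the control names are placeholders.

    # Business-level actions that depend only on a WebDriver-like driver.
    class LoginScreen:
        def __init__(self, driver):
            self.driver = driver    # any object exposing find_element(...)

        def login(self, user, password):
            # "name" locators are placeholders for real control identifiers.
            self.driver.find_element("name", "User").send_keys(user)
            self.driver.find_element("name", "Password").send_keys(password)
            self.driver.find_element("name", "Login").click()

    # The same LoginScreen can now be reused with a Winium, WinAppDriver or
    # QTWebDriver session, since all of them speak the WebDriver protocol.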
Summarized scores
How to calculate scores
How to use scores
Stakeholders oriented score
Tool | Score
TestComplete Desktop | -2
Unified Functional Testing | -2
Ranorex | +3
Telerik Test Studio | +3
Zeenyx AscentialTest | +1
MS VS Coded UI ; CUIT | +1
AUTOIT | +1
Sikuli | +3
Jubula | +2
Robot Framework | +2
Winium / WinAppDriver ; QTWebDriver | +2
PyWinAuto | +6
Mixed score
Tool | Score
TestComplete Desktop | +1
Unified Functional Testing | +1
Ranorex | +1
Telerik Test Studio | 0
Zeenyx AscentialTest | -1
MS VS Coded UI ; CUIT | -1
AUTOIT | +1
Sikuli | +4
Jubula | +1
Robot Framework | +2
Winium / WinAppDriver ; QTWebDriver | -2
PyWinAuto | -1
Tech stuff oriented score
Tool | Score
TestComplete Desktop | +2
Unified Functional Testing | 0
Ranorex | +4
Telerik Test Studio | +4
Zeenyx AscentialTest | -1
MS VS Coded UI ; CUIT | +4
AUTOIT | -6
Sikuli | +4
Jubula | +1
Robot Framework | +4
Winium / WinAppDriver ; QTWebDriver | +6
PyWinAuto | +5
Summarized score
Tool | Score
TestComplete Desktop | +1
Unified Functional Testing | -1
Ranorex | +8
Telerik Test Studio | +7
Zeenyx AscentialTest | -1
MS VS Coded UI ; CUIT | +4
AUTOIT | -4
Sikuli | +11
Jubula | +4
Robot Framework | +8
Winium / WinAppDriver ; QTWebDriver | +6
PyWinAuto | +10
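The summarized score is consistent with simply summing the three category scores per tool; within a category, the scheme is assumed (not stated on the slides) to be Good = +1, Bad = -1, no mark = 0. A tiny sketch of that reading:

    # Assumed scheme: Good = +1, Bad = -1, no mark = 0 per criterion;
    # the summarized score is the sum of the three category scores.
    def category_score(marks):
        return sum(+1 if m == "Good" else -1 if m == "Bad" else 0
                   for m in marks)

    # Example: PyWinAuto, taking the category scores from the tables above.
    stakeholders, mixed, tech = +6, -1, +5
    print(stakeholders + mixed + tech)   # -> 10, matching "Summarized score"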
How to define the proper tool based on selected criteria
How to
1. Link information from the presentation to QA Automation metrics
2. Link information from the presentation to Project Health Check
3. Link information from the presentation to QA Automation ROI
A tiny overview of the tools
Tools overview structure
1. Pros
2. Cons
3. What kind of project / product / problem / situation each tool could be used for
Example of tool usage structure
1. Plus several examples of each tool's usage
Example structure:
• Values:
  - Value the individual
  - Act as a team
  - Strive for excellence
  - Focus on customer
  - Act with integrity
• Prisms:
  - Technology
  - Delivery
  - Leadership
Project A
TestComplete Desktop
1. Pros
• Low entrance level
• High level of test script flexibility
• Huge knowledge base (at about the MSDN level)
• Wide choice of scripting languages that resemble common languages
2. Cons
• Very expensive license
• Very specific own scripting languages
Unified Functional Testing
1. Pros
• Low “entrance” level
• High level of test script flexibility
• Good tech support
2. Cons
• Tight coupling to other HP solutions
• Very specific own DSL
Ranorex
1. Pros
• Low “entrance” level
• Scripted tests are written in common languages (C#, VB.Net)
• Good tech support
2. Cons
• Paid license
Telerik Test Studio (Desktop)
1. Pros
• Low “entrance” level
• Great parameterization of keyword tests
• DDT support using common formats (CSV, XLS, DB)
• Converts tests to common languages (C#, VB.NET)
2. Cons
• WPF applications only
Zeenyx
1. Pros
• Supports complex logic
• Great organization of DDT
• Support for standard .Net libraries
2. Cons
• Takes time to learn
• Specific own DSL
MS VS Coded UI
1. Pros
• “Native” for Windows
• Supports a huge set of UI technologies
• Generated UI Map
• Ready-to-go infrastructure
• Good documentation and support
2. Cons
• License cost
• Relatively “low-level” API
MS VS Coded UI + CUIT
1. Pros
• The same as for MS VS Coded UI
• Elegant “high-level” API
2. Cons
• The same as for MS VS Coded UI
AutoIT
1. Pros
• Easy
• Universal
• Free
2. Cons
• No ready-to-use verification instruments
• Test = exe file
• No ready-to-use reports
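AutoIt itself is scripted in its own Basic-like language, but it also ships AutoItX, a COM/DLL interface, so the same automation can be driven from Python. A minimal sketch, assuming AutoItX is installed and registered and pywin32 is available; classic Notepad assumed.

    # Driving AutoItX from Python via COM (sketch).
    import win32com.client  # pip install pywin32

    autoit = win32com.client.Dispatch("AutoItX3.Control")  # AutoItX COM server
    autoit.Run("notepad.exe")                    # start the AUT
    autoit.WinWaitActive("Untitled - Notepad")   # wait for the main window
    autoit.Send("Hello from AutoIt")             # send keystrokes
    autoit.WinClose("Untitled - Notepad")        # close it again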
Sikuli
1. Pros
• IDE is easy to learn and use
• Standard actions pack
• Tests can be written in common languages (Java, Python)
• Works on different platforms and with any application
• Free
2. Cons
• Low test reliability
• Slow test execution
• No ability to work with text
• Tests are complicated to support
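A taste of the image-based approach: a SikuliX script is Jython, and every interaction is a screenshot match. A minimal sketch; all .png names are placeholders for images captured beforehand.

    # SikuliX (Jython) script sketch - works on any platform / technology,
    # because it only looks at pixels.
    click("start_button.png")          # click wherever this image appears
    wait("main_window.png", 10)        # wait up to 10 s for the app window
    type("some input text")            # type into the focused control
    if exists("error_dialog.png"):     # image-based verification
        print("error dialog detected")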
Jubula
1. Pros
• IDE is easy to use
• Supports working from a requirements base
• Integrated DB for storing test data and results
• Free
2. Cons
• Lacks the flexibility inherent to scripted tests
• No CI support
Robot Framework
1. Pros
• Its own simple, easy-to-read keyword-based language
• Plugins for different IDEs
• Works with different OSes
• Support for different programming languages
• Tools for creating your own libraries
• Free
2. Cons
• High entrance level
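The "tools for creating your own libraries" point in practice: a keyword library is just a Python class or module. A minimal sketch of a custom library and, in the comments, its use from a .robot file; all names are illustrative.

    # CalcLibrary.py - a custom Robot Framework keyword library (sketch).
    # Every public method becomes a keyword: add_numbers -> "Add Numbers".
    class CalcLibrary:

        def add_numbers(self, a, b):
            return int(a) + int(b)

        def result_should_be(self, actual, expected):
            if int(actual) != int(expected):
                raise AssertionError("%s != %s" % (actual, expected))

    # Used from a .robot file:
    #   *** Settings ***
    #   Library    CalcLibrary.py
    #
    #   *** Test Cases ***
    #   Addition Works
    #       ${sum}=    Add Numbers    2    3
    #       Result Should Be    ${sum}    5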
Winium
1. Pros
• Familiar syntax and API
• Supports all the languages that Selenium WebDriver supports
• Free
2. Cons
• “Immature” testing tool
• Incomplete way of locating elements
• A lack of documentation
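Because Winium speaks the WebDriver protocol, a test is ordinary Selenium client code pointed at a running Winium.Desktop server. A minimal sketch, assuming the server's default port 9999 and the Selenium 3-era Python API; the locator is a placeholder.

    # Winium sketch: plain Selenium client + a running Winium.Desktop server.
    from selenium import webdriver

    driver = webdriver.Remote(
        command_executor="http://localhost:9999",   # Winium.Desktop default
        desired_capabilities={"app": r"C:\Windows\System32\notepad.exe"},
    )
    edit = driver.find_element_by_class_name("Edit")   # locate the text area
    edit.send_keys("Hello from Winium")
    driver.quit()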
WinAppDriver
1. Pros
• Familiar syntax and API
• “Native” for Windows
• Free
2. Cons
• “Immature” testing tool
• Complicated (in special-case usage)
• A lack of documentation
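WinAppDriver is Appium-compatible and by default listens on http://127.0.0.1:4723. A minimal sketch using the Appium Python client (pre-2.0, desired-capabilities style assumed), with the server already running; the locator is a placeholder.

    # WinAppDriver sketch via the Appium Python client (older API style).
    from appium import webdriver  # pip install Appium-Python-Client (<2.0)

    caps = {
        "app": r"C:\Windows\System32\notepad.exe",
        "platformName": "Windows",
        "deviceName": "WindowsPC",
    }
    driver = webdriver.Remote("http://127.0.0.1:4723", caps)
    driver.find_element_by_class_name("Edit").send_keys("Hello from WinAppDriver")
    driver.quit()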
QTWebDriver
1. Pros
• Familiar syntax and API
• Qt-application oriented / “native” (unique tool)
• Free
2. Cons
• “Immature” testing tool
• Complicated (in special-case usage)
• A lack of documentation
PyWinAuto
1. Pros
• Extremely simple to use
• Easy to support
• Free
2. Cons
• Does not support all popular UI technologies
• CPython only
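The "extremely simple to use" point in practice: pywinauto resolves windows and controls by "best match" attribute access. A minimal sketch, assuming classic Windows Notepad:

    # pip install pywinauto
    from pywinauto.application import Application

    app = Application(backend="win32").start("notepad.exe")  # launch the AUT
    dlg = app.UntitledNotepad                 # "best match" window lookup
    dlg.Edit.type_keys("Hello from PyWinAuto", with_spaces=True)
    dlg.menu_select("Help->About Notepad")    # drive the application menu
    app.AboutNotepad.OK.click()               # close the About dialog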
Define a Trend! Is it possible ..?
Trend
1. There is a potential leader for Desktop Automation
Why so?
“Scientific” technical proof of the Trend
• Hegel’s dialectics
• The mathematical apparatus of bifurcations (Bifurcation Theory)
• Sedov’s law of hierarchical compensation
• The Panov-Snooks vertical
• Big History
Why so?
Non-technical scientific proof of the Trend
• Peter Drucker, “Management Challenges for the 21st Century”
Note: it’s a topic for a whole big conversation of its own, and I’m sure we’re going to get back to it, but not today…
How to
1. Use this presentation on different project phases
2. Use this presentation based on main project roles
What’s next (just a possible way)
• Shu
1. Use the Presentation: please follow the recommendations
a) “How to select proper criteria for your project”
b) “How to define the proper tool based on selected criteria”
c) “How to link information from the presentation to QA Automation metrics”
d) “How to link information from the presentation to Project Health Check”
e) “How to link information from the presentation to QA Automation ROI”
f) “How to use this presentation on different project phases”
g) “How to use this presentation based on main project roles”
What’s next
• Ha
1. Update the set of criteria
2. Update the set of tools
3. Update the Presentation
4. Read the “scientific” proof of the Trend
What’s next
• Ri
1. Re-read the “scientific” proof of the Trend
2. Update the set of criteria
3. Update the set of tools
4. Update the Presentation
5. Predict the “Trend”
6. Manage the “Trend”
Next iteration
• Move from static (Presentation) to dynamic (Application)
• For example, “https://telescope.epam.com”
CONTACT [email protected]
semenchenko_anton_v
https://www.linkedin.com/in/anton-semenchenko-612a926b
https://www.facebook.com/semenchenko.anton.v
https://twitter.com/comaqa
Thanks for your attention
Anton Semenchenko
DPI.Solutions
EPAM Systems
www.comaqa.by
www.corehard.by