Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering 2020
DOI: 10.1145/3377811.3380404
Interpreting cloud computer vision pain-points

Cited by 27 publications (30 citation statements); references 29 publications.
“…Finally, we concluded that understanding how Software Engineering development practices, and the adoption of a Machine Learning workflow in accordance with those practices and with the Software Engineering life-cycle, is of vital importance for the evolution of Machine Learning/Artificial Intelligence and the continuing development of its applications (especially at large scale), even if further research on the topic is needed.

- Provenance: provenance tags for data and models [1], [2], [13]
- Documentation and Versioning: extracting metadata from repositories is difficult; a catalog of ML models to support design and maintenance [15]
- Non-functional Requirements: security, unassured reliability, and lacking transparency; identifying parts of ISO 26262 to be adapted to ML; an approach based on dependability assurances [30], [31]
- Design and Implementation: APIs look and feel like conventional APIs but abstract away data-driven behavior; a catalog of design patterns for ML development; information to support the documentation and design of APIs [6], [32]
- Evaluation: testing the interpretability, privacy, or efficiency of ML; proposal of new test semantics; tests based on quality scores [3], [33], [34]
- Deployment and Maintenance: lack of support to adapt based on feedback; an approach to support adaptation based on quality gates [34]
- Software Capability Maturity Model (CMM)…”
Section: Discussion
confidence: 99%
“…Recently, due to the rapid development of big data and machine learning, researchers have also started to study the problems and challenges that developers encounter when developing these two types of applications, based on data from Stack Overflow [5,7,51]. For example, studies of deep learning bug characteristics [23], TensorFlow program bugs [52], deep learning deployment [14,19], and cloud computer vision [16] have emerged. These studies identify or characterize challenges in software development based on developers' discussions on Stack Overflow.…”
Section: Related Work
confidence: 99%
“…This means that their large training datasets may continuously update the prediction classifiers making the inferences, resulting in both probabilistic and non-deterministic outcomes [11,17]. Critically for software engineers using these services, such non-deterministic aspects have not been sufficiently documented in the service's API documentation, which has been shown to confuse developers [9]. A strategy to combat such service changes, which we often observe in traditional software engineering practice, is for such services to be versioned upon substantial change.…”
Section: 'Intelligent' Vs 'Traditional' Web Services
confidence: 99%
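The drift described above can be guarded against on the client side even when the provider does not version its service. A minimal sketch, assuming a hypothetical labelling endpoint (`call_vision_service` here stands in for any real cloud vision API and returns canned data): snapshot the labels the service returns for a fixed reference image, then compare later responses against that baseline to detect silent behavioural change.

```python
def call_vision_service(image_id: str) -> set[str]:
    # Hypothetical stand-in for a real cloud vision API call;
    # returns the set of labels the service predicts for the image.
    fake_responses = {"dog.jpg": {"dog", "animal", "pet"}}
    return fake_responses[image_id]

def detect_drift(image_id: str, baseline: set[str]) -> set[str]:
    """Return the symmetric difference between the baseline labels
    (captured when the integration was written) and the labels the
    service returns now. An empty set means no observable drift."""
    current = call_vision_service(image_id)
    return baseline ^ current

# Baseline snapshot recorded at integration time:
baseline = {"dog", "animal", "pet"}
print(detect_drift("dog.jpg", baseline))  # prints set() -> no drift
```

Run periodically (e.g. in CI) against a small corpus of reference images, a non-empty result signals that the service's behaviour has changed underneath the application, which is exactly the undocumented evolution the citing papers warn about.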
“…However, emerging evidence indicates that 'intelligent' services do not communicate changes explicitly [10]. Intelligent services evolve in unpredictable ways, provide no notification to developers, and their changes are undocumented [9]. To illustrate this, consider fig.…”
Section: Introduction
confidence: 99%