An interdisciplinary computer-based information tool can be developed to support the differing work practices and information needs of interdisciplinary team members, but explicit requirements must be sought from all prospective users of such a tool. Qualitative methods, such as the hybrid grounded theory and participatory design (GT-PD) approach used in this research, are particularly helpful for articulating design requirements for computer-based tools.
Social computing is a relatively new approach to systems design that emphasizes facilitating collaboration and communication between users. Although social networking is now part of mainstream culture, the use of these applications in health care is still in its infancy in Canada. As major vendors prepare to enter the marketplace, it is important for a wide variety of stakeholders to understand the ramifications of this next wave of technological innovation. This paper discusses social networking applications for health care and the challenges of dealing with this new type of information management system under current Canadian law. While regulatory authorities have considered the privacy and security implications of social networking in the course of investigating complaints, this paper presents the first explicit analysis of the legal difficulties surrounding the use of social networking for health care applications in Canada. Risks not covered by the current regulatory framework are assessed from the standpoint of privacy-by-design, and we discuss how software developers can build privacy protection into social networking applications.
Background: Tokenization is an important component of language processing, yet there is no widely accepted tokenization method for English texts, including biomedical texts. Apart from rule-based techniques, tokenization in the biomedical domain has been treated as a classification task: biomedical classifier-based tokenizers either split or join textual units through classification to form tokens. The idiosyncratic nature of each biomedical tokenizer's output complicates adoption and reuse, and biomedical tokenizers generally come with little guidance on how to apply them to a new domain (or subdomain). We identify and complete a novel tokenizer design pattern and suggest a systematic approach to tokenizer creation. We implement a tokenizer based on our design pattern that combines regular expressions and machine learning; our machine learning approach differs from the previous split-join classification approaches. We evaluate our approach against three other tokenizers on the task of tokenizing biomedical text.
Results: Medpost and our adapted Viterbi tokenizer performed best, with 92.9% and 92.4% accuracy, respectively.
Conclusions: Our evaluation supports our claim that the design pattern and guidelines are a viable approach to tokenizer construction, producing tokenizers that match leading custom-built tokenizers in a particular domain. It also demonstrates that ambiguous tokenizations can be disambiguated through POS tagging, and that POS tag sequences and training data have a significant impact on proper text tokenization.
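The split-join formulation lends itself to a compact illustration. The following Python sketch is not the paper's implementation: a regular expression over-segments the text into primitive units, and a hand-written join rule stands in for the trained classifier (and the POS-tag features) that would decide, for each adjacent pair of primitives, whether to merge them into one token.

```python
import re

# Step 1 of the split-join pattern: a regex over-segments the text
# into primitive units (letter runs, digit runs, single symbols).
PRIMITIVE = re.compile(r"[A-Za-z]+|\d+|\S")

def toy_join_rule(left, right):
    """Stand-in for a trained split/join classifier: merge hyphens
    and periods that sit flush inside alphanumeric material, so forms
    like 'IL-2', '2.5-fold', and 'E.' survive as single tokens."""
    return bool(re.fullmatch(r"\w+(?:[-.]\w+)*[-.]?|[-.]\w+", left + right))

def tokenize(text, join=toy_join_rule):
    """Step 2: walk the primitives left to right and JOIN each one onto
    the previous token when it is adjacent in the raw text and the
    classifier fires; otherwise SPLIT (start a new token)."""
    tokens, prev_end = [], None
    for m in PRIMITIVE.finditer(text):
        if tokens and m.start() == prev_end and join(tokens[-1], m.group()):
            tokens[-1] += m.group()   # JOIN decision
        else:
            tokens.append(m.group())  # SPLIT decision
        prev_end = m.end()
    return tokens

print(tokenize("IL-2 levels rose 2.5-fold in E. coli cultures."))
# ['IL-2', 'levels', 'rose', '2.5-fold', 'in', 'E.', 'coli', 'cultures.']
# The glued final period in 'cultures.' is exactly the kind of ambiguous
# tokenization the abstract says POS tag sequences can disambiguate.
```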