Abstract
The subject Brazil was analysed in the French thesis database DocThèses, covering the years 1969 to 1999. The Data Mining technique was used as a tool to obtain intelligence and knowledge. The software used to clean the DocThèses database was Infotrans, and Dataview was used for data preparation. The results of the analysis were illustrated by applying the assumptions of Zipf's Law, classifying the information as trivial, interesting, or noise according to its frequency distribution. The conclusion is that the Data Mining technique, combined with specialist software, is a powerful ally in bringing intelligence to the decision-making process at all levels, including the macro level, since it provides support for the consolidation, investment in, and development of actions and policies.
Keywords
Data Mining; Bibliometrics; Bibliometric analysis; French theses; Brazil; Knowledge discovery; Databases; Zipf's Law.
Intelligence obtained with the application of data mining to the French DocThèses database on subjects about Brazil
Abstract
The subject Brazil was analysed within the French database DocThèses, covering the years 1969 to 1999. The data mining technique was used to obtain intelligence and infer knowledge. The software used to clean the DocThèses database was Infotrans, and Dataview was used for data preparation. The results of the analysis were illustrated using the bibliometric assumptions of Zipf's Law, classifying the information as trivial, of interest, or "noise", according to its frequency distribution. The conclusion is that the data mining technique, associated with specialist software, is a powerful ally for competitive intelligence applied at all levels of the decision-making process, including the macro level. It can enhance the consolidation of, investment in, and development of actions and policies.
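The Zipf-based trisection of terms into trivial, interesting, and noise zones can be sketched as a simple frequency-rank split. This is a minimal illustration only: the function name, the 25%/50%/25% cut-off fractions, and the tokenised input are assumptions, not the thresholds or procedure actually used with Dataview in the study.

```python
from collections import Counter

def zipf_zones(tokens, high_cut=0.25, low_cut=0.25):
    """Rank terms by descending frequency and split the ranked list into
    three Zipf-inspired zones: 'trivial' (the most frequent terms),
    'interesting' (mid-frequency terms), and 'noise' (the rarest terms).

    The zone boundaries (high_cut / low_cut fractions of the ranked list)
    are illustrative assumptions, not values from the original study.
    """
    ranked = [term for term, _ in Counter(tokens).most_common()]
    n = len(ranked)
    hi = max(1, int(n * high_cut))   # size of the trivial zone
    lo = max(1, int(n * low_cut))    # size of the noise zone
    return {
        "trivial": ranked[:hi],
        "interesting": ranked[hi:n - lo],
        "noise": ranked[n - lo:],
    }

# Example with a toy token stream: the most frequent term lands in the
# trivial zone, the rarest in the noise zone.
tokens = ["brazil"] * 5 + ["thesis"] * 3 + ["paris"] * 2 + ["zipf"]
zones = zipf_zones(tokens)
```

In a real bibliometric workflow the input would be the indexing keywords extracted from each thesis record rather than raw tokens, and the cut-offs would be chosen from the observed frequency distribution.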
This study analyses the main characteristics of financial information disclosure over the Internet by means of the electronic language XBRL (eXtensible Business Reporting Language). Based on the literature on the subject, it covers XBRL's trajectory, state of the art, functionalities, main development sites, research groups, institutes involved, related events, and the taxonomy of its structure.
"Big Data is the oil of the new economy" has been one of the most quoted phrases of the last three years; it was even adopted by the World Economic Forum in 2011. In fact, Big Data is like crude oil: it is valuable, but unrefined it cannot be used. It must be broken down and analyzed for it to have value. But what about the Big Data generated by the petroleum industry, and particularly by its upstream segment? Upstream is no stranger to Big Data. Understanding and leveraging data in the upstream segment enables firms to remain competitive throughout planning, exploration, delineation, and field development. Oil & gas companies conduct advanced geophysical modeling and simulation to support operations, where 2D, 3D and 4D seismic surveys generate significant data during exploration phases. They also closely monitor the performance of their operational assets, using tens of thousands of data-collecting sensors in subsurface wells and surface facilities to provide continuous, real-time monitoring of assets and environmental conditions. Unfortunately, this information comes in varied and increasingly complex forms, making it a challenge to collect, interpret, and leverage the disparate data. As an example, Chevron's internal IT traffic alone exceeds 1.5 terabytes a day. Big Data technologies integrate common and disparate data sets to deliver the right information at the appropriate time to the correct decision-maker. These capabilities help firms act on large volumes of data, transforming decision-making from reactive to proactive and optimizing all phases of exploration, development and production. Furthermore, Big Data offers multiple opportunities to ensure safer, more responsible operations. Another invaluable effect would be shared learning. The aim of this paper is to explain how Big Data technologies can be used to optimize operations, and how they can help experts make decisions that lead to the desired outcomes.
Zipf's law was used to qualify all the keywords of the documents in a data set. This qualification was used to build a graphical representation of the resulting indicator for each document, dispatching the documents in a three-dimensional space. The graphical representation was then used as an information retrieval tool that requires no keyword. A case study is available on the internet: the graph is drawn in the Virtual Reality Modeling Language (VRML), giving a dynamic picture linked to a database management system (FreeWais). The experiment was designed to give a first impression of a document data set by querying it without any keyword.