
30 November 2018

Deep Racer

Two years ago, Alphabet researchers used artificial intelligence (AI) software to defeat a world champion at the board game Go. Amazon is now trying to democratize the AI technique behind that milestone with the Deep Racer, a fully autonomous race car driven by machine learning.

The idea is to help programmers get into machine learning by teaching reinforcement learning, a technique in which software learns through trial and error, guided by reward signals.
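To make the term concrete, here is a minimal tabular Q-learning sketch on a toy one-dimensional track. The environment, rewards, and hyperparameters are illustrative assumptions for this post, not the actual Deep Racer setup.

import random

# Toy 1-D track: states 0..4, goal at state 4. Everything here
# (states, rewards, hyperparameters) is an illustrative assumption.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # move left or right along the track
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply the action and return (next_state, reward)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else -0.01)   # reward only at the goal

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: explore at random, otherwise act greedily
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        nxt, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy drives right toward the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})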

This will also generate more business for Amazon, through the use of Amazon Web Services cloud computing; AWS owns and maintains the network-connected hardware required for these application services. Cloud computing offers a simple way to access servers, storage, databases, and application services over the internet.

The Deep Racer will be available on Amazon in the United States for US$249.

More: Here

21 July 2015

Teoria Econômica + Inteligência Artificial = Machina Economicus

Below is a very interesting article published in Science, one of the most prestigious scientific journals in the world (with an impact factor of 33.611 last year), followed by an interview with the authors of the research.

Science

Vol. 349, no. 6245, pp. 267-272
DOI: 10.1126/science.aaa8403  

 Economic reasoning and artificial intelligence

The field of artificial intelligence (AI) strives to build rational agents capable of perceiving the world around them and taking actions to advance specified goals. Put another way, AI researchers aim to construct a synthetic homo economicus, the mythical perfectly rational agent of neoclassical economics. We review progress toward creating this new species of machine, machina economicus, and discuss some challenges in designing AIs that can reason effectively in economic contexts. Supposing that AI succeeds in this quest, or at least comes close enough that it is useful to think about AIs in rationalistic terms, we ask how to design the rules of interaction in multi-agent systems that come to represent an economy of AIs. Theories of normative design from economics may prove more relevant for artificial agents than human agents, with AIs that better respect idealized assumptions of rationality than people, interacting through novel rules and incentive systems quite distinct from those tailored for people.

http://www.sciencemag.org/content/349/6245/267.full


Other articles on artificial intelligence can be found in the same journal here. The interview with the article's authors follows.


The unintended consequences of rationality


DAVID PARKES DISCUSSES HOW ARTIFICIAL INTELLIGENCE IS CHANGING ECONOMIC THEORY
July 16, 2015


A century of economic theory assumed that, given their available options, humans would always make rational decisions. Economists even had a name for this construct: homo economicus, the economic man.

Have you ever met a human? We’re not always the most rational bunch. More recent economic theory confronts that fact, taking into account the importance of psychology, societal influences and emotion in our decision-making.

So, are the theories that are predicated on homo economicus extinct? David C. Parkes, the George F. Colony Professor and Area Dean of Computer Science at Harvard John A. Paulson School of Engineering and Applied Sciences, doesn’t think so. Humans may not always make rational decisions, but well-conceived algorithms do.

In a paper out today in the journal Science, Parkes and co-author Michael Wellman, of the University of Michigan, argue that rational models of economics can be applied to artificial intelligence (AI) and discuss the future of machina economicus.

Continues here

03 July 2015

Machine learning as a management tool

Machine learning is based on algorithms that can learn from data without relying on rules-based programming. It came into its own as a scientific discipline in the late 1990s as steady advances in digitization and cheap computing power enabled data scientists to stop building finished models and instead train computers to do so. The unmanageable volume and complexity of the big data that the world is now swimming in have increased the potential of machine learning—and the need for it.
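As a hedged illustration of "learning from data without rules-based programming": instead of hand-coding if/else rules, we fit a model on labeled examples and let it induce the decision rules itself. The dataset and model choice below are assumptions for illustration, not anything from the article.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Instead of writing rules for each species by hand, we let the model
# learn decision rules from labeled examples.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))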

[...]

Dazzling as such feats are, machine learning is nothing like learning in the human sense (yet). But what it already does extraordinarily well—and will get better at—is relentlessly chewing through any amount of data and every combination of variables. Because machine learning’s emergence as a mainstream management tool is relatively recent, it often raises questions. In this article, we’ve posed some that we often hear and answered them in a way we hope will be useful for any executive. Now is the time to grapple with these issues, because the competitive significance of business models turbocharged by machine learning is poised to surge. Indeed, management author Ram Charan suggests that “any organization that is not a math house now or is unable to become one soon is already a legacy company.”

1. How are traditional industries using machine learning to gather fresh business insights?

Well, let’s start with sports. This past spring, contenders for the US National Basketball Association championship relied on the analytics of Second Spectrum, a California machine-learning start-up. By digitizing the past few seasons’ games, it has created predictive models that allow a coach to distinguish between, as CEO Rajiv Maheswaran puts it, “a bad shooter who takes good shots and a good shooter who takes bad shots”—and to adjust his decisions accordingly.

You can’t get more venerable or traditional than General Electric, the only member of the original Dow Jones Industrial Average still around after 119 years. GE already makes hundreds of millions of dollars by crunching the data it collects from deep-sea oil wells or jet engines to optimize performance, anticipate breakdowns, and streamline maintenance. But Colin Parris, who joined GE Software from IBM late last year as vice president of software research, believes that continued advances in data-processing power, sensors, and predictive algorithms will soon give his company the same sharpness of insight into the individual vagaries of a jet engine that Google has into the online behavior of a 24-year-old netizen from West Hollywood.

2. What about outside North America?

In Europe, more than a dozen banks have replaced older statistical-modeling approaches with machine-learning techniques and, in some cases, experienced 10 percent increases in sales of new products, 20 percent savings in capital expenditures, 20 percent increases in cash collections, and 20 percent declines in churn. The banks have achieved these gains by devising new recommendation engines for clients in retailing and in small and medium-sized companies. They have also built microtargeted models that more accurately forecast who will cancel service or default on their loans, and how best to intervene.

Closer to home, as a recent article in McKinsey Quarterly notes, our colleagues have been applying hard analytics to the soft stuff of talent management. Last fall, they tested the ability of three algorithms developed by external vendors and one built internally to forecast, solely by examining scanned résumés, which of more than 10,000 potential recruits the firm would have accepted. The predictions strongly correlated with the real-world results. Interestingly, the machines accepted a slightly higher percentage of female candidates, which holds promise for using analytics to unlock a more diverse range of profiles and counter hidden human bias.

As ever more of the analog world gets digitized, our ability to learn from data by developing and testing algorithms will only become more important for what are now seen as traditional businesses. Google chief economist Hal Varian calls this “computer kaizen.” For “just as mass production changed the way products were assembled and continuous improvement changed how manufacturing was done,” he says, “so continuous [and often automatic] experimentation will improve the way we optimize business processes in our organizations.”

3. What were the early foundations of machine learning?

Machine learning is based on a number of earlier building blocks, starting with classical statistics. Statistical inference does form an important foundation for the current implementations of artificial intelligence. But it’s important to recognize that classical statistical techniques were developed between the 18th and early 20th centuries for much smaller data sets than the ones we now have at our disposal. Machine learning is unconstrained by the preset assumptions of statistics. As a result, it can yield insights that human analysts do not see on their own and make predictions with ever-higher degrees of accuracy.
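One way to see "unconstrained by the preset assumptions of statistics" is to compare a linear model, whose functional form is fixed in advance, with a nonparametric learner on the same nonlinear data. This is a toy sketch; the data-generating process and model choices are assumptions for illustration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Toy nonlinear data: y = sin(x) + noise. The linear model's preset
# functional form is wrong for it; the forest learns the shape from data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X[:400], y[:400])
    print(type(model).__name__, "R^2:", round(model.score(X[400:], y[400:]), 3))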
More recently, in the 1930s and 1940s, the pioneers of computing (such as Alan Turing, who had a deep and abiding interest in artificial intelligence) began formulating and tinkering with the basic techniques, such as neural networks, that make today’s machine learning possible. But those techniques stayed in the laboratory longer than many technologies did and, for the most part, had to await the development and infrastructure of powerful computers, in the late 1970s and early 1980s. That’s probably the starting point for the machine-learning adoption curve. New technologies introduced into modern economies—the steam engine, electricity, the electric motor, and computers, for example—seem to take about 80 years to transition from the laboratory to what you might call cultural invisibility. The computer hasn’t faded from sight just yet, but it’s likely to by 2040. And it probably won’t take much longer for machine learning to recede into the background.

[...]

5. What’s the role of top management?

Behavioral change will be critical, and one of top management’s key roles will be to influence and encourage it. Traditional managers, for example, will have to get comfortable with their own variations on A/B testing, the technique digital companies use to see what will and will not appeal to online consumers. Frontline managers, armed with insights from increasingly powerful computers, must learn to make more decisions on their own, with top management setting the overall direction and zeroing in only when exceptions surface. Democratizing the use of analytics—providing the front line with the necessary skills and setting appropriate incentives to encourage data sharing—will require time.
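For managers new to A/B testing, the core computation is a comparison of conversion rates between two randomized groups. A minimal sketch follows; the counts are invented for illustration, and the two-proportion z-test via statsmodels is one common choice among several.

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: variant A converted 180 of 2,000 visitors,
# variant B converted 240 of 2,000. Are the rates plausibly different?
conversions = [180, 240]
visitors = [2000, 2000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real difference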

C-level officers should think about applied machine learning in three stages: machine learning 1.0, 2.0, and 3.0—or, as we prefer to say, description, prediction, and prescription. They probably don’t need to worry much about the description stage, which most companies have already been through. That was all about collecting data in databases (which had to be invented for the purpose), a development that gave managers new insights into the past. OLAP—online analytical processing—is now pretty routine and well established in most large organizations.

There’s a much more urgent need to embrace the prediction stage, which is happening right now. Today’s cutting-edge technology already allows businesses not only to look at their historical data but also to predict behavior or outcomes in the future—for example, by helping credit-risk officers at banks to assess which customers are most likely to default or by enabling telcos to anticipate which customers are especially prone to “churn” in the near term.
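The prediction stage the authors describe boils down to scoring each customer with a probability and acting on the ranking. Here is a minimal logistic-regression sketch on synthetic churn-like data; the features, labels, and split are all illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for customer records: columns could be usage,
# tenure, support calls, and so on; label 1 = churned. All invented.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

model = LogisticRegression().fit(X[:800], y[:800])
churn_prob = model.predict_proba(X[800:])[:, 1]   # P(churn) per customer

# Rank customers by risk so an intervention can target the top of the list.
riskiest = np.argsort(churn_prob)[::-1][:10]
print("highest-risk customers:", riskiest)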




Continues here

20 October 2014

Fraud detection in financial statements via machine learning

This study presents a method of assessing financial statement fraud risk. The proposed approach comprises a system of financial and non-financial risk factors, and a hybrid assessment method that combines machine learning methods with a rule-based system. Experiments are performed using data from Chinese companies with four classifiers (logistic regression, back-propagation neural network, C5.0 decision tree and support vector machine) and an ensemble of those classifiers. The proposed ensemble of classifiers outperforms each of the four classifiers individually in accuracy and composite error rate. The experimental results indicate that non-financial risk factors and a rule-based system help decrease the error rates. The proposed approach outperforms machine learning methods in assessing the risk of financial statement fraud.
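A rough sense of the ensemble idea can be given with scikit-learn's soft-voting combination of similar classifier families. This is a sketch, not the authors' implementation: DecisionTreeClassifier stands in for C5.0, MLPClassifier for the back-propagation network, and the data is synthetic rather than the Chinese company financials used in the paper.

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; the paper used company financial statements.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nn", MLPClassifier(max_iter=1000, random_state=0)),  # stand-in for BP network
        ("dt", DecisionTreeClassifier(random_state=0)),        # stand-in for C5.0
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across classifiers
)
ensemble.fit(X[:800], y[:800])
print("held-out accuracy:", ensemble.score(X[800:], y[800:]))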

Song, X. P., Hu, Z. H., Du, J. G., & Sheng, Z. H. (2014). Application of Machine Learning Methods to Risk Assessment of Financial Statement Fraud: Evidence from China. Journal of Forecasting. doi:10.1002/for.2294