How to earn a million dollars with your AI and benefit humanity along the way




You may not realize that you are already sitting on a million dollars and simply need to make a relatively modest effort to turn that hidden treasure into a pile of cash in your hands.

And at the same time benefit humanity.

Well, that second part, benefiting humanity, is a central requirement for getting the money, and it will probably be followed straightaway by fame and acclaim, if you like that kind of thing.

How can you get your hands on the dough?

There is an ongoing contest that promises a $1,000,000 prize to a person or entity that has innovatively managed to do extraordinary good with AI in a way that clearly benefits humanity.

It is legitimate and on the up-and-up.

The prized million dollars will be awarded at the next instance of the annual conference of the esteemed Association for the Advancement of Artificial Intelligence (AAAI).

As a longtime AI insider, having worked as an AI academic researcher and practitioner, I have been an active participant in AAAI for decades and can attest to the seriousness and dedication that AAAI has toward advancing AI (AAAI was originally founded in 1979).

To be clear, I have been a speaker at its conferences and symposia, in addition to serving on various committees over the years, and I firmly believe in this non-profit scientific society and its stated mission, namely, to "advance the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."

I recently mentioned to some of my colleagues, both within AI and outside the AI realm, that this million-dollar contest is underway; unfortunately, many had not heard about it or seen any news headlines or media coverage on the matter. That's a shame, as there are plenty of people slaving away on AI systems who could fit the contest criteria, yet are unaware of the beefy reward that might be due to them for their coffee-fueled, late-night humanitarian efforts.

Even if you're not sitting on an AI system that might qualify, the contest may still be of interest if you're wondering what types of AI systems are being built and deployed, especially for those of you focused on AI For Good, a growing locution for AI created to benefit the world in one way or another.

In case you're wondering whether there is a counterpart, namely AI For Bad, yes, unfortunately there is such a thing.

There are plenty of nasty types in the evildoer camp who are crafting AI systems to break into our everyday computers and steal your information or undermine your privacy. There are also criminal groups that want to use AI to bring down the power grid, foul up our traffic lights, or otherwise create chaos, start wars, and undermine society.

AI is without a doubt a two-sided coin.

Hopefully, AI For Good is able to overcome and outmatch AI For Bad.

In that sense, it is heartening that some choose to be on the side of good and are putting together AI systems that will better our lives and improve our living conditions.

Aside from being heartening, it would also be nice to add a bit of sweetener for those who pursue that line of work, and perhaps the million-dollar competition will provide that icing on the cake.

For AI-related startups, the million dollars could be more than icing; it could be the cake itself, meaning that with the prize money they could afford to carry on with their AI For Good endeavor, whatever it is, and use the funds to further their compassionate aims.

Those wishing to apply must do so by May 24, 2020, by completing an online nomination form (see this link here), and the award will be presented at the AAAI conference scheduled for February 2021.

Here are some noteworthy housekeeping details:

· The objective of this AI-for-the-benefit-of-humanity contest is to recognize "positive impacts of artificial intelligence to protect, enhance, and improve human life in meaningful ways with long-lived effects."

· This is the first time the award has been offered.

· The award is administered by AAAI, with support from the European Association for Artificial Intelligence (EurAI) and the China Association for Artificial Intelligence (CAAI), and with financial backing provided by Squirrel AI.

· Applicants may be individuals, groups, or organizations, provided the applicants were the primary contributors to the work described in the application form.

· There are various conflict-of-interest rules that must be observed and that may limit or preclude some applicants.

· If you submit this year and do not win, you can resubmit the following year and continue to do so annually, but only for up to three consecutive years.

· And so on (be sure to read the instructions carefully).

In general, you may be wondering what constitutes an AI system that benefits humanity.

There is a lot of latitude within that general notion.

According to the award instructions, here is the official description of the intent:

· Implementation of artificial intelligence techniques that improve the way critical resources or infrastructure are managed.

· Artificial intelligence applications to support disadvantaged or marginalized populations.

· Learning tools that significantly improve access to and the quality of education.

· Intelligent systems that improve the quality of life of their users.

Consider an example of an AI system that could fit under that rubric.

One firm has decided to submit its AI-based self-driving car effort as an instance of using AI to support those in disadvantaged or marginalized populations (the second item on the list above).

How could an autonomous car relate to a humanitarian purpose?

The goal of their AI-powered autonomous car is to give those who are mobility-impaired today access to mobility, through the advent of properly designed autonomous cars, a matter of growing awareness and importance (see my coverage at this link here of the annual Princeton Summit on such driverless car designs and uses).

Overall, some assert that if we can produce safe and reliable AI-based autonomous cars, the impact on society will be transformative and we will attain mobility for all.

By the way, yes, I realize I have let the cat out of the bag regarding their submission, which may seem a bit untoward on my part, but I asked them beforehand whether it was okay to mention their intention, and they said they welcomed my doing so (without naming them per se) and that perhaps the mere generic mention of their AI For Good effort could inspire others accordingly.

There is an awards committee that will ultimately decide on the winner of the competition.

One does not envy them the difficulty of their task, as there are likely to be many worthy submissions, each with its own heartfelt and good-faith use of AI For Good. That said, the upside is the opportunity to discover the variety and vibrancy of humanity-benefiting AI being worked on around the world, and to be dazzled and heartened to know that so many efforts are underway.

The official awards committee, as indicated and described in the competition announcement, consists of (listed in alphabetical order by last name):

· Yoshua Bengio is a professor in the Department of Computer Science and Operations Research at the University of Montreal and holds the Canada Research Chair in Statistical Learning Algorithms.

· Tara Chklovski is CEO and founder of the global technology education nonprofit Technovation (formerly Iridescent).

· Edward A. Feigenbaum is the Kumagai Professor of Computer Science, Emeritus, at Stanford University.

· Yolanda Gil (chair of the awards committee) is Director of Knowledge Technologies at the Information Sciences Institute of the University of Southern California and Research Professor in Computer Science and Spatial Sciences.

· Xue Lan is a Cheung Kong Chair Distinguished Professor and Dean of Schwarzman College at Tsinghua University, and Dean Emeritus of Tsinghua's School of Public Policy and Management.

· Robin Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University and directs the Center for Robot-Assisted Search and Rescue.

· Barry O'Sullivan holds the Chair in Constraint Programming at University College Cork in Ireland.

Coming up with AI-for-humanity ideas

Let's shift gears and move on to another topic, albeit a related one that squarely underlies the overall theme of AI For Good.

If you are an AI developer, or perhaps an investor in AI systems, you might be thinking about aiming for an AI project that would be considered an AI system for the benefit of humanity, and yet have no immediate idea of what such an effort could focus on.

Sometimes one of the hardest parts of pursuing an AI system is identifying what the AI will try to accomplish.

This may seem surprising to those not steeped in AI, but be aware that AI specialists are often akin to the classic line about having a hammer and wanting to use it on everything in sight. In other words, you might know how to craft an AI system, yet not be especially sure where or on what to focus, while standing ready to apply AI to something that hopefully has merit and makes sense.

In advising those versed in AI, I like to suggest that anyone earnestly bent on AI For Good consider the nature of the world's pressing problems. It stands to reason that trying to solve a global problem through AI is beneficial.

Of course, a single AI system is not going to miraculously and suddenly "solve" an entire planetary difficulty. Let's not kid ourselves and overinflate what could be done via AI. However, it would be helpful to start chipping away at the edges of global problems, with the hope that AI becomes a means of gradually and inexorably reducing or mitigating them.

We can hope so.

A useful source on worldwide risks is an annual survey conducted by the World Economic Forum (WEF).

Here is an abbreviated list from the WEF Global Risks Report 2020:

· Economic

o Asset bubbles

o Deflation

o Failure of major financial mechanisms

o Critical infrastructure failure

o Fiscal crises

o High structural unemployment

o Illicit trade

o Severe energy price shock

o Unmanageable inflation

· Environmental

o Extreme weather

o Failure of climate-change mitigation

o Major biodiversity loss

o Major natural disasters

o Human-made environmental damage

· Geopolitical

o Failure of national governance

o Failure of global governance

o Interstate conflict

o Large-scale terrorist attacks

o State collapse

o Weapons of mass destruction

· Societal

o Failure of urban planning

o Food crises

o Large-scale involuntary migration

o Profound social instability

o Rapid spread of infectious diseases

o Water crises

· Technological

o Adverse consequences of technological advances

o Breakdown of critical information infrastructure

o Large-scale cyberattacks

o Massive incident of data fraud or theft

As you can see, the list is pretty daunting.

According to the WEF, each of these items represents an uncertain event or condition that, if it occurs, could cause a significant and severe negative impact, both within and across countries, sometime in the next 10 years.

You may have noticed that one of the items listed is the rapid spread of infectious diseases, notably appearing on a list that was published prior to the current pandemic.

Here is how to use the list.

Ask yourself these questions:

· Is there an item on the list that resonates as a particular focus or interest to you?

· Could AI be devised to reduce the chances of that element occurring?

· Could AI be devised to mitigate impacts if the element arises?

· What would the AI do, and is it feasible for the AI to perform such tasks?

· How much effort would it take to create AI to do it?

· If such an AI existed, who would want to use it and how would they do it?

· Could AI be combined with other AI systems that address the same element?

· Could the AI be interleaved with AI that addresses similar items on the list?

· Are there barriers to designing and deploying such AI?

· Is the planned AI reasonably feasible or an impossible dream?

Those are a lot of hard-hitting questions, but it makes sense to give them due consideration.

There is no point in embarking on a path that will be a dead end, or that could usurp your attention from some other AI project that might have a better chance of coming to fruition.

AI aimed at AI

Next, let’s take a macro view of the matter.

There is AI that you could create for a particular purpose, such as mitigating the global risks mentioned above.

There is also AI that can help AI that is looking to help the world.

Say what?

Well, if you take a bird's-eye view, an interesting angle involves trying to make sure that AI is devised and deployed in an AI For Good way, and not in an AI For Bad fashion.

Thus, you could use AI for that overarching purpose, aiming to corral AI that might otherwise be steered off-road and into the never-never land of malevolent activity.

As readers know, I've been covering AI and societal ethics for quite a while (see the links here), and in addition to asking humans to be mindful of the AI they build, there is additional reinforcement available through using AI as a guide for those who create AI and for the resulting AI systems (this seems almost recursive, for those of you who enjoy software development and programming).

Some have been calling for a kind of international AI treaty that would govern the direction and future of AI and its implementations.

One such discussion, by Oren Etzioni, CEO of the Allen Institute for AI (AI2) and professor at the University of Washington, along with Nicole DeCario, his senior assistant at the famed AI2, offered these relatively common principles that tend to come into play on this matter:

  • Uphold human rights and values
  • Promote collaboration
  • Ensure fairness
  • Provide transparency
  • Establish accountability
  • Limit harmful uses of AI
  • Ensure safety
  • Acknowledge legal and policy implications
  • Reflect diversity and inclusion
  • Respect privacy
  • Avoid concentrations of power
  • Consider implications for employment

Regardless of what such a list contains, the point here is that there is an opportunity for those versed in AI to try to use AI to help shape the future of AI and its societal implications (including the AI-related ethics considerations of the DoD that I examine at this link here).

If you are an AI developer or investor who feels you don't know much about those global risk areas and aren't sure how you could help mitigate climate risks or global financial instability, you may nonetheless know enough about AI to look within the AI field itself.

In short, could you devise AI that will help other AI stay within the protective guardrails of yet-to-be-settled AI principles?

This long-range notion can be characterized via the upstream parable (see my analysis at this link here), meaning that rather than waiting until the horse is already out of the barn, you can do far more good by keeping the horse in the barn, or, once the horse leaves, by guiding where it goes; otherwise you face a big problem because of the initial steps that should have been taken at the outset.

AI, as they say, could be used to heal itself.

Or at least prop it up when it veers toward the AI For Bad camp.

Conclusion

It can be difficult to be altruistic and try to devise AI that is for the benefit of humanity. Sure, there is pride in it, and it offers a means of making the world a better place.

Meanwhile, you still need food on your plate and some kind of livelihood in order to devote your energy and effort toward that altruistic AI target.

Why not earn a million dollars?

And, in terms of submitting your own nomination, it’s like buying a lottery ticket in the sense that if you don’t play, you have no chance of winning.

Best of luck, and I'll be reporting on the winner, perhaps contributing to the fame and acclaim that rightfully accrues to those seeking to do AI for the benefit of humanity.

You are all a very precious lot.

