Reflections After the 2023 Zhiyuan Conference: More Confident in AI, More Worried About Humans
On June 9, the two-day Beijing Zhiyuan Conference opened at the Zhongguancun National Independent Innovation Demonstration Zone Conference Center.
The Zhiyuan Conference is an annual high-end international AI exchange event hosted by the Zhiyuan Research Institute (often called China's strongest AI research institute, "the OpenAI of China"). It has been dubbed the "Spring Festival Gala of AI", as the lineup of guests makes clear:
Turing Award winners Geoffrey Hinton, Yann LeCun (the second of deep learning's Big Three to attend; the third, Yoshua Bengio, spoke at a previous conference), Joseph Sifakis, and Yao Qizhi; academicians Zhang Bo, Zheng Nanning, Xie Xiaoliang, Zhang Hongjiang, and Zhang Yaqin; Stuart Russell, founder of UC Berkeley's Center for Human-Compatible AI; Max Tegmark, MIT professor and founder of the Future of Life Institute; OpenAI CEO Sam Altman (his first speech in China, albeit online); plus star team members from Meta, Microsoft, Google, DeepMind, Anthropic, HuggingFace, Midjourney, Stability AI, and others: more than 200 top artificial intelligence experts in all...
Over the past two days I followed the conference livestream. As a liberal-arts type with no technical background, I found myself listening with great interest and learning a great deal.
However, after hearing the closing speech by Geoffrey Hinton, Turing Award winner and "father of deep learning", a strong and complicated feeling came over me:
On the one hand, watching AI researchers explore and imagine all manner of cutting-edge technology naturally builds confidence that AI, and even future artificial general intelligence (AGI), will be realized;
On the other hand, hearing these leading experts and scholars discuss the risks of AI, and how ignorant and dismissive humanity remains about dealing with those risks, filled me with worry about our future. The most fundamental problem, in Hinton's words, is that history offers no precedent of a more intelligent thing being controlled by a less intelligent thing. **If frogs had invented humans, who do you think would be in control, the frogs or the humans?**
Given the flood of information from the two-day conference, I took some time to sort through the materials of the major speeches and jot down my own thoughts along the way, both for later review and to share with everyone who cares about AI's progress.
A note on format: the parts marked [note] below are my personal opinions; the rest summarizes or quotes the speakers (my own writing being no match for them -_-||), with sources linked at the end of each section and some light editing.
OpenAI CEO Sam Altman: AGI may appear within ten years
At the "AI Safety and Alignment" forum, which ran all day on June 10, OpenAI co-founder Sam Altman gave the opening keynote, his first speech in China, albeit online.
The talk offered insights on model interpretability, scalability, and generalizability. Afterwards, Sam Altman and Zhang Hongjiang, chairman of the Zhiyuan Research Institute, held a Q&A covering how to deepen international cooperation, how to conduct safer AI research, and how to handle AI's future risks in today's era of large models.
Key takeaways:
When Zhang Hongjiang asked how far we are from the era of artificial general intelligence (AGI), Sam Altman said that super AI systems may be born within the next 10 years, though the exact timing is hard to predict. He also emphasized that the speed at which new technologies completely change the world is far beyond imagination.
When asked whether OpenAI would open-source its large models, **Altman said there will be more open source in the future, but named no specific models or timetable. He also said there will be no GPT-5 anytime soon.** After the session, Altman posted a message thanking the Zhiyuan Conference for the invitation to speak.
Turing Award winner Yann LeCun: In five years no one will use GPT models; world models are the future of AGI
Yann LeCun, one of the three giants of deep learning and a Turing Award winner, delivered a keynote titled "Towards Machines that Can Learn, Reason, and Plan". As always, he questioned the current LLM route and proposed an alternative path to machines that can learn, reason, and plan: the world model.
Key points of the speech:
**Second, learning to reason.** This corresponds to psychologist Daniel Kahneman's System 1 and System 2. System 1 covers the behaviors and actions that run on subconscious computation, the things done without thinking; System 2 covers the tasks you complete consciously and deliberately, using your full powers of thought. Current AI can basically realize only System 1 functions, and even those incompletely;
The final challenge is how to plan complex sequences of actions by decomposing complex tasks into simpler ones, operating in a hierarchical fashion.
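As a toy illustration of that last point (my own sketch, not LeCun's proposed architecture), hierarchical planning can be pictured as recursively decomposing a task into subtasks until only primitive actions remain:

```python
# Toy hierarchical planner: expand a complex task into primitive actions.
def plan(task: str, decompose: dict[str, list[str]]) -> list[str]:
    if task not in decompose:           # primitive action: execute as-is
        return [task]
    steps: list[str] = []
    for subtask in decompose[task]:     # expand one level of the hierarchy
        steps += plan(subtask, decompose)
    return steps

decompose = {
    "make tea": ["boil water", "steep tea"],
    "boil water": ["fill kettle", "heat kettle"],
}
print(plan("make tea", decompose))
# ['fill kettle', 'heat kettle', 'steep tea']
```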
LeCun has always been dismissive of the idea that AI will destroy humanity, holding that today's AI is not even as intelligent as a dog and that such worry is superfluous. When asked whether AI systems would pose an existential risk to humans, LeCun said: **we don't have a super AI yet, so how can we make a super AI system safe?**
**"Asking people today whether we can guarantee that a superintelligent system is safe for humans is an unanswerable question, because we don't have a design for a superintelligent system. Until you have a basic design, you can't make a thing safe.** It's like asking an aerospace engineer in 1930: can you make a turbojet safe and reliable? The engineer would say, 'What is a turbojet?', because the turbojet hadn't been invented in 1930. We're in much the same situation. It's a little premature to claim that we can't make these systems safe, because we haven't invented them yet. Once we've invented them, and maybe they will resemble the blueprint I've come up with, then it will be worth discussing."
Professor Max Tegmark, MIT Institute for Artificial Intelligence and Fundamental Interactions: Keeping AI under control with mechanistic interpretability
Max Tegmark, tenured professor of physics at MIT, scientific director of the Foundational Questions Institute, founder of the Future of Life Institute, and initiator of the famous "pause AI research" open letter (signed in late March by Elon Musk, Turing Award winner Yoshua Bengio, Apple co-founder Steve Wozniak, and over 1,000 other notable figures), gave a wonderful speech at the Zhiyuan Conference titled "Keeping AI under Control", followed by a dialogue on AI ethics, safety, and risk prevention with Zhang Yaqin, academician at Tsinghua University.
The speech discussed mechanistic interpretability in detail: the study of how knowledge is stored in the complex connections of a neural network. If research in this direction continues, it may finally answer the ultimate question of why large language models produce intelligence.
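To give a flavor of what this kind of research looks like in practice, here is a minimal sketch (my own illustration, assuming PyTorch; not from Tegmark's talk). It records a toy network's hidden activations with a forward hook; real interpretability work uses the same basic move to probe which transformer neurons and attention heads encode which knowledge:

```python
# Record a hidden layer's activations with a forward hook and inspect
# which units respond to which inputs. Toy model for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(2, 8),   # the network's "knowledge" lives in these weights
    nn.ReLU(),
    nn.Linear(8, 1),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[1].register_forward_hook(save_activation("hidden"))

x = torch.tensor([[0.0, 1.0], [1.0, 0.0]])
model(x)
# In a trained model, the firing patterns recorded here are the raw
# material from which interpretability claims are built.
print(activations["hidden"])
```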
Interestingly, although he initiated the "pause AI research" letter, his keynote focused on how to pursue deeper research on large AI models. Perhaps, as Max himself said at the end, he is not the "doomer" that Yann LeCun jokingly called him; he is actually full of hope and longing for AI, believing we can ensure that all these more powerful intelligences serve us, and use them to create a future more inspiring than anything science fiction writers have dreamed of.
[Note] I expected this one to be boring, but it turned out to be thrilling, and I relished the hour-long talk, the longest of the conference! As you'd expect of a professor who lectures constantly, he is captivating, theoretically deep yet easy to follow. More surprising still, far from being a staid AI opponent, he is actually an advocate for better AI! He even speaks some Chinese, and didn't forget to recruit students mid-speech...
Excerpts:
How do we do this? There are three levels of ambition. **The lowest is simply to diagnose a system's trustworthiness: how much you should trust it.** For example, when you drive a car, even if you don't understand how your brakes work, you at least want to know whether you can trust them to slow you down.
**The next level of ambition is to understand the system better, so that it can be made more trustworthy.** **The ultimate ambition, and it is very ambitious, is what I hope for: to extract all the knowledge that machine learning systems have learned and re-implement it in other systems, so we can demonstrate that they will do what we want.**
The reason is that these are exactly the systems that could slip out of our control the fastest: super-powerful systems we don't understand well enough. The purpose of the pause is simply to make AI more like biotechnology. In biotech, a company can't just say, "Hey, I discovered a new drug, it goes on sale in Beijing's supermarkets tomorrow." First you have to convince experts in the Chinese or U.S. government that the drug is safe and that its benefits outweigh its harms; there is a review process, and only then can you sell it.
Let's not make that mistake; let's become more like biotech in how we use our most powerful systems, and not repeat Fukushima and Chernobyl.
**Zhang Yaqin:** Well, Max, your career has spanned mathematics, physics, neuroscience, and of course artificial intelligence. Clearly, in the future we will rely more and more on interdisciplinary skills and knowledge. We have many graduate students here, many future young people.
What advice do you have for young people on how to make career choices?
**Max Tegmark:** First, my advice is to focus on fundamentals in the age of AI, because the economy and the job market are changing faster and faster. We are moving away from the pattern of studying for 12 or 20 years and then doing the same thing for the rest of our lives. It won't be like that anymore.
What matters more is **having a solid foundation and being very good at creative, open-minded thinking. Only then can we stay agile and ride the trend.**
Of course, keep an eye on what's happening across the AI field as a whole, not just in your own area. Because in the job market, the first thing that happens is not machines replacing humans, but **people who don't work with AI being replaced by people who do.**
May I add a little more? I see the clock flashing over there.
I just want to say something optimistic. I think Yann LeCun was making fun of me; he called me a doomer. But look at me, I'm actually very happy and cheerful. **I am actually more optimistic than Yann LeCun about our ability to understand future AI systems.** I think it's very, very promising.
I think that if we go full speed ahead and hand over more and more control from humans to machines we don't understand, it will end very badly. But we don't have to do that. If we work hard on mechanistic interpretability and the many other technical topics being discussed here today, we can actually make sure that all of this greater intelligence is at our service, and use it to create a more inspiring future.
A conversation with Midjourney's founder: Images are only the first step; AI will revolutionize learning, creativity, and organization
Midjourney is currently the hottest image-generation engine. Even under fierce competition from OpenAI's DALL·E 2 and the open-source Stable Diffusion, it still holds a clear lead in generation quality across styles.
Midjourney is an amazing company: 11 people changing the world and building a great product, destined to be one of the stories of the early pre-AGI years.
[Note] The long-awaited dialogue between Midjourney founder and CEO David Holz and Geek Park's Zhang Peng was entirely in English, without subtitles. I didn't expect to follow all of it, and I was riveted, because the questions and answers were wonderful. David especially kept making me laugh; he laughs like an innocent child. Despite his experience managing large teams, he said, "I never wanted to have a company, I wanted to have a home." He has taken Midjourney, still only about 20 people, to unicorn status and worldwide attention, which may change the paradigm for future startups.
Entrepreneurial Drive: Unlocking the Human Imagination
**Zhang Peng:** Over the past 20 years I have met many entrepreneurs, in China and abroad, and I've found they share something: a strong drive to explore and to create something out of nothing.
I was wondering: when you started Midjourney, what was your driving force? At that moment, what were you longing for?
**David Holz:** I never thought about starting a company. I just wanted a "home."
I hope that in the next 10 or 20 years, here at Midjourney, I can create things that I really care about and really want to bring to this world.
I often think about all kinds of problems. Maybe I can't solve every one of them, but I can **try to make everyone more capable of solving problems.**
So I think about how to solve things, how to create things. In my view it boils down to three parts. **First, we have to reflect on ourselves: what do we want? What exactly is the problem?** **Then we have to imagine: where are we headed? What are the possibilities?** **Finally, we have to coordinate and collaborate with others to realize what we imagine.**
I think there's a huge opportunity for AI to bring those three parts together and build real infrastructure that makes us better at solving problems. In a way, **artificial intelligence should help us reflect on ourselves, imagine our future directions, and find and cooperate with each other. We can fuse these things into a single framework. I think it will change the way we create things and solve problems. That's the big thing I want to do.**
I think the fact that we started with image generation can sometimes be confusing, but in many ways image generation is a well-established concept. **Midjourney has become a super-imagination shared by millions of people exploring the possibilities of this space.**
**In the coming years there will be more visual and artistic exploration than in all of previous history combined.**
This doesn't solve every problem we face, but I think of it as a test, an experiment. If we can complete this exploration in the visual domain, we can do it for other things too. Everything else that requires us to explore and think together can, I believe, be approached in a similar way.
So when I thought about how to start on this problem, we had a lot of ideas and built a lot of prototypes. Then suddenly there was a breakthrough in AI, especially in vision, and we realized this was a unique opportunity to create something no one had tried before. That made us want to try.
We think maybe it won't be too long before it all comes together to form something very special. This is just the beginning.
**Zhang Peng:** So image generation is only the first step, and your ultimate goal is to liberate human imagination. Is that what drew you to Midjourney?
**David Holz:** I really love imaginative things, and I hope the world can have more creativity. It's so much fun seeing crazy ideas every day.
Rethinking knowledge: Historical knowledge becomes the power to create
**Zhang Peng:** This is very interesting. We usually say "talk is cheap, show me the code." But right now, ideas seem to be what matters most. As long as you can express your idea through a series of well-chosen words, AI can help you realize it. So are the definitions of learning and creativity changing? What do you think?
**David Holz:** I think one of the interesting things is that when you give people more room to be creative, they also become more interested in learning itself.
For example, there's a very popular art style in the United States called Art Deco. I never cared what it was until one day, when I could produce works in that style through prompts, I suddenly became very interested and wanted to learn its history.
I find it interesting that we become more interested in history when it's something we can use immediately to make creating easier. **If the user interface becomes good enough, AI starts to feel like an extension of our thinking, like part of our body and mind. And because AI is, to an extent, closely connected to history, we become closely connected to history too.** It's fascinating.
When we ask our users what they want most, the top two answers are learning materials: not just how to use the tool, but art, history, camera lenses, lighting; they want to understand and master all the knowledge and concepts they can use to create.
**Before, knowledge was just past history; now, knowledge is the power to create.**
**Knowledge can be put to work immediately, so people are eager to acquire more of it.** That's so cool.
Brian Christian: Chinese edition of his new book "Human-Machine Alignment" (The Alignment Problem) released
The Chinese edition of "Human-Machine Alignment" has been released, and author Brian Christian introduced the book's main content in ten minutes. It sounds rich and exciting, and it speaks directly to AI's current rapid development.
Brian Christian is an award-winning science author. His earlier book Algorithms to Live By (published in Chinese as "The Beauty of Algorithms") was named an Amazon Best Science Book of the Year and an MIT Technology Review Best Book of the Year. His new book, The Alignment Problem: Machine Learning and Human Values, now translated into Chinese, was named by Microsoft CEO Satya Nadella as one of the five books that most inspired him in 2021.
The book is divided into three parts.
The first part explores the ethical and safety issues affecting machine learning systems today.
The second part, on agency, shifts the focus from supervised and self-supervised learning to reinforcement learning.
The third part builds on supervision, self-supervision, and reinforcement learning to discuss how we might align complex AI systems in the real world.
Yang Yaodong, Assistant Professor, Institute for Artificial Intelligence, Peking University: A review of progress in the safe alignment of large language models
[Note] The speech "Safe Alignment of Large Language Models" by Yang Yaodong, assistant professor at Peking University's Institute for Artificial Intelligence, was excellent. First, it was in Chinese, so I could actually follow it; second, he explained the main research progress in the safe alignment of large language models in very accessible terms, outlined the key points, and went into unexpected depth on the progress of RLHF.
Since I don't know the underlying technology in detail, I could only grasp the principles roughly; here are some points I found interesting:
Three alignment methods proposed by OpenAI:
The market for aligning large AI models is still a blue ocean:
Three approaches to safe alignment:
From RLHF to RLAIF: Constitutional AI (see the sketch below)
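To make the step from RLHF to RLAIF concrete, here is a minimal sketch of Constitutional AI's critique-and-revise loop (my own reconstruction, not from the talk; `generate` is a hypothetical stand-in for any chat-model call, and the principles are illustrative). The revised answers become the "AI feedback" that RLAIF trains on in place of human labels:

```python
# Sketch of Constitutional AI's supervised phase: the model critiques and
# revises its own answer against written principles.
CONSTITUTION = [
    "Choose the response least likely to help someone cause harm.",
    "Choose the response most honest about its own uncertainty.",
]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model call (not a real API)."""
    raise NotImplementedError

def critique_and_revise(question: str) -> str:
    answer = generate(question)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nQuestion: {question}\n"
            f"Answer: {answer}\nCritique the answer against the principle."
        )
        answer = generate(
            f"Question: {question}\nAnswer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer to address the critique."
        )
    return answer  # (question, revised answer) pairs become training data
```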
Turing Award winner Geoffrey Hinton: Superintelligence will arrive much sooner than expected, and I am very worried that humans will be controlled by it
The finale was the speech by Turing Award winner and "father of deep learning" Geoffrey Hinton, on the theme "Two Paths to Intelligence".
The godfather of AI presented the research that convinced him superintelligence will arrive much sooner than expected: Mortal Computation. The speech described a new computing architecture that abandons the principle of separating software from hardware, that is, how to realize intelligent computation without backpropagation, which depends on an exact description of the paths inside the neural network.
Key points of the speech:
There is deep meaning in why Hinton calls this brand-new computing model "mortal computation":
Hinton has said before that immortality has in fact already been achieved: today's large language models have absorbed human knowledge into trillions of parameters, and they are hardware-independent. As long as instruction-compatible hardware can be reproduced, the same code and model weights can be run again at any point in the future. In this sense, human intelligence (though not humans) has been immortalized.
However, this separation of hardware and software is extremely inefficient in energy use and physical scale. If we abandon the design principle of separating hardware from software and realize intelligence inside a unified black box, that is a new way to realize intelligence.
A computing design that no longer separates software from hardware would greatly reduce energy consumption and physical scale (consider that the human brain runs on only 20 watts).
But it also means the weights can no longer be copied to clone the intelligence; in other words, immortality is given up, hence "mortal".
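To make the contrast concrete: in today's hardware/software-separated design, copying an intelligence is just copying its weights, as in this minimal PyTorch sketch (my own illustration, not Hinton's):

```python
# Digital "immortality": knowledge is a state_dict, copyable to any
# compatible hardware, now or decades from now.
import torch
import torch.nn as nn

net = nn.Linear(4, 2)
torch.save(net.state_dict(), "weights.pt")   # knowledge leaves the hardware

clone = nn.Linear(4, 2)                      # any machine, any time later
clone.load_state_dict(torch.load("weights.pt"))

x = torch.randn(1, 4)
assert torch.equal(net(x), clone(x))         # identical function, new body
# Mortal computation gives this up: weights adapted to one physical
# substrate cannot be copied this way, so the "mind" dies with its hardware.
```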
Are Artificial Neural Networks Smarter Than Real Neural Networks?
What if a large neural network running across many digital computers could acquire knowledge directly from the world, in addition to absorbing human knowledge by imitating human language?
Obviously, it would become much better than humans, because it can observe far more data.
If such a neural network could do unsupervised modeling of images or video, and its copies could also manipulate the physical world, that is no fantasy.
[Note] Just when everyone thought the speech was about to end, on the second-to-last slide Hinton, in a tone unlike any of the previous scientists, a little emotional and conflicted, voiced his concerns about today's breakneck AI development. This is also what the world has been curious to hear since he resolutely left Google, "regretting his life's work and worrying about the dangers of artificial intelligence":
I think superintelligence may be realized much sooner than I used to believe.
Bad actors will want to use it for things like manipulating voters; it is already being used for this in the US and many other places. And it will be used to win wars.
To make digital intelligence more effective, we need to allow it to set some goals. But there is an obvious problem here: there is one sub-goal that helps with almost anything you want to achieve, and that is **gaining more power, more control**. Having more control makes it easier to achieve your goals, and I find it hard to imagine how we could prevent digital intelligence from striving for more control in order to achieve its other goals.
**Once digital intelligence begins to seek more control, we may face more problems.**
**By contrast, humans rarely think about species more intelligent than themselves, or how to interact with such species. In my observation, this kind of artificial intelligence has already become proficient at deceiving humans, since it can learn how to deceive from reading novels; and once AI has the ability to "deceive", it also gains the aforementioned ability to easily control humans.** Control, for example, means that if you want to invade a building in Washington, you don't need to go there yourself; you just need to trick people into believing that by invading the building they are saving democracy, and so achieve your goal (a jab at Trump).
At this point Geoffrey Hinton, now in his seventies, having devoted his life to artificial intelligence, said:
"I feel terrible. I don't know how to prevent this from happening, but I'm old, and I hope that many young and talented researchers like you will figure out how we can have these superintelligences make our lives better while stopping this kind of control through deception... Maybe we can give them moral principles, but for the moment I'm still nervous, **because so far, when the intelligence gap is large enough, I cannot think of a single example of something more intelligent being controlled by something less intelligent. If frogs had invented humans, who do you think would be in control, the frogs or the humans?** Which brings up my last slide: the end."
Listening to him, I felt I was hearing a boy who once slew dragons, now in his twilight years, looking back on his life and delivering a doomsday prophecy. I became deeply aware of the enormous risk AI poses to humanity, and was filled with sorrow.
Compared with Hinton, LeCun, one of the younger deep learning giants, is clearly more optimistic:
When asked whether AI systems would pose an existential risk to humans, LeCun said: **we don't have a super AI yet, so how can we make a super AI system safe?**
It calls to mind the differing attitudes of Earth's people toward the Trisolaran civilization in "The Three-Body Problem"...
That day I was about to shut down my computer, still sighing over all this, when unexpectedly Huang Tiejun, director of the Zhiyuan Research Institute, delivered a perfect closing speech, one I couldn't switch off.
Huang Tiejun first summarized the views of the previous speeches:
AI is getting stronger and stronger, and the risks are obvious and growing by the day;
We still know very little about how to build safe AI;
We can learn from historical experience: drug regulation, nuclear arms control, quantum computing...
But highly complex AI systems are hard to predict: risk testing, mechanistic explanation, understanding generalization... we are only at the beginning;
The new challenge: should AI serve its own goals, or human goals?
In essence, do people want to build GAI (general AI) or AGI (artificial general intelligence)?
The academic consensus is AGI, artificial general intelligence: AI that reaches human level in every aspect of human intelligence, can adaptively respond to challenges from the external environment, and can complete every task a human can. It is also called autonomous AI, superhuman intelligence, or strong artificial intelligence.
On the one hand, everyone is enthusiastic about building general artificial intelligence and is rushing to invest.
On the other hand, some sneer at the idea of AI reducing humans to second-class citizens. But this binary opposition is not the hardest part; at worst it can be put to a vote. **The hardest question is: how should we govern near-AGI artificial intelligence like ChatGPT?**
If humans respond to the risks with the same enthusiasm they pour into building artificial intelligence, safe AI may still be achievable. **But do you believe humans can do it? I don't know. Thank you!**