![Berkeley EECS](/img/default-banner.jpg)
Berkeley EECS
United States
Joined Feb 23, 2015
The Department of Electrical Engineering & Computer Sciences at UC Berkeley. Pioneering the frontiers of science & technology with broad impact on society.
If you would like closed captions for any of these videos, please contact:
esg@eecs.berkeley.edu
Computer Science Golden Bear Advising for First Years, 2024
17 views
Videos
Computer Science Transfer Golden Bear Advising 2024
61 views · 1 day ago
Mark D. Weiser Excellence in Computing Scholarship 2024
39 views · 14 days ago
Kevin K. Gong Memorial Scholarship for Bright Minds and Big Hearts 2024
29 views · 14 days ago
Jim and Donna Gray Endowment Award 2024
14 views · 14 days ago
James Tullock Memorial Scholarship Award 2024
9 views · 14 days ago
James H. Eaton Memorial Scholarship 2024
13 views · 14 days ago
Sevin Rosen Funds Award for Innovation 2024
18 views · 14 days ago
Samuel Silver Memorial Scholarship Award 2024
11 views · 14 days ago
Timothy B. Campbell Innovation Award 2024
12 views · 14 days ago
EECS Major Citation & EECS Outstanding TA Award 2024
11 views · 14 days ago
Demetri Angelakos Memorial Achievement Award 2024
13 views · 14 days ago
C. V. & Daulat Ramamoorthy Distinguished Research Award 2024
16 views · 14 days ago
Art and Mary Fong Undergraduate Scholarship 2024
32 views · 14 days ago
IEEE Standard 754 Milestone Plaque Dedication 2023
207 views · 1 month ago
The Evolution of CS Education: Embracing Diversity, AI, and Competency Amidst Historical Context
430 views · 8 months ago
The Joseph Thomas Gier Memorial Dedication Ceremony
242 views · 9 months ago
Baccalaureate Ceremony: Class of 2019 L&S Computer Science Commencement
207 views · 1 year ago
Computer Science Transfer Golden Bear Advising 2023
687 views · 1 year ago
Baccalaureate Ceremony: Class of 2018 L&S Computer Science Commencement
235 views · 1 year ago
Congratulations!
Really sad that people still share Charles Duell's apocryphal "quote" over 100 years later. In fact, he said this in 1902: "In my opinion, all previous advances in the various lines of invention will appear totally insignificant when compared with those which the present century will witness. I almost wish that I might live my life over again to see the wonders which are at the threshold."
He explains very clearly, and the pauses allow me to organize the ideas into a flow, making it easier to follow the topic. Thanks for sharing this
Thank you, Professor.
Seeing this after gpt-4o 😊
my boy Jimmy is swollen af
Got into EECS.
yooo good job! same
It's amazing that a professor of a top university, no less, thinks this will be a game changer. It'll hardly speed up the easy things in a SoC. Seems a rather naive idea to me.
Jim Keller is fucking jacked hahaha
Most excellent.
this dhruv fellow has charmed me
Stacked LEGO-PoL CPU voltage regulator. What do you think about it?
I am very excited by this line of research
1:00 dawg 💀
Best subtitling I've seen in any video.
The supposed benefits are extremely naive. Supposing you got all state for a huge distributed OS in one database… that does not mean any application can read any data - which is what you require for many of the supposed benefits including monitoring - this would be a security nightmare so you have to lock everything down with complex auth systems. Also the supposed OAOO benefits assume debit and credit DBs are of the same company and would therefore be moved to one VoltDB. They are not.
video is interlaced? wtf? replace potato with a camera already
I am proud of Dr. Shanaz Shabazov, because she is the pride of Azerbaijan ❤
Request captions!
"After this, not only Apple but also non-semiconductor companies like Meta, MS, and Amazon each began making their own AI chips."
Amazing! I had no idea there was already so much progress in this area
Great lecture, but there is some kind of a buzzing, is there any way to EQ the audio?
Michael Stonebraker, nice to see you again. Hope to hear more.
What a wonderful storyteller.
There is a high-frequency sound coming from the video; I don't know if anyone else can hear it.
THIS IS BIG NEWS, thanks for sharing.
Very nice talk....
Claire Tomlin is amazing
Thank you for sharing this with the world!
@32:08 "Turning Brain Waves Into Words"
*What's the clear definition of (human) intelligence?* 🤔
Fascinating
❤
57:05 I think Lex Fridman broke the record with a half-hour conversation with codec avatars.
" Oh brother where art thou "
So beautiful ❤️❤️👌👌 However, people like Rana Kar, Sudipto Pal, and Aniruddha Dasgupta are criticizing these beautiful ventures, saying that industries should not embrace quantum computing and AI investment. They advocate for the old system of work.
amazing thanks @Berkeley EECS
One thing people don't talk about much is the evolutionary training humans have received. With death before reproduction as the loss function, humans have developed a complete pre-trained neural network operational from before birth. So what we are doing after is analogous to transfer learning. I've often considered that sleep might be when transfer learning takes place. Working memory may be more akin to a vectorized database. In any case, we modern humans don't start learning at birth from nothing. I remember you showing how newly born birds are afraid of a hawk's shadow. Birds born without this knowledge don't live very long and therefore likely don't reproduce.
Amandio Olmpiio Naaasvaanga electrecista aasvuttto
Where is the sensor of the pressure pump located?
Open problem: valuing the weight of evidence. Just because something is widely believed and repeated doesn't mean it's true.
Microsoft Catapult's FPGA solution was latency optimized for batch size of 1, i.e. it delivered the fastest inference and full utilization without the need to group input samples in a batch. The TPU is also written in Verilog or some RTL, and the FPGA is configured once and used as a soft ASIC, with firmware and SW stack on top.
Impressive, great
Ni hao ma (how are you?)
"The expectation and mindset sets your direction and your possibilities, and it's important..." 8:14 Amazing perspective.
7:20 "everything that can be invented has been invented" (1899). God, I wish that in 20-100 years this is what we sound like to future generations 🤣
Very special work ❤
Wonderful lecture!
Here is a ChatGPT summary of John's talk:
- Welcome to the fifth seminar in the Berkeley AI series, hosted by Ken and featuring John Schulman, a Berkeley graduate and co-founder of OpenAI.
- John is the inventor of modern deep-learning-based policy gradient algorithms, including Trust Region Policy Optimization and Proximal Policy Optimization.
- John's talk focuses on the problem of truthfulness in language models, which often make things up convincingly.
- John proposes a conceptual model of what's going on when neural nets are used for question answering tasks, which involves a knowledge graph stored in the weights of the neural net.
- John claims that any attempt to train a model with behavior cloning will result in a hallucination problem, as the correct target depends on the knowledge in the network, which is unknown to the experimenter.
- John suggests that reinforcement learning may be part of the solution to fixing the truthfulness problem.
- Language models can be trained to output their state of knowledge with the correct amount of hedging and expressed uncertainty.
- Models can be trained to minimize log loss, which is a proper scoring rule; this results in a model that is calibrated and can output reasonable probabilities.
- Models can be trained with RL from human feedback to learn when to say "I don't know" and how much to hedge.
- ChatGPT is an instruction-following model from OpenAI that uses a similar methodology with RL from human feedback.
- Evaluations of the model show that it is improving on factuality metrics.
- Retrieval in the language model context means accessing an external source of knowledge.
- Retrieval is important for verifiability, as it allows humans to easily check the accuracy of a model's answer.
- WebGPT was a project that focused on a narrower type of question answering, where the model would do research online and answer questions.
- ChatGPT is an alpha product that uses the same methods as WebGPT, but only browses when it doesn't know the answer.
- An open problem is how to incentivize the model to accurately express its uncertainty in words.
- Another open problem is how to go beyond what labelers can easily do, as it is hard to check a long answer about a technical or niche subject.
- John discussed the idea that it is often easier to verify that a solution is correct than to generate a correct solution.
- He discussed the P versus NP problem and how a weak agent can provide an incentive to a strong agent to solve a hard class of problems.
- He discussed the idea of delegating tasks and using mechanism design to set up incentives.
- He discussed the difficulty of rating answers when there is no fixed answer, and the idea of redirecting the question to a more factual one.
- He discussed the idea of using an inner monologue format for interpretability, and the potential theoretical concerns with it.
- He discussed the difference in the model's capabilities when the knowledge is inside versus outside of the model.
- He discussed the conflict between training the model not to withhold information in open-domain contexts and not to produce unsupported information in closed-domain contexts.
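The "proper scoring rule" point in the summary above can be illustrated numerically: the expected log loss of a predicted probability is minimized exactly when the prediction equals the true probability, which is why minimizing log loss pushes a model toward calibrated outputs. This is a minimal sketch; the true probability `p = 0.7` and the search grid are arbitrary choices for illustration.

```python
import math

def expected_log_loss(p: float, q: float) -> float:
    """Expected log loss (cross-entropy) of predicting probability q
    for a binary event whose true probability is p."""
    return -(p * math.log(q) + (1 - p) * math.log(1 - q))

p = 0.7  # true event probability (arbitrary, for illustration)

# Search a grid of candidate predictions q in (0, 1).
qs = [i / 1000 for i in range(1, 1000)]
best_q = min(qs, key=lambda q: expected_log_loss(p, q))

# The minimizer coincides with the true probability: reporting
# anything other than p can only increase expected log loss,
# which is what makes log loss a "proper" scoring rule.
print(best_q)  # 0.7
```

A squared-error loss on probabilities (the Brier score) has the same property; an improper rule, such as rewarding only the most likely label, would instead incentivize the model to overstate its confidence.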
If you listen closely, the "knowledge graph" bit is an analogy. GPT does not contain nor rely on a knowledge graph in the proper sense.
Here is a summary of the key points from the talk:
• John discusses the issue of hallucination and factual accuracy with large language models. He argues that behavior cloning or supervised learning is not enough to avoid the hallucination problem. Reinforcement learning from human feedback can help improve the model's truthfulness and ability to express uncertainty.
• His conceptual model is that language models have some internal "knowledge graph" stored in their weights. Fine-tuning allows the model to output correct answers based on that knowledge. But it also leads the model to hallucinate when it lacks knowledge.
• John claims that models do know about their own uncertainty based on the probability distributions they output. However, incentivizing the model to properly express uncertainty in words remains an open problem. The current reward model methodology does not perfectly capture hedging and uncertainty.
• Retrieval and citing external sources can help improve verifiability and fact-checking during training. John discusses models that can browse the web to answer technical questions, citing relevant sources.
• Open problems include how to train models to express uncertainty in natural language, how to go beyond what human labelers can easily verify, and how to optimize for true knowledge rather than human approval.
Thanks for the great summary!
Main point that was missed in the summary IMO: The finetune has to match what the model knows. If you tell it to say facts it doesn't know (even if the labeller does), you're teaching it to make stuff up, and if you tell it to say "I don't know" when it actually does know the answer (even if the labeller doesn't), you're teaching it to withhold knowledge.
I think the shader approach could be improved even further by using automatic *pixel-perfect* segmentation masks, depth masks, normals, motion vectors, or whatever other information you can get directly from the actual generated images, to also train directly on recognizing individual entities. Perhaps some of these shader toys could also be combined into a supershader that can smoothly interpolate between all of them, since quite a few of them are directly related.