Berkeley EECS

Videos

Computer Science Transfer Golden Bear Advising 2024 · 61 views · 1 day ago
Lotfi A. Zadeh Prize 2024 · 30 views · 14 days ago
Ross N. Tucker Memorial Award 2024 · 21 views · 14 days ago
Mark D. Weiser Excellence in Computing Scholarship 2024 · 39 views · 14 days ago
Kevin K. Gong Memorial Scholarship for Bright Minds and Big Hearts 2024 · 29 views · 14 days ago
Leon O. Chua Award 2024 · 20 views · 14 days ago
Jim and Donna Gray Endowment Award 2024 · 14 views · 14 days ago
James Tullock Memorial Scholarship Award 2024 · 9 views · 14 days ago
James H. Eaton Memorial Scholarship 2024 · 13 views · 14 days ago
Sevin Rosen Funds Award for Innovation 2024 · 18 views · 14 days ago
Samuel Silver Memorial Scholarship Award 2024 · 11 views · 14 days ago
Timothy B. Campbell Innovation Award 2024 · 12 views · 14 days ago
Warren Y. Dere Design Award 2024 · 60 views · 14 days ago
Eugene L. Lawler Prize 2024 · 7 views · 14 days ago
Eli Jury Award 2024 · 8 views · 14 days ago
EECS Major Citation & EECS Outstanding TA Award 2024 · 11 views · 14 days ago
Demetri Angelakos Memorial Achievement Award 2024 · 13 views · 14 days ago
David J. Sakrison Memorial Prize 2024 · 16 views · 14 days ago
C. V. & Daulat Ramamoorthy Distinguished Research Award 2024 · 16 views · 14 days ago
Arthur M. Hopkin Award 2024 · 35 views · 14 days ago
Art and Mary Fong Undergraduate Scholarship 2024 · 32 views · 14 days ago
IEEE Standard 754 Milestone Plaque Dedication 2023 · 207 views · 1 month ago
CS Major Information Session 2024 · 418 views · 1 month ago
EECS Major Information Session 2024 · 312 views · 1 month ago
The Evolution of CS Education: Embracing Diversity, AI, and Competency Amidst Historical Context · 430 views · 8 months ago
The Joseph Thomas Gier Memorial Dedication Ceremony · 242 views · 9 months ago
Baccalaureate Ceremony: Class of 2019 L&S Computer Science Commencement · 207 views · 1 year ago
Computer Science Transfer Golden Bear Advising 2023 · 687 views · 1 year ago
Baccalaureate Ceremony: Class of 2018 L&S Computer Science Commencement · 235 views · 1 year ago

COMMENTS

  • @simplicity8402 · 15 days ago

    Congratulations!

  • @senatorlainez · 23 days ago

    Really sad that people still share Charles Duell's apocryphal "quote" over 100 years later. In fact, he said this in 1902: "In my opinion, all previous advances in the various lines of invention will appear totally insignificant when compared with those which the present century will witness. I almost wish that I might live my life over again to see the wonders which are at the threshold."

  • @carvalhoribeiro · 1 month ago

    He explains very clearly, and the pauses allow me to organize the ideas into a flow, making it easier to follow the topic. Thanks for sharing this

  • @Thebetterlife_9991 · 1 month ago

    Thank you, Professor.

  • @SUZAKU__007 · 1 month ago

    Seeing this after gpt-4o 😊

  • @SM-qo9gr · 1 month ago

    my boy Jimmy is swollen af

  • @temle7489 · 1 month ago

    Got into EECS.

  • @DJTrancenergy · 2 months ago

    It's amazing that a professor at a top university, no less, thinks this will be a game changer. It'll hardly speed up the easy things in an SoC. Seems a rather naive idea to me.

  • @chefatchangs4837 · 2 months ago

    Jim Keller if fucking jacked hahaha

  • @mondomonkey8581 · 2 months ago

    Most excellent.

  • @isabellaborkovic2448 · 2 months ago

    this dhruv fellow has charmed me

  • @azamatbezhan1653 · 2 months ago

    Stacked LEGO-PoL CPU voltage regulator. What do you think about it?

  • @sethcoast · 3 months ago

    I am very excited by this line of research

  • @waterblah6167 · 3 months ago

    1:00 dawg 💀

  • @18000rpm · 3 months ago

    Best subtitling I've seen in any video.

  • @marccawood · 3 months ago

    The supposed benefits are extremely naive. Even supposing you got all the state for a huge distributed OS into one database… that does not mean any application can read any data, which is what you would require for many of the supposed benefits, including monitoring. That would be a security nightmare, so you would have to lock everything down with complex auth systems. Also, the supposed OAOO benefits assume the debit and credit DBs belong to the same company and could therefore be moved to one VoltDB. They are not.

  • @metalim · 3 months ago

    The video is interlaced? WTF? Replace the potato with a camera already.

  • @maharram8 · 4 months ago

    I am proud of Dr. Shanaz Shabazov, because she is the pride of Azerbaijan ❤

  • @grimsk · 5 months ago

    Request captions!

  • @grimsk · 5 months ago

    "After this, not only Apple but also non-semiconductor companies like Meta, MS, and Amazon started making their own AI chips"

  • @won20529jun · 5 months ago

    Amazing! I had no idea there was already so much progress in this area

  • @wege8409 · 5 months ago

    Great lecture, but there is some kind of buzzing; is there any way to EQ the audio?

  • @utube3805 · 6 months ago

    Michael Stonebraker nice to see you again. Hope to hear more.

  • @tejas81 · 6 months ago

    What a wonderful story teller.

  • @royalfalcon2021 · 6 months ago

    There is a high-frequency sound coming from the video; I don't know if anyone else can hear it.

  • @adolephasmith6116 · 6 months ago

    IT IS BIG NEWS, thanks for sharing.

  • @koushikdey8609 · 6 months ago

    Very nice talk....

  • @redSTEM · 6 months ago

    Claire Tomlin is amazing

  • @gagegr · 6 months ago

    Thank you for sharing this with the world!

  • @idfubar · 7 months ago

    @32:08 "'Turning Brain Waves Into Words'"

  • @AlgoNudger · 7 months ago

    *What's the clear definition of (human) intelligence?* 🤔

  • @user-ee8te3dh2b · 8 months ago

    Fascinating

  • @Shiffo · 8 months ago

    57:05 I think Lex Fridman broke the record with a half-hour conversation with codec avatars.

  • @ctankep · 9 months ago

    " Oh brother where art thou "

  • @ceceropanini6644 · 9 months ago

    So beautiful ❤️❤️👌👌 However, people like Rana Kar, Sudipto Pal, and Aniruddha Dasgupta are criticizing these beautiful ventures, saying that industries should not embrace quantum computing and AI investment. They advocate for the old system of work.

  • @yotubecreators47 · 9 months ago

    amazing thanks @Berkeley EECS

  • @dr.mikeybee · 9 months ago

    One thing people don't talk about much is the evolutionary training humans have received. With death before reproduction as the loss function, humans have developed a complete pre-trained neural network operational from before birth. So what we are doing after is analogous to transfer learning. I've often considered that sleep might be when transfer learning takes place. Working memory may be more akin to a vectorized database. In any case, we modern humans don't start learning at birth from nothing. I remember you showing how newly born birds are afraid of a hawk's shadow. Birds born without this knowledge don't live very long and therefore likely don't reproduce.

  • @user-bw9gt6pk6g · 9 months ago

    Amandio Olmpiio Naaasvaanga electrecista aasvuttto

    • @user-bw9gt6pk6g · 9 months ago

      Where is the sensor of the pressure pump located?

  • @johan.j.bergman · 10 months ago

    Open problem: valuing the weight of evidence. Just because something is widely believed and repeated doesn't mean it's true.

  • @SergiuM · 10 months ago

    Microsoft Catapult's FPGA solution was latency optimized for batch size of 1, i.e. it delivered the fastest inference and full utilization without the need to group input samples in a batch. The TPU is also written in Verilog or some RTL, and the FPGA is configured once and used as a soft ASIC, with firmware and SW stack on top.

  • @beautifulmind684 · 10 months ago

    Awesome, great

  • @weathrman · 10 months ago

    "The expectation and mindset sets your direction and your possibilities, and it's important..." 8:14 Amazing perspective.

  • @KB-gy5gg · 11 months ago

    7:20 "everything that can be invented has been invented. -1899", god I wish that in 20-100 years' time this is what we'll sound like to future generations 🤣

  • @evastetsenko4302 · 11 months ago

    Very special work ❤

  • @francisdelacruz6439 · 1 year ago

    Wonderful lecture!

  • @mbrochh82 · 1 year ago

    Here is a ChatGPT summary of John's talk:
    - Welcome to the fifth seminar in the Berkeley AI series, hosted by Ken and featuring John Schulman, a Berkeley graduate and co-founder of OpenAI.
    - John is the inventor of modern deep-learning-based policy gradient algorithms, including Trust Region Policy Optimization and Proximal Policy Optimization.
    - John's talk focuses on the problem of truthfulness in language models, which often make things up convincingly.
    - John proposes a conceptual model of what's going on when two neural nets are used for question-answering tasks, which involves a knowledge graph stored in the weights of the neural net.
    - John claims that any attempt to train a model with behavior cloning will result in a hallucination problem, as the correct target depends on the knowledge in the network, which is unknown to the experimenter.
    - John suggests that reinforcement learning may be part of the solution to fixing the truthfulness problem.
    - Language models can be trained to output their state of knowledge with the correct amount of hedging and expressed uncertainty.
    - Models can be trained to minimize log loss, which is a proper scoring rule, and this results in a model that is calibrated and can output reasonable probabilities.
    - Models can be trained with RL from human feedback to learn when to say 'I don't know' and how much to hedge.
    - ChatGPT is an instruction-following model from OpenAI that uses a similar methodology with RL from human feedback.
    - Evaluations of the model show that it is improving on factuality metrics.
    - Retrieval in the language-model context means accessing an external source of knowledge.
    - Retrieval is important for verifiability, as it allows humans to easily check the accuracy of a model's answer.
    - WebGPT was a project that focused on a narrower type of question answering, where the model would do research online and answer questions.
    - ChatGPT is an alpha product that uses the same methods as WebGPT, but only browses when it doesn't know the answer.
    - An open problem is how to incentivize the model to accurately express its uncertainty in words.
    - Another open problem is how to go beyond what labelers can easily do, as it is hard to check a long answer about a technical or niche subject.
    - John discussed the idea that it is often easier to verify that a solution is correct than to generate a correct solution.
    - He discussed the P versus NP problem and how a weak agent can provide an incentive to a strong agent to solve a hard class of problems.
    - He discussed the idea of delegating tasks and using mechanism design to set up incentives.
    - He discussed the difficulty of rating answers when there is no fixed answer, and the idea of redirecting the question to a more factual one.
    - He discussed the idea of using an inner-monologue format for interpretability, and the potential theoretical concerns with it.
    - He discussed the difference in capabilities of the model when the knowledge is inside or outside of the model.
    - He discussed the conflict between training the model not to withhold information in open-domain contexts and not producing unsupported information in closed-domain contexts.

    • @KathyiaS · 1 year ago

      If you listen closely, the "knowledge graph" bit is an analogy. GPT does not contain nor rely on a knowledge graph in the proper sense.
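The summaries in this thread mention that minimizing log loss, a proper scoring rule, pushes a model toward calibrated probabilities. A minimal numeric sketch of why that is true (my own illustration, not code from the talk; the function name `expected_log_loss` is made up for the example):

```python
# Why log loss is a "proper" scoring rule: the expected loss is minimized
# exactly when the model reports the true probability of the event, so
# training on log loss rewards honest, calibrated probability estimates.
import math

def expected_log_loss(reported_p: float, true_p: float) -> float:
    """Expected negative log-likelihood when the event truly occurs with
    probability true_p but the model reports reported_p."""
    return -(true_p * math.log(reported_p)
             + (1 - true_p) * math.log(1 - reported_p))

true_p = 0.7
candidates = [0.5, 0.6, 0.7, 0.8, 0.9]
best = min(candidates, key=lambda p: expected_log_loss(p, true_p))
print(best)  # 0.7: reporting the true probability minimizes expected loss
```

Any other reported probability (over- or under-confident) strictly increases the expected loss, which is the sense in which the scoring rule is "proper."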

  • @Jack-vv7zb · 1 year ago

    Here is a summary of the key points from the talk:
    • John discusses the issue of hallucination and factual accuracy with large language models. He argues that behavior cloning or supervised learning is not enough to avoid the hallucination problem. Reinforcement learning from human feedback can help improve the model's truthfulness and ability to express uncertainty.
    • His conceptual model is that language models have some internal "knowledge graph" stored in their weights. Fine-tuning allows the model to output correct answers based on that knowledge. But it also leads the model to hallucinate when it lacks knowledge.
    • John claims that models do know about their own uncertainty, based on the probability distributions they output. However, incentivizing the model to properly express uncertainty in words remains an open problem. The current reward-model methodology does not perfectly capture hedging and uncertainty.
    • Retrieval and citing external sources can help improve verifiability and fact-checking during training. John discusses models that can browse the web to answer technical questions, citing relevant sources.
    • Open problems include how to train models to express uncertainty in natural language, go beyond what human labelers can easily verify, and optimize for true knowledge rather than human approval.

    • @mirwox · 1 year ago

      Thanks for the great summary!

    • @arirahikkala · 1 year ago

      Main point that was missed in the summary IMO: The finetune has to match what the model knows. If you tell it to say facts it doesn't know (even if the labeller does), you're teaching it to make stuff up, and if you tell it to say "I don't know" when it actually does know the answer (even if the labeller doesn't), you're teaching it to withhold knowledge.

  • @Kram1032 · 1 year ago

    I think the shader thing could be improved even further by using automatic *pixel perfect* segmentation masks, depth masks, normals, motion vectors, or whatever other information you can get directly from the actual generated images, to also train directly on recognizing individual entities. Perhaps some of these shadertoys could also be combined into a Supershader that can smoothly interpolate between all of them, since quite likely a few of them are directly related.