What makes the brain faster than a computer? We asked Justin Kinney, a bioengineer, neuroscientist, and technologist at Western Digital, and here's what he said.
Western Digital’s Post
More Relevant Posts
-
Our AI paper is out (we didn't know!). It's open access; click here: https://lnkd.in/gjkpTMEP Margaret Boone Rappaport and Christopher Corbally are delighted to announce a really interesting paper on artificial intelligence in the Journal of Social Computing, "Hypothesis and Thought Experiment: Comparing Cognitive Competition of Neandertals and Early Humans to Our Coming Contest with AIs."
-
Very nice article by Alison Snyder at Axios Science on machine unlearning, featuring interviews with me, Seth Neel, PhD (Harvard), Zachary Lipton (CMU), and Gautam Kamath (Waterloo), discussing fundamental challenges and the progress made so far. Read the article here: https://lnkd.in/eqdxQwCH.
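For readers new to the topic, here is a minimal sketch in Python with scikit-learn (my choice of stand-in data and model, not anything from the article) of the naive "exact unlearning" baseline that the research discussed above tries to improve on: retraining from scratch without the deleted records.

```python
# Hedged illustration, not from the Axios article: exact unlearning by
# retraining without the deleted records. Research in machine unlearning
# tries to match this guarantee at far lower cost.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))            # arbitrary stand-in features
y = rng.integers(0, 2, size=500)         # arbitrary stand-in labels

model = LogisticRegression().fit(X, y)   # original model

forget = [3, 17, 42]                     # records a user asked to delete
keep = np.setdiff1d(np.arange(len(X)), forget)
unlearned = LogisticRegression().fit(X[keep], y[keep])  # exact but expensive
```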
-
Learn more about quantitative and intuitive measures of polymer similarity tomorrow with Dr. Debra Audus, project leader of the Polymer Analytics Project at the National Institute of Standards and Technology (NIST), as part of the AI+Science Schmidt Fellows Speaker Series!
-
🚀 We're excited to announce a new 500M-parameter model: 🌌 Space Time LLM. Recent breakthroughs in LLMs have produced an uncanny, game-changing ability to predict future outcomes, and impressive advances in quantization and compression, such as 1-bit LLMs, have contributed to these predictive capabilities. This model redefines our understanding of what, and how, LLMs learn. Check out the model and see what you can predict today (a loading sketch follows the link below): https://lnkd.in/exdy4R5G
NeuML/spacetimellm · Hugging Face (huggingface.co)
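A hedged loading sketch, assuming the model exposes the standard Hugging Face causal-LM interface (check the model card for the actual recommended usage; the prompt is a made-up example):

```python
# Illustrative only: assumes NeuML/spacetimellm follows the standard
# Hugging Face causal-LM interface; consult the model card before
# relying on this.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NeuML/spacetimellm")
model = AutoModelForCausalLM.from_pretrained("NeuML/spacetimellm")

inputs = tokenizer("In the year 2100, ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```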
-
After a long processing delay, the proceedings of the 2023 International Symposium on the Tsetlin Machine (ISTM) are now out on IEEE Xplore. Another 15 excellent articles exploring this new ML ecosystem; enjoy reading: https://lnkd.in/dG3uyJXz
-
(Not) looking at the election results, but thinking: if turnout was so low and one party's majority so high, did the folks who traditionally voted for a specific party simply abstain en masse? At least the ones not susceptible to the simplistic "blame outsiders" rhetoric of parties whose leaders or paymasters have interests 180 degrees away from those of the people who voted for them? My brain imagines harbours at low water, exposing what lies beneath. In other news, thinking that a Turing test is actually a measure of human gullibility. Also read another article on Medium this morning about designing systems by imagining how you'd design something if the objective was to fail completely. Time to read "Poor Charlie's Almanack", which the article references. I think my brain is full of random bits I need to go and rationalise :-)
-
Quantum Science | Deep Learning | Pure Mathematics | Startups | Formal Verification of Quantum Software | Programmable Cryptography | Quantum Materials
David Wakeham, Maria Schuld, this is a very novel perspective. I have been keen on topological deep learning (TDL) lately, and I can't help but connect ideas here: quantum algorithms and topological methods both seek to identify and exploit underlying structures in data that are invariant under certain transformations. In quantum computing, this is done through operations like the QFT that reveal symmetries in Fourier space; in TDL, the focus is on topological invariants that persist across scales and are resistant to perturbations in the data.

Here, the Quantum Fourier Transform (QFT) is used to expose symmetries in the data, particularly through the annihilator subspace in the Hidden Subgroup Problem (HSP). The invariance under subgroup transformations, as revealed by the QFT, is key to identifying hidden structures. In TDL, the topological features of data (e.g., persistence diagrams, homology groups) often reflect underlying symmetries and invariances: homological features remain invariant under continuous deformations, capturing the essential structure of the data. This is akin to how quantum algorithms use Fourier space to reveal invariant subspaces.

Often, data is represented as quantum states in a high-dimensional Hilbert space, and the Fourier transform lets it be analysed in a dual space where symmetries become explicit. TDL similarly represents data in high-dimensional spaces, often through embeddings that capture topological features; persistent homology, for instance, studies the data at various scales, identifying features that persist across dimensions. The connection lies in abstracting data to spaces where the essential features (topological in TDL, symmetry-related in quantum) are more readily analysed.

The proposed Data-Annihilator Overlap (DAO) heuristic suggests learning by finding the quantum subspace that best matches the data's inherent symmetries. In TDL, learning often involves identifying and preserving topological invariants during training; for example, neural networks might be designed to respect certain topological properties (e.g., preserving the Betti numbers under transformations), ensuring the learned model captures the intrinsic structure of the data. So, I don't know yet, but I can see a proper quantum topological deep learning emerging from this. Xanadu, PennyLane
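Since the parallel drawn above rests on the QFT exposing symmetries, here is a minimal sketch, assuming PennyLane's default.qubit simulator (the data and circuit are illustrative, not taken from the paper), of how a periodic amplitude pattern concentrates on a few frequencies after the QFT:

```python
# Minimal sketch, assuming PennyLane's default.qubit simulator: a
# periodic amplitude pattern concentrates on a few frequencies after
# the QFT, the mechanism HSP-style algorithms use to expose symmetries.
import numpy as np
import pennylane as qml

n_wires = 3
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def fourier_probs(state):
    qml.StatePrep(state, wires=range(n_wires))  # encode data as amplitudes
    qml.QFT(wires=range(n_wires))               # rotate into the dual (Fourier) basis
    return qml.probs(wires=range(n_wires))

# A period-2 comb over 8 amplitudes: after the QFT, all probability
# sits on frequencies 0 and 4, revealing the hidden period.
state = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float)
state /= np.linalg.norm(state)
print(fourier_probs(state))  # ~[0.5, 0, 0, 0, 0.5, 0, 0, 0]
```

That concentration onto an invariant set of frequencies is the Fourier-space analogue of a topological feature persisting across scales.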
What useful features are exposed by quantum algorithms like Shor's? And how can we use them to learn from data? Our new paper looks at the engine behind traditional quantum routines and how it can lead to new quantum machine learning heuristics. arxiv.org/abs/2409.00172 Spoiler alert: this is different from your usual QML paper - you won't find any variational circuits that beat classical ML, nor any proof of quantum advantage.
-
This week, we will continue the "Fairness and discrimination in predictive models" PhD course with discussions of machine learning, loss functions, distances, dissimilarities, and divergences https://lnkd.in/gMTneVbb (#3), and we will present the Wasserstein distance and its connections with optimal transport https://lnkd.in/gZMHR2Q4 (#4).
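For a flavour of session #4, a minimal sketch assuming SciPy (the data is illustrative, not from the course materials): the 1D Wasserstein distance between model-score distributions for two groups, which for empirical 1D measures coincides with the optimal transport cost under the |x - y| ground metric.

```python
# Illustrative sketch, not from the course: compare model-score
# distributions across two groups with the 1D Wasserstein distance.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)
scores_a = rng.beta(2.0, 5.0, size=1000)  # hypothetical scores, group A
scores_b = rng.beta(2.5, 4.5, size=1000)  # hypothetical scores, group B

# In 1D this equals the optimal transport cost with ground metric |x - y|,
# i.e. the integral of |F_A - F_B| between the two empirical CDFs.
print(wasserstein_distance(scores_a, scores_b))
```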
-
As a student of human behavior, I find this fascinating: the latest episode of The Futurist discusses brain-computer interfaces and the connection between technology, pain, and the brain. It's been both exciting and educational to watch this season unfold. #MedtronicEmployee https://dy.si/doDA2
-