MIT Reality Hack Hackathon: Part 1


This is the transcript to accompany a GIIDE post.

I just spent a wonderfully fun, intelligent, scintillating, and completely engaging 5 days at MIT at the "Reality Hack" hackathon as a mentor and judge.

This is part 1, where I'll be talking about the experience and what happens in a hackathon like this. Part 2 will be about my impressions, insights, and takeaways.

But first, for those who aren't familiar – a hackathon is an event that takes developers, designers, UX people, and others, and throws them together for a few days to create and build something in the short time they're given. This hackathon was focused on the virtual and augmented reality industry, and the companies that were there as sponsors brought their latest tech with them, in many cases tech that hasn't hit the market yet.

There were roughly 200 people participating; they had two and a half days to come up with an idea of what to make, form teams, and then create it. And contrary to popular assumption, it wasn't all nerdy male 20-year-olds; sure, there was some of that, but it was a refreshing mix of all ages, genders, and races.

Each company had a team of people there to help with the technicalities of developing on their tech. Microsoft was a massive sponsor (thank you!) and was there with their HoloLens 2s; Snap was there with their not-yet-launched AR Spectacles; Arctop with their brain-sensing device (yes, it reads your brain waves!), as was Magic Leap; Solana was there with their blockchain infrastructure; Looking Glass Factory with their super cool 8K, headset-free, hologram-powered displays; and a bunch more. Suffice it to say we got to play with some of the most cutting-edge XR technology out there.

The process is time-honored, but a little chaotic: the first day was dedicated to workshops by all the various sponsor teams, to introduce hackers to their devices and software and answer questions about developing something using them. That night (in a remarkably low-tech way) large sheets of paper with various categories like "health and wellness" and "the future of work" were hung up, and everyone ran around writing their ideas on the paper and finding other people who were interested in working on that idea with them. It was a rush of frenetic chaos! Eventually the groups formed and registered as teams.

The next morning the hacking started in earnest. I was one of a few mentors there in person, but there was a village of virtual mentors available to help with any questions – technical, design, business, whatever they needed; it really does take a village. The organizers had set up a Discord channel and hashtags to "call out" a mentor when they needed one, but I found that walking around and just talking to groups was super effective. Plus, I got to know a lot of people that way.

Unlike many hackathons where participants furiously work all-nighters, fueled by pizza and bad smells, this one was super well run and we were kept well fed and watered with delicious (and healthy!) meals three times a day. The first two nights there was a "networking event" at the MIT Media Lab, a few (alcohol-free) hours where everyone was encouraged to come take a break and have some fun. On the second night, Lucas Rizzotto of Lucas Builds the Future and AR House did a chill fireside chat with Sultan Sharrief, one of the organizers and founder of The Quasar Lab.

The hackathon closed at 11:30 pm each night, as opposed to the usual 24 hours a day. Most went home and continued working well into the wee hours, of course, but officially the day was over.

The third day went until 2:30 in the afternoon, and then judging kicked in! For me this was the most fun part. Each team set up at a numbered table to demo their project, and we used software called Gavel to go from assigned table to assigned table with only one remit: did we think the current table's project was better or worse than the previous one we saw? Using that info, 80 teams were pared down to a semifinal round, and eventually a few judges went behind closed doors to discuss and deliberate. Seven hours later (yes, seven hours – they took this very seriously) the winners emerged.
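For the curious: turning hundreds of one-bit "better or worse than the previous table" judgments into a global ranking is a classic pairwise-comparison problem. Gavel's real model is more sophisticated (a crowdsourced Bayesian variant), so this is just a minimal sketch of the underlying idea using a plain Bradley-Terry fit, with hypothetical team names – not Gavel's actual implementation.

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iters=100):
    """Rank teams from pairwise (winner, loser) judgments using the
    classic Bradley-Terry MM update. Assumes every team wins at least
    once and the comparison graph is connected."""
    teams = {t for pair in comparisons for t in pair}
    wins = defaultdict(int)      # total wins per team
    matches = defaultdict(int)   # head-to-head counts per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        matches[frozenset((winner, loser))] += 1

    strength = {t: 1.0 for t in teams}  # initial strength scores
    for _ in range(n_iters):
        new = {}
        for i in teams:
            # MM update: wins_i / sum over opponents of n_ij / (p_i + p_j)
            denom = sum(
                matches[frozenset((i, j))] / (strength[i] + strength[j])
                for j in teams
                if j != i and matches[frozenset((i, j))] > 0
            )
            new[i] = wins[i] / denom if denom else strength[i]
        # Normalize so scores stay on a stable scale
        total = sum(new.values())
        strength = {t: v * len(teams) / total for t, v in new.items()}

    return sorted(teams, key=strength.get, reverse=True)

# Hypothetical judgments: A beat B twice, B beat C twice, A and C split
ranking = bradley_terry(
    [("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"), ("A", "C"), ("C", "A")]
)
print(ranking)  # best to worst
```

The appeal of this approach for a hackathon is that no judge ever needs to assign an absolute score – each only answers "better or worse than the last one?", and the model stitches those local judgments into one ordering.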

That night we were treated to a real party, at a club with a DJ and an open bar; and I'm not embarrassed to admit that after two years of COVID quarantining, I partied like I was 20 years old. And paid for it the next day.

The awards ceremony the last morning was the final cherry on top; the mood was convivial and very supportive. By then it felt like a big family, and we all celebrated each win. The excitement as each category's winners were announced – and the prizes revealed, some of which were pretty amazing – was palpable. It was as feel-good an experience as one ever gets to be a part of.


I want to say thank you to the amazing group of people who organized and ran this incredible event. Sultan Sharrief was an inspiration and his energy is infectious; Austin Edelman was a fountain of organizational energy; Athena Demos kept the mood fun and kept things from getting too serious. I got to spend many hours hanging out with Dulce Baerga, Damon Hernandez, Mitch Chaiet, and Ben Erwin, among others; what more could you ask for?

Part 2 of this GIIDE series will be about my impressions, thoughts, and takeaways from the hackathon, as well as some of my favorite projects. I'll be releasing that later this week.

In the meantime, if you want to watch the final awards ceremony, click here.

Reading my mind

Fascinating stuff. And, whoa. The inevitable march towards brain-computer interfaces continues! "Researchers from Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person's brain activity as actual images mimicking what they observe in real time."

We are rapidly moving from keyboard and mouse input – which, although we've done it so long that it *seems* natural, is not – to spatial input; this is truly an astounding leap towards natural computing.

I applaud the application this particular work is aimed at (helping post-stroke patients with rehabilitation devices controlled by brain signals), but imagine a world where we don't have to interact with technology – and each other – through screens!

One of the many challenges is that although there is a standard model of brain architecture, everyone has their own variation, so there are no specific templates that can be applied. No doubt there will be a "training" period for the interface. But once "trained," our personal brain reader will be able to function across all interfaces; unless, of course, Apple and Microsoft put up the usual walled-garden model (a personal gripe, also true with VR headsets: this game only works with this system, etc.).

But inevitably, the early-stage development pays off, enough people adopt it, the squinky convoluted hoops early adopters need to jump through are ironed out, and mass adoption takes off. And while I realize that a true brain-computer interface is a long way off, I'm heartened by all the work I've seen from teams like this (CTRL-Labs in particular – interestingly, just bought by Facebook). And I hope it will improve quality of life both for patients with limitations and in mundane everyday life.

https://techxplore.com/news/2019-10-neural-network-reconstructs-human-thoughts.html

