Sunday Snaps #05
Taking Selfies in the Past, Turning Any Space Into a 3D World, and a New Programming Language
A language as easy as Python & as fast as C
Mojo is a new programming language being developed by a startup called Modular. It is designed to be a strict superset of Python that lets developers write high-performance code taking advantage of modern hardware accelerators like GPUs and TPUs. Mojo aims to achieve this by building on MLIR (Multi-Level Intermediate Representation), a compiler infrastructure that sits above LLVM IR and is well suited to AI workloads and many-core, heterogeneous hardware.
The motivation behind Mojo is to give Python developers a way to write code that takes full advantage of modern hardware. While Python is popular and widely used, it is not always the best choice for high-performance computing. By building on top of Python and exposing powerful hardware accelerators, Mojo aims to bridge this gap with a language that is both easy to use and highly performant.
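A quick sketch of what the superset claim means in practice: since Mojo is designed to accept Python as-is, ordinary Python like the snippet below is intended to run unchanged under Mojo (this is plain Python illustrating the compatibility goal, not Mojo-specific syntax such as its typed `fn` declarations, which are what unlock the performance gains):

```python
# Plain Python: because Mojo aims to be a strict superset of Python,
# code like this should run unchanged in Mojo. Mojo's speedups come
# from optionally adding stricter typing on top of code like this.
def fib(n: int) -> int:
    """Iterative Fibonacci; fib(0) == 0, fib(1) == 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # → 55
```

The pitch, then, is incremental adoption: start with existing Python, and tighten up only the hot paths with Mojo's typed constructs where performance matters.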
Using AI to travel through time and take selfies
The creation of AI art may seem simple, but it actually involves a time-consuming process that requires both AI tools and the artist's skills. The video below showcases the workflow of an anonymous artist who creates "Stelfie," a recurring character that takes time-traveling selfies.
The artist uses custom 3D-generated heads, sketches, and a combination of Photoshop and the AI program Stable Diffusion to achieve the character's ideal appearance. The process relies on techniques such as inpainting, outpainting, and denoising to control how each image is altered. Despite the labor-intensive workflow, the results are distinctive: realistic selfies that stay consistent with the recurring character, Stelfie.
Scientists use brain scans and AI to 'decode' thoughts
Scientists from the University of Texas at Austin have developed a system that can "decode" the general meaning of people's thoughts using brain scans and artificial intelligence (AI) modelling. The language decoder is the first to be able to reconstruct continuous language without an invasive brain implant, according to a study in the journal Nature Neuroscience.
Three people spent 16 hours in an fMRI machine listening to spoken narrative stories, which were fed into a neural network language model trained to predict how each person's brain would respond to perceived speech. The decoder could recover the gist of what the users were hearing, albeit not with complete accuracy. The technology is aimed at helping people who have lost the ability to communicate, but concerns have been raised about "mental privacy."
This AI model can turn any space into a 3D model
Luma AI, a California-based startup, has introduced an API for converting images and videos into 3D models with the aim of democratizing 3D scenes. Luma AI, which recently raised $20 million in a Series A funding round, wants to make the process of creating a 3D model faster and cheaper, reducing the time to 30 minutes and the cost to as little as $1 per model.
The startup provides guidelines for achieving the best possible result and allows developers to use the new API, while end users can access a web interface. Luma AI promises improvements in quality and processing time in the near future.
Combined with the potential of Unity AI, tools like this are making the game dev industry more efficient by the day.