Software Engineer
Ukraine
6 years of experience
We are building a Text-to-Video Human avatar generator. It is a pipeline of AI tools that works in 3 steps: 1. Text-to-Speech (creating an audio file from text input), 2. Text-to-Video (creating a video from text input), 3. Text-to-Human (using deepfake technology to make the video less random and merge the audio with the subject's lips). This is just the beginning of AI media and the natural next step for AI. It is better than most use cases in the same bracket because you have much more control over the subject, who can actually interact with their environment, be trained more easily, and move, which opens many more doors for applications than other technology out there. We have text-to-text, we have text-to-speech and we have text-to-video. Welcome to Text-to-Human.
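To make the flow of the pipeline concrete, here is a minimal sketch of how the three stages could chain together. All function names and file paths below are hypothetical placeholders, not the product's actual API; in practice each stage would be backed by a real text-to-speech, text-to-video, and lip-sync model.

```python
# Minimal sketch of the three-step Text-to-Human pipeline.
# The step functions are placeholders that only illustrate how the
# stages hand their outputs to one another.

def synthesize_speech(script: str) -> str:
    """Step 1, Text-to-Speech: return the path to a generated audio file."""
    return "speech.wav"  # placeholder output path

def generate_video(script: str) -> str:
    """Step 2, Text-to-Video: return the path to a generated base video."""
    return "scene.mp4"  # placeholder output path

def lip_sync(video_path: str, audio_path: str) -> str:
    """Step 3, Text-to-Human: merge the audio with the subject's lip movements."""
    return "avatar.mp4"  # placeholder output path

def text_to_human(script: str) -> str:
    """Run the full pipeline: text -> speech -> video -> lip-synced avatar."""
    audio = synthesize_speech(script)
    video = generate_video(script)
    return lip_sync(video, audio)

if __name__ == "__main__":
    print(text_to_human("Hello, and welcome to Text-to-Human."))
```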
Join us for a hackathon where we will be using OpenAI Whisper to create innovative solutions! Whisper is a neural net that has been trained to approach human level robustness and accuracy for English speech recognition. We will be using this tool to create applications that can transcribe in multiple languages, as well as translate from those languages into English. This will be a great opportunity for you to learn more about speech processing and to create some useful applications!
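For anyone preparing ahead of the event, here is a minimal sketch of the two Whisper tasks mentioned above (transcription in the original language and translation into English), using the open-source openai-whisper Python package. The model size and audio file name are example values.

```python
# pip install openai-whisper
import whisper

# Load a pretrained checkpoint; "base" is an example size, larger models
# such as "medium" or "large" trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe speech in its original language (Whisper auto-detects it).
transcription = model.transcribe("speech_sample.mp3")
print(transcription["language"], transcription["text"])

# Translate non-English speech directly into English text.
translation = model.transcribe("speech_sample.mp3", task="translate")
print(translation["text"])
```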
- This will be a 7-day virtual hackathon from 13-20 January
- Access AI21 Labs' state-of-the-art language models to build innovative applications
- Prizes and awards of up to $9000 API credits + $3500 cash
- Meet and learn from AI21 Labs and Lablab experts
- Join the community and find your team
- All levels are welcome
- AI tutorials and mentors to help you
- Receive a certificate of completion and Wordtune premium for submission
- Sign up now! It's free!
A new text-to-image model called Stable Diffusion is set to change the game once again. Stable Diffusion is a latent diffusion model that is capable of generating detailed images from text descriptions. It can also be used for tasks such as inpainting, outpainting, and image-to-image translation. Join us for an epic Stable Diffusion makers event, 11-13 November!
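For participants who want to try the model before the event, here is a minimal sketch of text-to-image generation with Stable Diffusion via the Hugging Face diffusers library. The checkpoint id, prompt, and GPU assumption are example choices, not something specified by the event.

```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (example model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# Generate a detailed image from a plain-text description.
image = pipe("a watercolor painting of a lighthouse at sunset").images[0]
image.save("lighthouse.png")
```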