Ilya Sutskever, co-founder of OpenAI, has launched a new AI firm called Safe Superintelligence (SSI), raising $1 billion from prominent investors such as Andreessen Horowitz and Sequoia Capital. Founded in June with a focus on safe AI models, SSI is dedicated to addressing the critical problem of ensuring AI remains aligned with human values, with operational offices in Palo Alto and Tel Aviv.
Ilya Sutskever, a co-founder of OpenAI, has established a new artificial intelligence firm named Safe Superintelligence (SSI), which has secured $1 billion in funding from prominent investors. Notable contributors to the round include venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, a venture capital partnership led by Nat Friedman, former CEO of GitHub, and Daniel Gross, co-founder of SSI, also participated.

Founded in June, SSI is focused on developing artificial intelligence models with an emphasis on safety. The launch comes shortly after Mr. Sutskever departed from OpenAI, where he served as chief scientist and played a pivotal role in creating safety systems designed to ensure that artificial intelligence remains aligned with human values. SSI articulates its mission on its website, stating, “Building safe superintelligence is the most important technical problem of our time.”

The company operates offices in Palo Alto, California, and Tel Aviv, Israel, and aims to assemble a select cadre of elite engineers and researchers to advance its objectives. Mr. Sutskever’s exit from OpenAI was marked by controversy, occurring just months after he was involved in the removal of Chief Executive Sam Altman, a decision he acknowledged regretting shortly thereafter.
The establishment of Safe Superintelligence (SSI) represents a significant step in the ongoing conversation about the ethics and safety of artificial intelligence. With increased public attention on the potential risks of advanced AI systems, demand for companies focused on safety measures has escalated. Mr. Sutskever’s venture highlights the importance of prioritizing safety standards as AI technologies continue to evolve at an unprecedented pace, and its substantial backing from high-profile investors underscores growing confidence in responsible AI development.
In summary, Ilya Sutskever’s new venture, Safe Superintelligence, has garnered substantial interest and investment with its ambitious goal of building safe artificial intelligence models. With backing from leading venture capital firms and a clear commitment to AI safety, SSI is positioned to play an influential role in ensuring that AI technologies align with human values and in shaping the future landscape of artificial intelligence.
Original Source: www.livemint.com