OpenAI, Microsoft, and GitHub have been named in a class action lawsuit alleging that their AI code-generation tool Copilot violates copyright law.
Lawyer and developer Matthew Butterick announced last month that he had teamed up with the law firm Joseph Saveri to investigate Copilot. They wanted to know if and how the software violates the legal rights of programmers by scraping and republishing their work without proper attribution under existing open source licenses.
The law firm has now filed a class action lawsuit in the US District Court for the Northern District of California in San Francisco. “We question the legitimacy of GitHub Copilot,” Butterick said.
“This is the first step on a long journey. As far as we know, this is the first class action lawsuit in the US challenging the training and output of AI systems. It won’t be the last. AI systems are not exempt from the law. Those who create and operate these systems must remain accountable,” he continued in a statement.
“When companies like Microsoft, GitHub and OpenAI choose to flout the law, they shouldn’t expect us, the public, to sit still. AI must be fair and ethical for everyone. If it is not, it can never achieve its vaunted aims of elevating humanity. It will be just another way for the privileged few to benefit from the labor of the many.”
The Software Freedom Conservancy, which declined to comment on the legal claims in the case, noted that the suit is a class action: “Given that almost every line of FOSS ever written is likely to be included in the Copilot training set, it is very likely that almost everyone reading this will find themselves part of the class when the court certifies it. As such, each of you, perhaps in the distant future or maybe very soon, will have to decide whether or not to join this action. We at SFC are making this decision right now, too.”
Scotland rips out Chinese AI security cameras
Edinburgh City Council pledged to scrap CCTV cameras bought from HikVision, a company accused of using facial recognition to monitor Uyghur Muslims in China.
Asked at a council meeting if and when they plan to remove HikVision’s equipment, officials confirmed: “After the completion of the public area CCTV upgrade project, there will be no more HikVision cameras on the public network,” a representative told Edinburgh Live.
Edinburgh City Council estimated that there are over 1,300 cameras in council buildings, but did not know the total number of HikVision units installed. These systems will reportedly be replaced with “compliant devices” in public areas by February 2023; it is not clear when all of the cameras in community buildings will be replaced.
Politicians in the UK have called on the government to ban HikVision surveillance cameras after privacy activist group Big Brother Watch launched a campaign claiming the technology could introduce security holes and was linked to human rights abuses against Uyghur Muslims. In the US, HikVision has been placed on a government blacklist that prevents American companies from importing its products without express permission.
OpenAI launches new AI investment program; DALL-E is available as an API
Converge, the first program launched by the OpenAI Startup Fund, will commit $1 million each, along with resources and expertise, to support ten early-stage companies.
Participants will take part in a five-week program and gain access to OpenAI’s latest models before they are released to the public. Interested engineers, executives, and researchers can apply to Converge before November 25.
The move is a win-win for OpenAI. The startups could become customers, using the company’s APIs to develop their products; and if they grow and become successful, OpenAI, as an investor, will also benefit financially.
OpenAI also released its text-to-image model DALL-E as an API this week. The Images API allows developers to integrate DALL-E into their applications. The API is still in beta and is initially limited to generating up to 25 images per five minutes.
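A quota like 25 images per five minutes is typically something client code has to respect on its own side. As a hypothetical illustration only (the class and method names below are assumptions for this sketch, not part of OpenAI’s SDK), a minimal sliding-window throttle might look like:

```python
import time
from collections import deque


class SlidingWindowLimiter:
    """Client-side throttle: allow at most `limit` calls per `window` seconds."""

    def __init__(self, limit=25, window=300, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock      # injectable clock, handy for testing
        self.calls = deque()    # timestamps of calls still inside the window

    def try_acquire(self):
        """Return True if a call is allowed now, False if the quota is used up."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False
```

A real integration would check `try_acquire()` before each image request and sleep or queue the work when it returns False, rather than letting the server reject the call.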
You can play with Google’s Imagen, so to speak
Google is releasing its AI text-to-image model, Imagen, in a mobile app that only allows users to create images of fake cities and monsters.
Imagen will be introduced in Google’s AI Test Kitchen app, but the available version is very limited. People hoping to play around with the tool will only be able to conjure up AI-created images using two modes called City Dreamer and Wobble.
You can choose from different keyword options to describe what you want Imagen to generate. For example, the Wobble mode lets you choose what material your monster should look like it’s made of: clay, felt, marzipan, or rubber, The Verge first reported.
The AI Test Kitchen app serves as a portal for Google to test some of its AI models and gather public feedback. The company’s infamous LaMDA chatbot is also available in the app in limited form. Generative AI models can be unpredictable and can be coaxed into generating toxic or objectionable content. By limiting Imagen’s capabilities, Google makes it less likely that its text-to-image model will produce anything inappropriate.
Josh Woodward, senior director of product management at Google, offered an example of how a prompt referencing Tulsa, Oklahoma, could be sensitive. “There was a series of race riots in Tulsa in the ’20s,” he said. “And if someone puts in ‘Tulsa,’ the model might not even refer to it, and you can imagine that with places around the world.” ®