Before the Confidential Computing Summit, Opaque Systems releases confidential AI and analytics technologies

Opaque Systems, an AI and analytics company, today unveiled updates to its confidential computing platform. The new offerings prioritise organisational data security when working with large language models (LLMs).

The company said it will present these developments during its keynote at the first Confidential Computing Summit, which takes place on June 29 in San Francisco.

They include Data Clean Room (DCR), a zero-trust analytics offering, and privacy-preserving generative AI optimised for Microsoft Azure’s Confidential Computing Cloud. The company claims its generative AI combines secure hardware enclaves with distinctive cryptographic fortifications to provide multiple layers of security.

“The Opaque platform ensures data remains encrypted end to end during model training, fine-tuning, and inference, thus guaranteeing that privacy is preserved,” Jay Harel, VP of product at Opaque Systems, told VentureBeat. “Our platform secures data at rest, in transit, and while in use to reduce the likelihood of data breaches throughout the lifecycle.”

Opaque wants to give businesses the ability to safely analyse private data while preserving its security and guarding against unauthorised access.

To serve confidential AI use cases, the platform has expanded its ability to protect machine learning and AI models: it uses trusted execution environments (TEEs) to run them on encrypted data, preventing unauthorised access.

The company claims its zero-trust Data Clean Rooms (DCRs) encrypt data at rest, in transit, and in use. This approach guarantees that all information sent to the clean room is kept confidential at all times.

Using confidential computing to guarantee data security

LLMs like ChatGPT train on publicly available data. According to Opaque, training these models to their full potential requires access to a company’s private information without the risk of that data being exposed.

To reduce this risk, Opaque suggests that businesses use confidential computing, which keeps data secure throughout the entire model training and inference process. According to the company, the technique can unlock LLMs’ transformational potential.

“We use confidential computing technology to leverage specialised hardware made available by cloud providers,” said Opaque’s Harel. “Through the whole machine learning lifecycle, datasets are secured end to end thanks to our privacy-enhancing technology. The model, prompt, and context are kept encrypted on Opaque’s platform when executing inference and throughout training.”

According to Harel, the absence of secure data sharing and analysis in organisations with numerous data owners has led to data access limitations, data set deletion, data field masking, and outright denial of data sharing.

In relation to generative AI and privacy, particularly in the context of LLMs, he claimed that there are three key problems:

  • Queries: LLM providers may see user queries, risking exposure of private data such as confidential source code or personally identifiable information (PII). The growing threat of hacking heightens this privacy problem.
  • Training models: Companies access and examine their own training data to improve AI models. However, retaining that data can lead to an accumulation of private information over time, making it more susceptible to breaches.
  • IP concerns for businesses using proprietary models: Fine-tuning models on corporate data requires either giving proprietary LLM providers access to that data or deploying private models internally. The risk of hacking and data breaches rises when outsiders gain access to private and sensitive information.

With these concerns in mind, the company built its generative AI technology. It aims to facilitate secure collaboration between organisations and data owners while guaranteeing regulatory compliance.

For instance, one firm may train and fine-tune a specialised LLM while another uses it for inference. Both firms’ data remains confidential, and neither has access to the other’s.
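The two-party arrangement above resembles a data clean room: each party seals its own asset (model or data) with its own key, and the assets meet in plaintext only inside the trusted environment. The sketch below is a toy illustration of that flow — the `CleanRoom` class, its key handling, and the trivial "model" are all invented for this example and bear no relation to Opaque's actual DCR implementation, which uses hardware enclaves and real cryptography.

```python
import secrets

def xor(key: bytes, data: bytes) -> bytes:
    # Toy XOR cipher standing in for real authenticated encryption; illustrative only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class CleanRoom:
    """Illustrative data clean room: each party seals its asset with its own
    key, and plaintext is reconstructed only inside run_inference()."""
    def __init__(self):
        self._keys = {}

    def register(self, party: str) -> bytes:
        # In a real deployment keys would be released only after remote attestation.
        key = secrets.token_bytes(32)
        self._keys[party] = key
        return key

    def run_inference(self, sealed_model: bytes, sealed_data: bytes) -> bytes:
        # Both assets are decrypted only here, inside the trusted boundary.
        model_bytes = xor(self._keys["model_owner"], sealed_model)
        data_bytes = xor(self._keys["data_owner"], sealed_data)
        # The "model" is just a scaling factor in this toy example.
        result = str(int(data_bytes) * int(model_bytes)).encode()
        # Only the data owner can decrypt the result.
        return xor(self._keys["data_owner"], result)

room = CleanRoom()
mk = room.register("model_owner")
dk = room.register("data_owner")
sealed_model = xor(mk, b"3")    # model owner seals its model parameters
sealed_data = xor(dk, b"14")    # data owner seals its record
sealed_result = room.run_inference(sealed_model, sealed_data)
print(xor(dk, sealed_result))   # data owner decrypts: b'42'
```

The key property being illustrated: the model owner never sees the data, the data owner never sees the model, and the host outside the clean room sees only ciphertext.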

Since Opaque’s technology ensures that all data is encrypted over its entire lifespan, organisations can train, fine-tune, and run inference on LLMs without ever accessing the raw data, according to Harel.

For the zero-trust Data Clean Room (DCR) service, the company emphasised the use of secure hardware enclaves and cryptographic fortification. It asserts that this approach to confidential computing offers multiple layers of defence against online threats and data breaches.

The system runs within a secure enclave on the customer’s own cloud instance (such as Azure or GCP), operating in a cloud-native environment. This configuration minimises data migration, allowing companies to keep their existing data architecture.

“Whether it’s consumer PII or secret corporate process data, our objective is to guarantee that everyone can trust the protection of their private data,” said Harel. “We give companies the ability to keep their data encrypted and safe for AI workloads across the whole lifespan, from model training and fine-tuning to inference, ensuring that privacy is maintained. Data is kept confidential while in use, in transit, and at rest, significantly reducing the likelihood of loss.”

