
Google Cloud’s data agents promise to end the 80% toil that plagues enterprise data teams


Data doesn’t just magically appear in the right place for enterprise analytics or AI; it has to be built and routed through data pipelines. That is the domain of data engineering, and it has long been one of the most thankless and tedious tasks enterprises have to deal with.

Today, Google Cloud is taking direct aim at the tedium of data preparation with the introduction of a series of AI agents. The new agents span the entire data lifecycle. The data engineering agent in BigQuery automates complex pipeline creation through natural language commands. A data science agent turns notebooks into intelligent workspaces that can autonomously execute workflows. And conversational analytics now includes a code interpreter that handles advanced Python analysis for business users.

“If I think about who does data engineering today, it’s not just engineers; data analysts, data scientists, every data persona complains about how hard it is to find data, how hard it is to reason about it, how hard it is to get access to high-quality data,” Yasmeen Ahmad, managing director of data cloud at Google Cloud, told VentureBeat. “Most of the workflows we hear about from our users are 80% of this effort just wrangling data, doing data engineering and getting good-quality data they can work with.”

Targeting the data preparation bottleneck

Google built the data engineering agent in BigQuery to create complex data pipelines from natural language prompts. Users describe multi-stage workflows, and the agent handles the technical implementation, including ingesting data from Cloud Storage, applying transformations and running quality checks.


The agent automatically writes complex SQL and Python scripts. It handles anomaly detection, schedules pipelines and fixes errors. These tasks traditionally require considerable technical expertise and ongoing maintenance.

The agent breaks natural language requests into several steps. First, it recognizes that it needs to create connections to data sources. It then builds suitable table structures, loads the data, identifies primary keys for joins, reasons about data quality problems and applies cleaning functions.

“Typically, that entire workflow would have required a data engineer to write a lot of complex code, build that complex pipeline and then manage and iterate on that code over time,” said Ahmad. “With the data engineering agent, it can now create new pipelines from natural language. It can modify existing pipelines. It can troubleshoot problems.”
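
To make that concrete, below is a minimal, hypothetical sketch of the kind of pipeline code such an agent might generate, and that a data engineer would previously have written by hand. It uses the google-cloud-bigquery Python client; the project, dataset, bucket and column names are invented for illustration and are not Google’s actual agent output.

```python
# Illustrative sketch only: the kind of pipeline code an agent might generate.
# All project, dataset, bucket and column names here are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# Step 1: ingest raw files from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/orders/*.csv",          # hypothetical source
    "example_project.staging.orders_raw",        # hypothetical destination
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,        # infer a suitable table structure
        skip_leading_rows=1,
    ),
)
load_job.result()  # wait for the load to finish

# Step 2: clean and deduplicate on the assumed primary key, then publish.
client.query("""
    CREATE OR REPLACE TABLE `example_project.analytics.orders` AS
    SELECT * EXCEPT(row_num)
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY updated_at DESC) AS row_num
      FROM `example_project.staging.orders_raw`
      WHERE order_id IS NOT NULL           -- basic quality filter
    )
    WHERE row_num = 1
""").result()

# Step 3: a simple quality check the agent might add on its own.
rows = client.query(
    "SELECT COUNT(*) AS n FROM `example_project.analytics.orders` WHERE order_id IS NULL"
).result()
assert next(iter(rows)).n == 0, "quality check failed: null primary keys"
```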

How enterprise data teams will work with the data agents

Data engineers tend to be a very hands-on group of people.

The various tools typically used to build a data pipeline, including those for streaming, orchestration, quality and transformation, do not go away with the new data engineering agent.

“Engineers are still aware of those underlying tools, because what we see is that data people, yes, they love the agent and they really see these agents as an expert partner and collaborator,” said Ahmad. “But often our engineers actually want to see the code, they actually want to visually see the pipelines these agents have created.”

While the agent can work autonomously, data engineers can still see exactly what it is doing. Ahmad explained that data professionals often review the code the agent writes and make further suggestions to the agent to adjust or tune the data pipeline.

Building a data agent ecosystem with an API foundation

There are several vendors in the data space building out agentic AI for data workflows.

Startups are building specialized agents for data workflows, while large vendors including Databricks, Snowflake and Microsoft are all building their own agentic AI technologies that can also help data professionals.

Google’s approach differs somewhat in that it builds its agentic AI services for data on the Gemini Data Agents API, an approach that lets developers embed Google’s natural language processing and code interpretation capabilities in their own applications. This represents a shift away from closed, first-party tools toward an extensible platform approach.

“Behind the scenes, all of these agents are actually built as a series of APIs,” said Ahmad. “Increasingly, we intend to make these API services available to our partners.”

Google plans to publish these API services and agent APIs, and it is running lighthouse preview programs in which partners embed the APIs in their own interfaces, including notebook providers and ISV partners that build data pipeline tools.
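
As a rough illustration of what that embedding could look like, the snippet below sketches how a partner application might call such an agent service over HTTP. The endpoint URL, request body and response fields are assumptions made for the example; they do not reflect a documented Google API surface.

```python
# Hypothetical illustration of embedding an agent API in a partner application.
# The endpoint, request shape and response fields are assumed for this sketch.
import requests


def ask_data_agent(question: str, project: str, token: str) -> str:
    """Send a natural-language data question to an (assumed) agent endpoint."""
    resp = requests.post(
        # Placeholder URL, not a real Google endpoint.
        f"https://example-dataagents.googleapis.com/v1/projects/{project}:query",
        headers={"Authorization": f"Bearer {token}"},
        json={"prompt": question},   # assumed request body
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("answer", "")  # assumed response field


# Example usage inside a partner's own interface, such as a notebook plugin:
# print(ask_data_agent("Which pipelines failed quality checks this week?",
#                      "example-project", token))
```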

What it means for enterprise data teams

For companies that want to lead in AI-driven data operations, this announcement signals an acceleration toward autonomous data workflows. These capabilities could offer significant competitive advantages in time and resource efficiency. Organizations should assess their current data team capacity and consider pilot programs for pipeline automation.

For enterprises planning later AI adoption, the integration of these capabilities into existing Google Cloud services changes the landscape. The infrastructure for advanced data agents is becoming standard rather than premium. This shift is likely to raise baseline expectations for data platform capabilities across the industry.

Enterprises must weigh the efficiency gains against the need for oversight and control. Google’s transparency approach may offer a middle ground, but data leaders should develop governance frameworks for autonomous agent operations before deploying them widely.

The emphasis on API availability signals that custom agent development will become a competitive differentiator. Enterprises should consider how to use these foundational services to build domain-specific agents that address their unique business processes and data challenges.

