Whether or not you made the journey to this year’s re:Invent, great announcements are always easy to lose amid an action-packed week of keynotes, breakouts, expo hall demos, and networking sessions. No need to worry: we’re always happy to be a big part of the re:Invent experience and share our observations with you.
This year, re:Invent 2023 leaned heavily into the following key areas:
- Investing in all layers of the generative AI stack
- Making security accessible
- Innovating on the developer experience
- Enhancing observability everywhere
Unsurprisingly, generative AI (GenAI) dominated the conference as a central theme, with AWS unveiling several impressive announcements for all three layers of the GenAI stack. But it wasn’t just AWS. Vendors from all industries showcased their latest GenAI advancements throughout the massive expo hall.
Even prior to getting on the ground in Las Vegas, teams at AWS had been quietly rolling out their “pre:Invent” announcements. One of the more highly anticipated releases was the general availability of Amazon Bedrock, a platform that democratizes private access to some of the most popular language models from leaders in the AI space. Along with this launch, Datadog released an integration for Bedrock to help customers gain critical insights into how to optimize their foundation models.
If your organization uses large language models (LLMs) and other generative models, Datadog LLM Observability helps you gain insights into the health and performance of those models. You can easily track your LLMs in production and identify problematic clusters, model drift, and prompt and response characteristics that impact model performance.
At re:Invent, AWS unveiled a number of new launches for their GenAI portfolio:
- PartyRock, an Amazon Bedrock playground, is a free way for anyone to experiment with Bedrock in a “no-stakes” sandbox.
- Reflecting a growing consciousness around the ethical considerations of artificial intelligence, Guardrails for Amazon Bedrock was announced to foster responsible deployment of LLM-based applications.
- The introduction of the Graviton4 processor further cemented AWS’s commitment to green computing, signaling a continued shift toward more energy-efficient technology solutions.
It’s no secret that the software industry has seen a fundamental shift in how we secure our applications. Movements like DevSecOps have put security within every part of the software development cycle, and new, more stringent regulations have forced us to protect and observe our workloads more than ever before.
At Datadog, it has been our goal to make it as easy and straightforward as possible for engineers—by using their traditional telemetry—to detect and act on dangerous security conditions. That’s why we’ve released products like Application Security Management, Cloud Security Management, Cloud SIEM, and more to help you get the most out of the data you’re already collecting.
EKS rolled out a pod identity feature where users can now bind only the necessary role to a specific pod. This feature makes it easier to securely assign AWS permissions to pods running in a Kubernetes cluster. We wrote a deep dive about this new feature on our Security Labs blog.
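As a rough sketch of the workflow (all names below are placeholder examples, not from the announcement): you create a pod identity association that binds an IAM role to a Kubernetes service account, and only pods that run under that service account receive the role’s permissions.

```yaml
# Step 1 (outside Kubernetes): create the association between a service
# account and an IAM role, e.g. with the AWS CLI:
#   aws eks create-pod-identity-association \
#     --cluster-name demo-cluster \
#     --namespace payments \
#     --service-account payments-sa \
#     --role-arn arn:aws:iam::123456789012:role/payments-s3-read
#
# Step 2: any pod that uses that service account is granted only the
# permissions of payments-s3-read, with no node-wide instance role needed.
apiVersion: v1
kind: Pod
metadata:
  name: payments-worker
  namespace: payments
spec:
  serviceAccountName: payments-sa  # binds this pod to the associated IAM role
  containers:
    - name: worker
      image: payments-worker:latest
```

Because the role is attached at the service-account level rather than the node level, other pods in the cluster get no access to it, which is the least-privilege behavior described above.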
More exciting are the new testability improvements to AWS IAM Access Analyzer. Much like the developer experience enhancements, these are a huge step forward for testing identity and access policies: users can now check whether an updated policy grants more access than the previous version, including access to critical actions. To support these new capabilities, Datadog has enhanced our existing IAM Access Analyzer integration.
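To make the idea of a “does this policy grant more access?” check concrete, here is a toy, stdlib-only illustration. The real Access Analyzer performs full semantic analysis of conditions, resources, and wildcards; this sketch only compares the Action lists of Allow statements, and all policy contents here are hypothetical examples.

```python
import json


def new_actions(old_policy: str, new_policy: str) -> set:
    """Return actions allowed by new_policy that old_policy did not allow.

    Toy comparison: only inspects Allow statements' Action lists; it ignores
    conditions, resources, and wildcard semantics that a real analyzer handles.
    """
    def allowed(policy_json: str) -> set:
        doc = json.loads(policy_json)
        stmts = doc["Statement"]
        if isinstance(stmts, dict):  # a single statement may be a bare object
            stmts = [stmts]
        actions = set()
        for stmt in stmts:
            if stmt.get("Effect") == "Allow":
                action = stmt.get("Action", [])
                actions.update([action] if isinstance(action, str) else action)
        return actions

    return allowed(new_policy) - allowed(old_policy)


# Hypothetical before/after versions of a policy
old = json.dumps({"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "*"},
]})
new = json.dumps({"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:DeleteObject"], "Resource": "*"},
]})

print(new_actions(old, new))  # {'s3:DeleteObject'}
```

In this example, the updated policy adds `s3:DeleteObject`, which is exactly the kind of newly granted critical action these checks are designed to surface before deployment.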
We have also rolled out Cloud Infrastructure Entitlement Management (CIEM) inside of Cloud Security Management, allowing users to see their most dangerous access conditions inside of Security Inbox. CIEM detects permissions gaps, indirect access issues, and dangerous cross-account conditions.
While re:Invent didn’t have any big serverless security splashes this year, we rolled out Application Security Management for serverless. The new set of features adds detection heuristics to APM running inside of serverless functions and extends vulnerability management capabilities to all your Lambda-based workloads.
It’s no secret that in any cloud business, developer experience matters: having a “paved road” and great tooling is crucial to adoption. Here are some related enhancements announced for developers by AWS and Datadog:
- CodeWhisperer for macOS now adds terminal-based autocompletion for hundreds of CLIs, ensuring that builders need not be experts at knowing which magic words to type to get the job done. Don’t know the exact CLI to use, for example? CodeWhisperer can take a natural-language prompt and tell you which CLI tool best supports what you hope to accomplish. And for whatever your CodeWhisperer needs, Datadog already has out-of-the-box dashboards for AWS CodeWhisperer.
- Amazon CodeCatalyst with Amazon Q is designed to expedite software delivery by automating various development tasks. This enables developers to transform natural language inputs into fully tested, merge-ready code in just a few clicks, with options for feedback and manual adjustments as needed.
- As more organizations think about shifting left, Datadog has provided an AWS CodePipeline integration for CI Visibility.
Of course, observability is one of the main focuses for Datadog users on AWS, and we seek to extend observability wherever our customers need it.
For example, next-gen infrastructure such as serverless is consistently top of mind for our customers. We’re always excited to see AWS add foundational capabilities to platforms like AWS Lambda, such as the ability to ship OpenTelemetry (OTel) data. OTel is incredibly useful for capturing function payloads in serverless workflows, measuring cold starts, and many other use cases, which is why Datadog APM now supports custom instrumentation for AWS Lambda by using the OpenTelemetry Tracing API.
AWS Step Functions has also remained critical to serverless users, building and orchestrating serverless workflows from hundreds of services—including Lambda, Amazon EKS, and Amazon API Gateway. And to help you even better understand the health of your Step Functions executions, Datadog’s State Machine Map now provides a high-level visualization of your Step Functions workflow, along with execution details from each state.
We would be remiss not to call out one of the biggest announcements at re:Invent: the newest AWS storage class, Amazon S3 Express One Zone, which delivers consistent single-digit millisecond data access for your most latency-sensitive applications. Customers can leverage Datadog APM to monitor S3 Express One Zone performance.
Jointly with S3 Express One Zone’s launch, AWS also announced that EMR customers can now speed up data processing and analysis with Apache Spark applications by up to four times when processing data from the S3 Express One Zone storage class instead of S3 Standard. When customers have performance-critical workloads or require fast response time, they can leverage S3 Express One Zone when they run EMR Spark applications on an EC2 cluster.
Speaking of Apache Spark, Datadog recently released our private beta for Data Jobs Monitoring (DJM), a new product that provides data platform and data engineering teams visibility into the performance and reliability of their data processing jobs—beginning with those using Apache Spark. DJM makes it easy to alert on and troubleshoot job reliability issues and to identify inefficient job configurations or overprovisioned infrastructure—all with a view to improving performance and reducing costs. You can request access to the private beta if you are interested.
This is just a short summary of the things that have excited us about this year’s re:Invent. For more announcements during AWS re:Invent, visit the AWS News Blog. And be sure to look out for more recaps, summaries, and great content from our team on how you can build on the AWS Cloud. We hope to see you at next year’s re:Invent.