Today, the Dapr maintainers released a new version of Dapr, the distributed application runtime. Dapr provides APIs for communication, state, and workflow to build secure and reliable microservices. This post highlights the major new features and changes for the APIs and components in release v1.14.
We’ve recently celebrated the 1.14 release with a webinar where contributors demoed the new features:
APIs
Jobs API (preview)
Many applications require job scheduling: taking an action at some point in the future. The Jobs API is an orchestrator for scheduling these future jobs, either at a specific time or on a recurring interval.
Example scenarios include:
- Schedule batch processing jobs to run every business day
- Schedule various maintenance scripts to perform clean-ups
- Schedule ETL jobs to run at specific times (hourly, daily) to fetch new data, process it, and update the data warehouse with the latest information
Not only does the Jobs API help you schedule jobs; internally, Dapr also uses the new Scheduler service to schedule actor reminders.
Jobs in Dapr consist of:
- The Jobs API building block
- The Scheduler control plane service
The HTTP API to start a job is:
POST http://localhost:3500/v1.0-alpha1/jobs/<name>
Where <name> is the name of the job to run.
The request payload has the following parameters:
- data: A protobuf message @type/value pair that contains the type and the serialized value.
- dueTime: An optional time at which the job should run.
- schedule: An optional schedule at which the job is to be run.
- repeats: An optional number of times the job should be triggered.
- ttl: An optional time to live, or expiration, of the job.
This HTTP example shows how to start a job named jobforjabba with a string payload of Running spice, scheduled to run every minute, up to five times:
POST http://localhost:3500/v1.0-alpha1/jobs/jobforjabba
{
  "job": {
    "data": {
      "@type": "type.googleapis.com/google.protobuf.StringValue",
      "value": "Running spice"
    },
    "schedule": "@every 1m",
    "repeats": 5
  }
}
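If you'd rather schedule the job from application code, here's a minimal Go sketch that posts the same payload to the local sidecar (this assumes the sidecar is listening on the default HTTP port 3500):
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// The same job payload as the HTTP example above: a protobuf Any
	// carrying a string value, with a schedule and a repeat count.
	body := []byte(`{
	  "job": {
	    "data": {
	      "@type": "type.googleapis.com/google.protobuf.StringValue",
	      "value": "Running spice"
	    },
	    "schedule": "@every 1m",
	    "repeats": 5
	  }
	}`)

	// POST to the alpha Jobs endpoint on the local Dapr sidecar.
	resp, err := http.Post(
		"http://localhost:3500/v1.0-alpha1/jobs/jobforjabba",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}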
For more details on the Job API operations and parameters, read the Jobs API reference in the Dapr docs.
Watch the Jobs API & Scheduler Service session by Cassie Coyle on YouTube:
Performance improvements in actor reminders & workflows (preview)
The new Scheduler service can optionally be used as the backend for actor reminders, enabling increased throughput and lower latency for both actors and workflows. Using the new Scheduler service delivers up to an 80-fold increase in throughput, scaling to millions of reminders and workflow activities, compared with thousands previously. To use the Scheduler service for actors and workflows, enable it in a Dapr Configuration resource as follows:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig
spec:
  features:
    - name: SchedulerReminders
      enabled: true
Note that existing actor reminder data is incompatible with the new Scheduler Service.
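To activate this configuration for an application on Kubernetes, apply it with kubectl apply -f featureconfig.yaml and reference it from the application's pod template via the dapr.io/config annotation; for example (the app ID below is a placeholder):
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "myapp"
  dapr.io/config: "featureconfig"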
Watch the Scalability & stabilization improvements for Dapr Workflow session by Oli Tomlinson on YouTube:
Streaming Subscriptions (preview)
Dapr now supports applications dynamically subscribing to PubSub topic events without restarting the Dapr sidecar. This was a much-requested feature, since developers often want to subscribe and unsubscribe to topics based on runtime conditions.
In cases where Dapr is not running as a sidecar, users often do not want to open a public port or create a tunnel to receive PubSub messages from Dapr. A streaming subscription allows applications to dynamically subscribe to PubSub topics and receive messages without opening a port to receive incoming traffic from Dapr.
Streaming subscriptions are defined in your application code and do not require a subscription endpoint in your application (which both programmatic and declarative subscriptions require). Streaming subscriptions also do not require the app to be configured with the sidecar to receive messages. With streaming subscriptions, since messages are sent to a message handler in code, there is no concept of routes or bulk subscriptions.
Here’s a Go example that uses the Subscribe method to dynamically subscribe to the orders topic.
package main

import (
	"context"
	"log"

	"github.com/dapr/go-sdk/client"
)

func main() {
	cl, err := client.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	sub, err := cl.Subscribe(context.Background(), client.SubscriptionOptions{
		PubsubName: "pubsub",
		Topic:      "orders",
	})
	if err != nil {
		panic(err)
	}
	// Close must always be called.
	defer sub.Close()

	for {
		msg, err := sub.Receive()
		if err != nil {
			panic(err)
		}

		// Process the event

		// We _MUST_ always signal the result of processing the message, else the
		// message will not be considered as processed and will be redelivered or
		// dead lettered.
		// msg.Retry()
		// msg.Drop()
		if err := msg.Success(); err != nil {
			panic(err)
		}
	}
}
There is also a SubscribeWithHandler method that takes a message handler function as the last argument. For more information, see Streaming Subscriptions in the Dapr docs.
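As a sketch of how that can look (method and type names here reflect the Go SDK at the time of writing; check the SDK reference for the exact signatures), the handler receives each event and returns a status telling Dapr how the message was processed:
package main

import (
	"context"
	"log"

	"github.com/dapr/go-sdk/client"
	"github.com/dapr/go-sdk/service/common"
)

// eventHandler is called for every message delivered on the topic. The
// returned status (Success, Retry, or Drop) signals the processing result.
func eventHandler(e *common.TopicEvent) common.SubscriptionResponseStatus {
	log.Printf("received event on topic %s: %v", e.Topic, e.Data)
	return common.SubscriptionResponseStatusSuccess
}

func main() {
	cl, err := client.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	stop, err := cl.SubscribeWithHandler(context.Background(), client.SubscriptionOptions{
		PubsubName: "pubsub",
		Topic:      "orders",
	}, eventHandler)
	if err != nil {
		log.Fatal(err)
	}
	// The returned function stops the subscription and must always be called.
	defer stop()

	// Block so messages keep being handled in the background.
	select {}
}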
Read more about the different types of subscriptions in the Understanding Dapr Pub/Sub Subscriptions Types blog post by Bilgin Ibryam. Or watch the Streaming Subscription session by Josh van Leeuwen on YouTube:
Outbox message projections (stable)
The transactional outbox feature is now marked as stable. The outbox feature allows you to update state and publish a message in a single transaction, and it works across a large combination of state stores and pub/sub brokers.
New in 1.14 is the ability to project, or shape, the payload that is sent to the message broker. This provides a lot of flexibility: fields can be added or removed. For example, when storing a full record in the state store, you can send only the ID in the message payload, resulting in smaller messages and better performance. For more information on outbox projections, read the Dapr Docs.
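As an illustration, a projection is expressed as a second operation for the same key, marked with the outbox.projection metadata entry; Dapr stores the full record and publishes only the projected value (the state store name, key, and fields below are hypothetical):
POST http://localhost:3500/v1.0/state/statestore/transaction

{
  "operations": [
    {
      "operation": "upsert",
      "request": {
        "key": "order1",
        "value": { "orderId": "7hf8374s", "item": "book", "quantity": 2 }
      }
    },
    {
      "operation": "upsert",
      "request": {
        "key": "order1",
        "value": { "orderId": "7hf8374s" },
        "metadata": { "outbox.projection": "true" }
      }
    }
  ]
}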
Watch the Outbox Message Projections session by Yaron Schneider on YouTube:
Runtime
Actors multi-tenancy
Namespacing in Dapr provides isolation, and thus multi-tenancy. With actor namespacing, the same actor type can be deployed into different namespaces, and you can call instances of these actors within the same namespace.
Each namespaced actor deployment must use its own state store, especially if the same actor type is used across namespaces. This is because namespace information is not part of the actor record, so separate state stores are required for each namespace to prevent Actor ID collisions. While you could use a different physical database for each actor namespace, some state store components provide a way to logically separate data by table, prefix, collection, and more. This allows you to use the same physical database for multiple namespaces, as long as you provide the logical separation in the Dapr component definition.
For example, when using SQLite:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.sqlite
  version: v1
  metadata:
    - name: connectionString
      value: "data.db"
    - name: tableName
      value: "namespace-actorA"
    - name: actorStateStore
      value: "true"
In self-hosted mode, you can specify the namespace for a Dapr instance by setting the NAMESPACE environment variable. On Kubernetes, you can create and configure namespaces when deploying actor applications. For example, start with the following kubectl commands:
kubectl create namespace namespace-actorA
kubectl config set-context --current --namespace=namespace-actorA
Then, deploy your actor applications into this namespace (in this example, namespace-actorA).
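For self-hosted deployments, the equivalent is to set the NAMESPACE environment variable when starting the application with the Dapr CLI; for example (the app ID, port, and run command are placeholders):
NAMESPACE=namespace-actorA dapr run --app-id actor-app --app-port 3000 -- go run main.go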
Namespaced actors use the multi-tenant Placement service. With this control plane service, each application deployment has its own namespace, and sidecars belonging to an application in namespace “ActorA” won’t receive placement information for an application in namespace “ActorB”.
Watch the Actor Multi-tenancy session by Elena Kolevska on YouTube:
Improved HTTP metrics filtering with path matching
When invoking Dapr using HTTP, metrics are created for each requested method by default. This can result in a high number of metrics, known as high cardinality, which can impact memory usage and CPU.
Path matching allows you to manage and control the cardinality of HTTP metrics in Dapr. It works by aggregation: rather than reporting a separate metric for every unique path, requests matching a configured pattern are reported under a single metric.
Metrics filtering is specified under the metrics section of a Dapr Configuration resource, for example:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  metrics:
    enabled: true
    http:
      increasedCardinality: false
      pathMatching:
        - /orders/{orderID}
Metrics generated with this configuration will look like this:
# matched paths
dapr_http_server_request_count{app_id="order-service",method="GET",path="/orders/{orderID}",status="200"} 5
# unmatched paths
dapr_http_server_request_count{app_id="order-service",method="GET",path="",status="200"} 1
For more details on optimizing HTTP metrics with path matching, read the Dapr Docs.
Watch the Improved HTTP metrics filtering with path matching session by Nelson Parente on YouTube:
Deploy Dapr per-node or per-cluster with Dapr Shared
In a typical Dapr deployment on Kubernetes, Dapr automatically injects a sidecar to enable the Dapr APIs for your applications for the best availability and reliability. Dapr Shared enables two alternative deployment strategies to create Dapr applications where the Dapr API is decoupled from the application lifecycle. The two deployment options are a per-node deployment (DaemonSet) or a per-cluster deployment (Deployment).
- DaemonSet: When running Dapr Shared as a Kubernetes DaemonSet resource, the daprd container runs on each Kubernetes node in the cluster. This can reduce network hops between the applications and Dapr.
- Deployment: When running Dapr Shared as a Kubernetes Deployment, the Kubernetes scheduler decides on which node in the cluster the daprd container instance runs.
Dapr Shared can be used to optimize resource usage or create a simpler testing environment. For more information on Dapr Shared, read the Dapr Docs.
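Dapr Shared is installed per application using a Helm chart. As a sketch, based on the Dapr Shared README at the time of writing (the instance name and app ID are placeholders), a per-node install could look like:
helm install my-shared-instance oci://registry-1.docker.io/daprio/dapr-shared-chart \
  --set shared.appId=myapp \
  --set shared.strategy=daemonset
Setting shared.strategy=deployment selects the per-cluster Deployment strategy instead.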
Watch the Dapr Shared session by Mauricio Salatino on YouTube:
Component improvements
Dapr decouples the functionality of its integrated set of APIs from their underlying implementations via components. Components of the same type are interchangeable since they implement the same interface. Release 1.14 contains many improvements to existing components. The improvements listed here are just a small subset.
Pub/sub brokers
- Apache Kafka now has metadata fields to:
- keep a connection alive to prevent errors after a connection timeout.
- configure heartbeat interval and session timeout.
- GCP Pub/Sub now has a configurable acknowledgment deadline (ackDeadline).
- RabbitMQ now has a metadata field to enable Single Active Consumer.
- Azure Service Bus ApplicationProperties are now added to the metadata payload on subscription.
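As an illustration, the new Kafka consumer tuning could look like this in a component definition (field names follow the Kafka pub/sub component docs at the time of writing; the broker address and values are illustrative):
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: "localhost:9092"
    - name: consumerGroup
      value: "group1"
    - name: authType
      value: "none"
    - name: heartbeatInterval
      value: "5s"
    - name: sessionTimeout
      value: "15s"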
State stores
- Azure Cache for Redis is now supported in all Redis components.
- AWS IAM Auth is now supported for PostgreSQL components.
Bindings
- AWS S3 storage class is added to the binding metadata when using the create operation.
Installing the new Dapr 1.14 release
Locally with the Dapr CLI
When you’re using the Dapr CLI on your local dev machine, perform the following steps to upgrade to the new Dapr version:
- Uninstall Dapr:
dapr uninstall --all
- Install Dapr:
dapr init
To upgrade the Dapr CLI itself:
- Linux:
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
- macOS:
brew upgrade dapr-cli
- Windows:
winget upgrade Dapr.CLI
Verify the installation by running dapr -v.
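The output should report the new CLI and runtime versions, along these lines (exact patch versions may differ):
CLI version: 1.14.0
Runtime version: 1.14.0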
On Kubernetes with Diagrid Conductor
The easiest way to upgrade Dapr on your Kubernetes clusters is with Diagrid Conductor Free, which helps Dapr developers visualize, troubleshoot, and optimize their Dapr applications running on Kubernetes.
You can sign up for Conductor Free here and try it with your Dapr apps on Kubernetes.
What's next?
This post is not a complete list of features and changes released in version 1.14. Read the official Dapr release notes for more information. The release notes also contain information on how to upgrade to this latest version.
Excited about these features and want to learn more? I'll cover the new features in more detail in future posts. Until then, join the Dapr Discord to connect with thousands of Dapr users.