The following limitations apply to Atlas Stream Processing:
- The `state.stateSize` of a stream processor can't exceed 80% of the RAM available for its pod. For example, the maximum size of a stream processor on the `SP30` tier, which has 8 GB of RAM, is 6.4 GB. If the `state.stateSize` of any of your stream processors approaches 80% of its available RAM, consider stopping the processor and restarting it on a higher tier. If your stream processor already runs at the maximum tier enabled for your stream processing workspace, consider adjusting your stream processing workspace configuration to enable higher-tier stream processors. When a stream processor crosses the 80% RAM threshold, it fails with a `stream processing workspace out of memory` error. You can view the `state.stateSize` value of each stream processor with the `sp.processor.stats()` command. See View Statistics of a Stream Processor to learn more.
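The 80% rule above is simple arithmetic. The sketch below is purely illustrative (the helper name is an assumption, not an Atlas API):

```javascript
// Illustrative helper (not an Atlas API): compute the maximum allowed
// state.stateSize for a stream processor given its pod's RAM in GB.
function maxStateSizeGb(podRamGb) {
  return 0.8 * podRamGb; // processors fail past 80% of available RAM
}

// SP30 tier: 8 GB of RAM, so the limit is 6.4 GB.
console.log(maxStateSizeGb(8)); // 6.4
```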
- A stream processing workspace can use only clusters in the same project as sources or sinks. 
- An Atlas Stream Processing pipeline definition cannot exceed 16 MB. 
- Only users with the `Project Owner` or `Atlas admin` roles can use Atlas Stream Processing.
- Atlas Stream Processing currently supports only the following connection types:

  | Connection Type | Usage |
  | --- | --- |
  | Apache Kafka | Source or Sink |
  | Atlas Database | Source or Sink |
  | Sample Connection | Source Only |
- For Atlas Stream Processing pipelines using Apache Kafka as a `$source`: if the Apache Kafka topic acting as the `$source` of a running processor adds a partition, Atlas Stream Processing continues running without reading from the new partition. The processor fails when it detects the new partition after you restore it from a checkpoint following a failure, or restart it after stopping it. You must recreate processors that read from topics with newly added partitions.
- Atlas Stream Processing currently supports only JSON-formatted data. It does not currently support alternative serializations such as Avro or Protocol Buffers. 
- For Apache Kafka connections, Atlas Stream Processing currently supports only the following security protocols:
  - `SASL_PLAINTEXT`
  - `SASL_SSL`
  - `SSL`
- For `SASL`, Atlas Stream Processing supports the following mechanisms:
  - `PLAIN`
  - `SCRAM-SHA-256`
  - `SCRAM-SHA-512`
  - `OAUTHBEARER`
- For `SSL`, you must provide the following assets so your Apache Kafka system can perform mutual TLS authentication with Atlas Stream Processing:
  - a Certificate Authority certificate (if you use one other than the default Apache Kafka CA)
  - a client TLS certificate
  - a TLS key file, used to sign your TLS certificate
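To make the protocol and mechanism options above concrete, the sketch below shows one hypothetical shape a Kafka connection definition might take. The field names, connection name, and broker address are illustrative assumptions, not the exact Atlas connection-registry schema; consult the Atlas documentation for the real format.

```javascript
// Hypothetical connection definition illustrating the supported security
// options listed above. Field names are illustrative assumptions, not the
// exact Atlas connection-registry schema.
const kafkaConnection = {
  name: "myKafkaConnection",          // placeholder connection name
  type: "Kafka",
  bootstrapServers: "broker1:9092",   // placeholder broker address
  security: { protocol: "SASL_SSL" }, // one of SASL_PLAINTEXT, SASL_SSL, SSL
  authentication: {
    mechanism: "SCRAM-SHA-256",       // one of PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, OAUTHBEARER
    username: "kafkaUser",            // placeholder credentials
    password: "********",
  },
};

console.log(kafkaConnection.security.protocol); // SASL_SSL
```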
 
- Atlas Stream Processing doesn't support `$function` JavaScript UDFs.
- Atlas Stream Processing supports a subset of the Aggregation Pipeline Stages available in Atlas, allowing you to perform many of the same operations on streaming data that you can perform on data-at-rest. For a full list of supported Aggregation Pipeline Stages, see the Stream Aggregation documentation. 
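As an illustration of mixing stream-specific and standard stages, here is a minimal pipeline sketch. The connection names, topic, and namespace are placeholder assumptions; `$source` and `$merge` are stream aggregation stages, and `$match` is a standard aggregation stage supported on streams.

```javascript
// A minimal pipeline sketch. Connection, topic, and namespace names
// are placeholder assumptions, not real resources.
const pipeline = [
  { $source: { connectionName: "myKafkaConnection", topic: "sensorEvents" } }, // stream-specific source stage
  { $match: { status: "error" } },                                             // standard aggregation stage
  { $merge: { into: { connectionName: "myAtlasCluster", db: "monitoring", coll: "errors" } } }, // sink
];

// In mongosh you could run it interactively with sp.process(pipeline),
// or persist it with sp.createStreamProcessor("errorLogger", pipeline).
console.log(pipeline.length); // 3
```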
- Atlas Stream Processing doesn't support the aggregation variables `$$NOW`, `$$CLUSTER_TIME`, `$$USER_ROLES`, and `$$SEARCH_META`.