
AWS Lambda's Billing Policy Draws Criticism as Vercel Reveals Technique to Avoid Idle-Time Fees

Fluid Compute Recycles Idle Compute Resources to Reduce Runaway Function Costs

AWS Lambda's billing policy for idle time has come under scrutiny, with Vercel asserting a method to avoid unnecessary charges.


In the realm of serverless computing, Vercel's innovative approach, Fluid Compute, is making waves by significantly reducing costs associated with long-running or latency-prone tasks, often a challenge on Amazon Web Services' Lambda platform.

Vercel's Fluid Compute addresses this issue by reusing idle compute instances instead of starting new ones for each request. This strategy reduces redundant infrastructure use, leading to substantial savings.

Moreover, Fluid Compute employs an "Active CPU" pricing model, charging only for the actual active CPU time consumed and provisioned memory, rather than the traditional billing model that encompasses all idle time. This shift results in up to 95% cost savings on compute bills for workloads characterised by long idle waiting or low concurrency, common in serverless AI or I/O-bound applications.
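To make the difference concrete, the two billing models can be sketched in a few lines of arithmetic. This is an illustration only: the per-second rates and the workload numbers below are assumptions chosen for the sketch, not Vercel's or AWS's exact published prices.

```typescript
// Illustrative comparison of duration-based vs. active-CPU billing.
// All rates and workload figures are assumed values for this sketch.

const gbProvisioned = 1;        // provisioned memory (GB)
const wallSeconds = 30;         // total request duration, mostly idle I/O wait
const activeCpuSeconds = 0.2;   // CPU actually busy (e.g. before/after an AI call)

const durationRatePerGbSecond = 0.0000166667; // traditional per-GB-second rate (assumed)
const activeCpuRatePerSecond = 0.000128;      // active-CPU rate (assumed)
const memoryRatePerGbSecond = 0.0000000106;   // provisioned-memory rate (assumed)

// Traditional model: billed for the full wall-clock duration, idle included.
const durationBilled = wallSeconds * gbProvisioned * durationRatePerGbSecond;

// Active-CPU model: billed for busy CPU time plus a small memory charge.
const activeCpuBilled =
  activeCpuSeconds * activeCpuRatePerSecond +
  wallSeconds * gbProvisioned * memoryRatePerGbSecond;

const savings = 1 - activeCpuBilled / durationBilled;
console.log({ durationBilled, activeCpuBilled, savings });
```

With these assumed numbers, a request that spends 30 seconds mostly waiting on an upstream AI call but only 200 ms actually computing comes out roughly 95% cheaper under active-CPU billing, which matches the scale of savings claimed for idle-heavy workloads.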

One customer reported a bill shock caused by functions making slow-returning AI calls: a bill that had been $300 a month jumped to $3,550, with the majority of the increase attributed to 'serverless function duration'. After turning on Vercel's Fluid Compute, the customer reported improvements.

Vercel started working on streaming UI data to the browser in 2020, when AWS Lambda did not support streaming. To circumvent this, they implemented a TCP-based protocol to create a tunnel between Vercel and AWS Lambda functions.

A Vercel Function Router converts the data returned from the tunnel to a stream that is returned to the client. This feature is particularly useful for serverless AI or I/O-bound applications, where long-running tasks can be handled more efficiently.
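The router's job of turning tunnel traffic back into a client-facing stream can be sketched as follows. This is a hypothetical illustration, not Vercel's actual implementation: the `TunnelConnection` interface and its `onChunk`/`onClose` callbacks are invented names standing in for whatever the real TCP tunnel exposes.

```typescript
// Hypothetical sketch: convert chunks arriving over a tunnel into a
// streamed HTTP response. TunnelConnection, onChunk, and onClose are
// illustrative names, not Vercel's actual API.

interface TunnelConnection {
  onChunk(cb: (chunk: Uint8Array) => void): void;
  onClose(cb: () => void): void;
}

function tunnelToResponse(tunnel: TunnelConnection): Response {
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      // Forward each chunk to the client as it arrives, so the browser
      // can render partial UI before the function finishes.
      tunnel.onChunk((chunk) => controller.enqueue(chunk));
      tunnel.onClose(() => controller.close());
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/html; charset=utf-8" },
  });
}
```

The key design point is that the stream is created eagerly and chunks are enqueued as they arrive, rather than buffering the whole function response before replying, which is what a non-streaming Lambda integration would force.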

AWS Lambda, Amazon's serverless compute platform, is suitable for short bursts of work but can be costly for long-running or latency-prone tasks. Each request on AWS Lambda runs in its own environment and is billed for the full duration, even when idle. For reference, AWS Lambda (Arm architecture) costs $0.20 per million requests, plus $0.048 per GB-hour of instance usage.

Vercel's Fluid Compute, by reusing idle compute instances and charging only for active CPU time, offers a more cost-effective solution for these tasks, making Vercel's platform more suitable for applications requiring prolonged or I/O-bound serverless functions, including AI workloads.

It's worth noting that prior to Fluid Compute, Vercel charged $0.60 per million requests and $0.18 per GB-hour of function usage. Despite the hefty markup over Lambda's own rates, Vercel's optimisation work on top of AWS Lambda does mitigate the cost.
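Using the per-unit prices quoted above, the size of that markup over raw Lambda (Arm) works out directly:

```typescript
// Markup of Vercel's pre-Fluid function pricing over AWS Lambda (Arm),
// using the per-unit prices quoted in the article.
const lambdaPerMillionRequests = 0.20; // USD
const lambdaPerGbHour = 0.048;         // USD
const vercelPerMillionRequests = 0.60; // USD
const vercelPerGbHour = 0.18;          // USD

const requestMarkup = vercelPerMillionRequests / lambdaPerMillionRequests; // ≈ 3x
const gbHourMarkup = vercelPerGbHour / lambdaPerGbHour;                    // ≈ 3.75x
console.log(requestMarkup, gbHourMarkup);
```

That is roughly a 3x markup on requests and 3.75x on memory-time, which is why the idle-time savings from Fluid Compute matter so much to the overall comparison.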

Vercel is the home of Next.js, a React-based framework recommended by the React team as the best implementation of React Server Components (RSC). This further solidifies Vercel's position as a leading player in the serverless computing landscape.

In summary, Vercel’s Fluid Compute efficiently handles long-running or latency-prone serverless workloads by reusing idle compute instances, charging only for active CPU time, and enabling substantial savings over traditional AWS Lambda billing models for such tasks.


  1. The innovative technology, Fluid Compute, developed by Vercel, is designed to address cost issues associated with long-running tasks on cloud-based platforms like Amazon Web Services' Lambda, especially in serverless AI or I/O-bound applications.
  2. Vercel's Fluid Compute strategy of reusing idle compute instances rather than starting new ones for each request, and its "Active CPU" pricing model, significantly reduces costs, potentially saving up to 95% on compute bills.
  3. A shift from traditional billing models to one that charges only for active CPU time and provisioned memory can result in substantial savings, as illustrated by the customer whose duration-based bill spiked because of functions with slow-returning AI calls.
  4. Vercel's Fluid Compute is particularly beneficial for serverless AI or I/O-bound applications, where long-running tasks can be handled more efficiently, making Vercel's platform a more cost-effective solution compared to Amazon's serverless compute platform, AWS Lambda.
