Show HN: Inngest – an open-source event-driven queue https://ift.tt/ITG4ePk

Hi HN! We're Tony and Dan, the founders of Inngest - https://www.inngest.com. Inngest is an event-driven queueing system. Existing queueing solutions have pretty terrible UX; we solve this by making it simple to write delayed or background jobs, triggered as step functions by JSON-based events.

At a high level, Inngest does two things:

- Ingests events from your systems via HTTP (pun intended)
- Triggers serverless functions in response to specific events, async, either immediately or after a delay.

This lets you build async functions (e.g. background jobs, webhook handlers) much faster, without worrying about config, queues, scaffolding, boilerplate, or infra. Because of the decoupling, it also means cleaner code. We talk about the benefits here [1].

Previously, Tony ran engineering at https://ift.tt/M4rAfsV and Dan was the CTO of https://buffer.com/. At both places we had to build and manage a lot of complex async logic. You could say that Buffer is one big queue, and at Uniform we had lots of compliance logic to run, all managed via queues. So we're very familiar with the problem.

Technicals, and how it's different:

Functions are declarative. They specify which events trigger them, with optional conditionals. This is great because you can deploy functions independently from your core systems, and you get things like canary deploys and immediate rollbacks.

Each function can have many steps, represented as a DAG. Each step can be any code: an AWS Lambda, custom code in a container, an HTTP call, etc. Edges of the DAG can also have conditions for traversal, and can "pause" until another event comes in, with TTLs and timeouts (e.g. after signup, run step 1, wait for the user to do something else, then run the next step).

Because functions are event driven, we also statically type and version events for you. This lets you inspect events and generate SDKs for them, or fail early on invalid data. It also lets you replay functions, test with historical data, or deploy functions and re-run historical events against them.

Architecturally, we've focused on simple standards that are easy to learn and adopt. Events are published via HTTP requests. Functions use args and stdout. You can get started without knowing any implementation details: you only need to send events via POST requests and write functions that react to them (rough sketches of both are below), nothing else required.

What people use us for:

- A replacement for their current queueing infrastructure (e.g. Celery).
- Running functions after receiving webhooks.
- Running business logic when users perform specific actions (e.g. publishing things at a specific time).
- Handling coordinated logic (e.g. when a user signs up, wait for a specific event to come in, then run another step).

Where we're at: we've open sourced our core execution engine [2], which lets you run an in-memory environment locally with a single command. We're working on opening up more and more of the platform so you can self host; that's currently our main goal. Right now, you can use us "serverless": because we record function state, we charge per 'step' of a function invoked. We've documented our core open-source architecture [3], and we've also released the function spec and interfaces in our repo. We talk more about our goals, vision, and reasoning in our open sourcing post [4]. There's also a minimal demo with a Next.js backend [5].

We know we're far from feature complete. There's so much more we can do.
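To make "send events via POST" concrete, here's a minimal TypeScript sketch. The endpoint URL, event key, event name, and payload fields are illustrative assumptions, not our exact API; check the docs for the real endpoint and event format.

    // Minimal sketch: publish a JSON event to Inngest over plain HTTP.
    // The URL, event key, and payload fields below are hypothetical placeholders;
    // check the docs for the real endpoint and event format.
    async function sendSignupEvent(): Promise<void> {
      const res = await fetch("https://example-ingest.inngest.com/e/YOUR_EVENT_KEY", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          name: "user/signed.up",                      // the event your functions subscribe to
          data: { userId: "user_123", plan: "pro" },   // arbitrary JSON payload
        }),
      });
      if (!res.ok) throw new Error(`event rejected: ${res.status}`);
    }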
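Here's a rough sketch of what a declarative, multi-step function could look like, written as a TypeScript object for readability. The field names are purely illustrative, not the actual spec; the real function spec and interfaces are published in our repo [2]. The idea it shows: a trigger event, two steps as DAG nodes, and an edge that pauses until a second event arrives (or a timeout fires) before traversing.

    // Hypothetical sketch of a declarative function definition (field names are
    // illustrative only; see the function spec in the repo [2] for the real shape).
    const postSignupDrip = {
      name: "post-signup-drip",
      triggers: [{ event: "user/signed.up" }],           // which events start the function
      steps: {
        "send-welcome": {
          run: "docker://acme/send-welcome:latest",      // any code: a container, a lambda, an HTTP call...
        },
        "send-activation-nudge": {
          run: "https://example.com/fns/nudge",
          after: [
            {
              step: "send-welcome",                                    // DAG edge from the previous step
              waitFor: { event: "user/activated", timeout: "72h" },    // pause until the event or the TTL expires
              if: "event.data.plan == 'pro'",                          // optional condition on the edge
            },
          ],
        },
      },
    };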
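Finally, a sketch of a step's own code under the "args and stdout" contract. I'm assuming here, purely for illustration, that the triggering event arrives as a JSON argument and that whatever the step prints to stdout is captured as its output; the exact contract is in the function spec [2].

    // Minimal sketch of a single step: read the event from argv, do work,
    // emit output on stdout. The argv/stdout details here are assumptions.
    const event = JSON.parse(process.argv[2] ?? "{}");

    // ...the actual work for this step goes here...
    const result = { emailedUserId: event?.data?.userId ?? null, status: "sent" };

    // Print the step's output so later steps (and replays) can consume it.
    console.log(JSON.stringify(result));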
If there are things you'd like to see, feedback, or improvements, please let us know. We'd love to hear your initial thoughts and make this better.

[1]: https://ift.tt/4etlSHE
[2]: https://ift.tt/GaJE0Fj
[3]: https://ift.tt/1LMnbfj
[4]: https://ift.tt/e6ANaXl
[5]: https://ift.tt/5Pwxn6Y
