
2016 has been an exciting time for Onyx. Over the last year, Onyx has been widely adopted by companies with high volumes of data and strict latency requirements. Distributed Masonry, the entity that commercially supports Onyx, has had the privilege of seeing the platform used to solve a wide variety of problems. After more than a year of providing consulting services and training, Distributed Masonry has taken the next step and begun building a new product on top of Onyx. We are pleased to announce that we have raised an initial seed round of $500,000, which will accelerate development of this product as well as Onyx core.

So, what exactly are we building? To answer that question, we have to go back to the primary problem Onyx was built to solve. Onyx was intended to substantially reduce the cost of building distributed computation pipelines. In many ways, this has succeeded beyond what we could have hoped for. We’ve seen teams build complex systems requiring highly dynamic behavior at accelerated speeds. Onyx was architected to handle modern data processing problems by embracing principles that many developers already understand to be valuable at the language level: immutability, simple data types, and pure functions.

Over time, we have learned that there are areas of this problem that would benefit from an integrated solution. Some particularly complex problems are inherent in building reliable, consistent, and fault-tolerant data systems. In particular, networking, DevOps, and monitoring stand out as impediments when building fast, reliable data pipelines. While terms such as “serverless” have been on the rise, we are going in a different direction. We do not wish to pretend that there are no servers involved, or that your code does not affect outcomes. Instead, we wish to improve your ability to deploy, manage, and scale the servers operating your data pipelines, monitor them, and meet your outcomes. We will achieve this by giving users insight into how their pipelines are behaving and how to solve their problems.

Onyx’s design model has intrinsically decreased the time-to-market and engineering effort of delivering data pipelines. We’re happy to share that we’ve begun work on a next-generation service: a combined streaming IDE and Platform as a Service built on top of Onyx.

Harnessing Onyx’s design model, we’re able to do something that few other technologies can: offer a spectrum of abstraction levels, from the underlying mediums themselves (e.g. SQL, Kafka, Redis, HTTP, S3) all the way up to the APIs built on those mediums (e.g. Mailchimp, S3 serialization formats). To facilitate higher abstractions, we recently introduced “task bundles”, which reduce incidental complexity for developers. In addition to a fully hosted environment, users will be able to select from a library of task bundles: concise computational blocks that can be linked to one another.
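To make the idea concrete, here is a minimal sketch of the task-bundle convention as we understand it: a plain Clojure map that packages a task’s catalog entry together with the lifecycles it needs, so it can be dropped into a job and linked to other bundles. The plugin and lifecycle names (my.plugin/…) are hypothetical placeholders, not a published Onyx plugin.

```clojure
;; A task bundle: one self-contained map carrying everything a single task
;; needs (catalog entry + lifecycles). Jobs are assembled by linking bundles.
;; :my.plugin/read-lines and :my.plugin/read-lines-calls are hypothetical.
(defn read-lines-task
  "Builds a task bundle for a hypothetical line-reading input task."
  [task-name batch-size]
  {:task {:task-map {:onyx/name task-name
                     :onyx/plugin :my.plugin/read-lines
                     :onyx/type :input
                     :onyx/medium :file
                     :onyx/batch-size batch-size
                     :onyx/doc "Reads lines from a file as segments"}
          :lifecycles [{:lifecycle/task task-name
                        :lifecycle/calls :my.plugin/read-lines-calls}]}})

;; Usage sketch: merge the bundle's pieces into an existing job map.
(def bundle (read-lines-task :read-lines 20))
;; (-> job
;;     (update :catalog conj (get-in bundle [:task :task-map]))
;;     (update :lifecycles into (get-in bundle [:task :lifecycles])))
```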

We’re thrilled to reveal more of what’s in progress over the coming months. Onyx’s central goal is to reduce complexity and decrease engineering costs, and with our upcoming services, you’ll have a unique path to move from zero to production faster than ever.

Finally, we’d like to clearly address a number of concerns:

Onyx will remain open source.

Onyx will continue to be actively developed; it will form a key part of our managed offering and will be dogfooded as such.

Onyx will not be relicensed, and will remain protected under the Eclipse Public License.

Commercial support for companies using Onyx in-house remains available.

In general, everything will remain exactly as it is. Our product is built directly on top of the existing Onyx infrastructure, and Onyx will continue to be the workhorse that powers our latest creations.

By Luke