How can you add real-time to your Big Data application stack? How can you leverage in-memory computing to ensure you maintain consistency, scalability, transactionality, reliability, and high availability? Answers here.

How to Add Real Distributed Transactions to MongoDB – Ron Zavner

From time to time, I like to visit NoSQL conferences and talk to people from the community to see how they solve their problems. I find it more neutral to do this sometimes, as opposed to speaking to our customers, as they might be biased 🙂. In any case, it seems that the hype…



The Next Big Thing in Big Data

As we move to fast data, the emphasis shifts to processing big data at speed. Achieving speed without compromising on scale pushes the limits of how most existing big data solutions work, and drives new models and technologies for breaking the current speed boundaries. New advances in hardware infrastructure, such as flash drives, offer great potential for breaking the current speed limit, which is bounded mostly by the performance of hard disk drives.

When we reach a point where we need to change many of the assumptions and much of the architecture of existing databases to take advantage of new technologies and devices such as flash, that's a clear sign that local optimization isn't going to cut it. This calls for a new disruption.

Read more of this article on VentureBeat.

To learn more about the GigaSpaces solution, check out our white paper, XAP MemoryXtend - Massive Application Storage Capacity for Real-Time Applications.

Process more data at faster speeds with XAP MemoryXtend

Organizations and enterprises have a constant need for fast processing of ever-growing datasets. To address this need, most companies turn to in-memory computing for real-time processing, application scalability, and high availability. However, in-memory computing can be expensive due to the cost of RAM, which can take an otherwise effective in-memory solution off the table.

Read about a potential solution below that stores data on both RAM and SSD and allows you to process thousands of requests at a time.
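The RAM-plus-SSD approach can be pictured as a two-tier store: a bounded hot tier in memory that spills least-recently-used entries to a larger, slower flash tier and promotes them back on access. The sketch below is a toy illustration of that idea, not the MemoryXtend implementation; the class and tier names are invented for the example.

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a bounded in-memory (RAM) tier that evicts
    least-recently-used entries to a slower, larger (SSD-like) tier.
    Illustrative only -- not how MemoryXtend is actually implemented."""

    def __init__(self, ram_capacity):
        self.ram_capacity = ram_capacity
        self.ram = OrderedDict()   # hot tier, LRU-ordered
        self.ssd = {}              # cold tier (stand-in for flash storage)

    def put(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)            # mark as most recently used
        while len(self.ram) > self.ram_capacity:
            cold_key, cold_value = self.ram.popitem(last=False)
            self.ssd[cold_key] = cold_value  # spill LRU entry to flash

    def get(self, key):
        if key in self.ram:                  # RAM hit
            self.ram.move_to_end(key)
            return self.ram[key]
        if key in self.ssd:                  # SSD hit: promote back to RAM
            value = self.ssd.pop(key)
            self.put(key, value)
            return value
        raise KeyError(key)
```

The point of the design is that the working set stays in RAM at memory speed, while the full dataset can exceed RAM capacity by overflowing onto flash.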

Learn more about XAP MemoryXtend with SanDisk’s ZetaScale technology.

GigaSpaces is excited to announce a joint partnership with SanDisk today!

Through this partnership we have built a new product, called XAP MemoryXtend, that gives you extreme processing at lower costs. How did it all start? We all know that there's a growing need for faster processing of constantly growing data. Everyone…


Speed Saves Lives!

GigaSpaces will be participating in the DATA360 Conference in Mountain View, CA.

Paul Kudrle, Lead Developer at Pharmacy OneSource (a Wolters Kluwer Health company), will be presenting on behalf of GigaSpaces. Paul will be speaking about Improving Patient Outcomes through Big Data Real-Time Surveillance: “Physicians today must sift through a tremendous amount of data in order to make timely decisions that enhance patient outcomes while simultaneously reducing costs. Get a high-level overview of how one SaaS company built a big data real-time patient surveillance engine using an innovative blend of solutions.”

Visit the GigaSpaces booth and get the latest technological updates.

More details on the DATA360 Conference

Replication as a Service (Part One)

Cloud computing vendors provide a variety of APIs for deploying and interacting with the services they offer. These APIs have grown in number along with the variety of services. Initially, only the ability to request and manipulate cloud compute resources was provided. This has grown to include many other infrastructure-level services such as load balancing, storage, identity, DNS, images, and networking. From there, APIs have moved into platform-level services, including blob storage, backup, queuing, caching, databases, big data, and many more.

In a sense, Cloudify unifies several of these services behind a declarative (recipe) facade. This allows users to interact with the myriad APIs in a portable way, either through Cloudify's generic API (in the case of block storage) or through recipes.

The goal of creating a replication-as-a-service capability goes in a different direction: rather than standardizing existing APIs, it creates a standardized, cloud-neutral (and potentially inter-cloud) service that is not provided by the cloud vendors themselves. Replication as a service is built upon recent work that brought GigaSpaces XAP to the cloud. Using the existing xap-gateway recipe, a replication network can already be constructed by hand. This post, and the ones to follow, describe the process of turning this basic capability into a true cloud service, complete with multi-cloud support and a REST API.
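As a rough illustration of what a cloud-neutral REST API for such a service might accept, the sketch below builds a JSON request body for creating a replication group that links gateways across two cloud sites. The endpoint shape, field names, and `build_group_request` helper are all hypothetical assumptions for the example; the actual API is the subject of the later posts in this series.

```python
import json

# Hypothetical request shape for a cloud-neutral replication REST API.
# Field names ("name", "gateways", "site", "role") are illustrative only.

def build_group_request(name, sites):
    """Build the JSON body for creating a replication group with one
    gateway per cloud site (e.g. an XAP cluster on each of two clouds)."""
    if len(sites) < 2:
        raise ValueError("a replication group needs at least two sites")
    return json.dumps({
        "name": name,
        "gateways": [
            {"site": site, "role": "active"}  # active-active replication
            for site in sites
        ],
    })
```

A client for such a service would POST this body to a group-creation endpoint (say, `/replication/groups`, a placeholder path), and the service would translate it into the per-cloud gateway deployments that today have to be wired up by hand with the xap-gateway recipe.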