How can you add real-time processing to your big data application stack? How can you leverage in-memory computing while maintaining consistency, scalability, transactionality, reliability, and high availability? Find answers below.

The Next Big Thing in Big Data

As we move to fast data, there’s more emphasis on processing big data at speed. Getting speed without compromising on scale pushes the limits of how most existing big data solutions work and drives new models and technologies for breaking the current speed boundaries. Advances in hardware infrastructure, particularly new flash drives, offer great potential for breaking the current speed limit, which is bounded mostly by the performance of hard disk drives.

When we reach a point at which we need to change many of the assumptions and much of the architecture of existing databases to take advantage of new technology and devices such as flash, that’s a clear sign that local optimization isn’t going to cut it. This calls for a new disruption.

Read more of this article on VentureBeat.

To learn more about the GigaSpaces solution, check out our white paper, XAP MemoryXtend - Massive Application Storage Capacity for Real-Time Applications.

Process more data at faster speeds with XAP MemoryXtend

Organizations and enterprises have a constant need for fast processing of bigger and bigger datasets. To address this need, most companies turn to in-memory computing for real-time processing, application scalability, and high availability. However, in-memory computing can be expensive due to the cost of RAM, which can take an otherwise effective in-memory solution off the table.

Read below about a potential solution that stores data on both RAM and SSD and allows you to process thousands of requests at a time.
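To make the idea of a combined RAM-and-SSD store concrete, here is a minimal Python sketch of tiered key-value storage: a hot tier held in memory and an overflow tier persisted to disk files (standing in for SSD). The `TieredStore` class, its capacity threshold, and its spill behavior are invented for this illustration only and bear no relation to how XAP MemoryXtend is actually implemented.

```python
import os
import pickle
import tempfile

class TieredStore:
    """Illustrative two-tier key-value store (NOT the MemoryXtend design):
    hot entries live in a RAM dict; once the RAM tier is full, new entries
    spill to files on disk, standing in for an SSD tier."""

    def __init__(self, ram_capacity=1000, spill_dir=None):
        self.ram_capacity = ram_capacity
        self.ram = {}  # hot tier, bounded by ram_capacity
        self.spill_dir = spill_dir or tempfile.mkdtemp()

    def _spill_path(self, key):
        return os.path.join(self.spill_dir, f"{key}.bin")

    def put(self, key, value):
        if key in self.ram or len(self.ram) < self.ram_capacity:
            self.ram[key] = value
        else:
            # RAM tier is full: persist the entry to the disk tier instead
            with open(self._spill_path(key), "wb") as f:
                pickle.dump(value, f)

    def get(self, key):
        if key in self.ram:
            return self.ram[key]  # fast path: served from memory
        path = self._spill_path(key)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return pickle.load(f)  # slow path: served from disk
        raise KeyError(key)
```

The point of the sketch is the trade-off: reads that hit the RAM tier stay fast, while the disk tier lets total capacity grow well beyond what RAM alone could afford.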

Learn more about XAP MemoryXtend with SanDisk’s ZetaScale technology.

GigaSpaces is excited to announce a joint partnership with SanDisk today!

We are excited to announce a joint partnership with SanDisk today! Through this partnership we have built a new product, XAP MemoryXtend, that gives you extreme processing at lower cost. How did it all start? We all know that there’s a growing need for faster processing of constantly growing data.


Speed Saves Lives!

GigaSpaces will be participating in the DATA360 Conference in Mountain View, CA.

Paul Kudrle, Lead Developer at Pharmacy OneSource (a Wolters Kluwer Health Company), will be presenting on behalf of GigaSpaces. Paul will be speaking about Improving Patient Outcomes through Big Data Real-Time Surveillance: “Physicians today must sift through a tremendous amount of data in order to make timely decisions that enhance patient outcomes while simultaneously reducing costs. Get a high-level overview of how one SaaS company built a big data real-time patient surveillance engine using an innovative blend of solutions.”

Visit the GigaSpaces booth and get the latest technological updates.

More Details on Data360 Conference

Replication as a Service (Part One)

Cloud computing vendors provide a variety of APIs for deploying and interacting with the services they offer. These APIs have grown in number along with the variety of services being offered. Initially, the ability to request and manipulate cloud compute resources was provided. This has grown to include many other infrastructure-level services such as load balancing, storage, identity, DNS, images, and networking. From there, APIs have moved into platform-level services including blob storage, backup, queuing, caching, databases, big data, and many more.

In a sense, Cloudify unifies several of these services behind a declarative (recipe) facade. This allows users to interact with the myriad APIs through Cloudify’s generic API (in the case of block storage) or recipes in a portable way. The goal of creating a replication-as-a-service capability goes in a different direction: rather than standardizing existing APIs, it creates a standardized, cloud-neutral (and potentially inter-cloud) service that is not provided by the cloud vendors themselves.

Replication as a service is built upon recent work that brought GigaSpaces XAP to the cloud. Using the existing xap-gateway recipe, a replication network can be constructed by hand. This post, and the ones to follow, describe the process of turning this basic capability into a true cloud service, complete with multi-cloud support and a REST API.
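As a rough illustration of where a cloud-neutral replication service could head, a client might describe the desired replication topology declaratively and submit it to a REST endpoint. The payload shape, field names, and site identifiers in this Python sketch are purely hypothetical assumptions for illustration; they are not the actual Cloudify, xap-gateway, or XAP API:

```python
import json

def replication_request(source_site, target_sites, space_name):
    """Build a request body for a HYPOTHETICAL replication-as-a-service
    REST endpoint. Field names here are invented for illustration and do
    not reflect the real Cloudify/XAP interfaces."""
    return {
        "space": space_name,        # the data grid (space) to replicate
        "source": source_site,      # cloud/site where writes originate
        "targets": target_sites,    # one or more remote sites to mirror to
        "mode": "async",            # WAN gateways typically replicate asynchronously
    }

# A client would POST this JSON to the (hypothetical) service endpoint.
body = replication_request("ec2-us-east", ["openstack-eu", "rackspace-uk"], "mySpace")
print(json.dumps(body, indent=2))
```

The design intent sketched here is that the caller names sites abstractly, so the same topology description stays portable across clouds, in keeping with the recipe-based portability described above.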


From the press:

"We have recently released the latest version of our in-memory computing platform, XAP 9.7. The XAP in-memory computing platform helps users quickly build and deploy high-performance applications geared for blazing-fast event processing and real-time analytics. Two main features of version 9.7 are built-in MongoDB integration and LINQ support.

XAP’s MongoDB integration enables users to use MongoDB as a backing store for the XAP in-memory data grid, combining XAP’s high performance and full transaction support with MongoDB’s scalability and flexibility. The newly introduced LINQ support allows XAP.NET users to submit complex queries to the in-memory data grid using the built-in LINQ syntax, making it more native and easier to use for .NET developers.

XAP 9.7 also contains a number of other enhancements. These include improvements to the query projection mechanism, which now allows projecting properties of nested objects, and the ability to prioritize specific racks or even data centers when deploying the data grid. Other enhancements include improvements to the allocation of primaries and backups, unique constraint enforcement for attributes of objects written to the grid, and a variety of bug fixes.

XAP’s existing built-in Cassandra integration, coupled with the newly released MongoDB integration and enhanced integration interfaces that support metadata propagation to backing data stores, allows users to enjoy the full power of the in-memory data grid and NoSQL databases in the same application. Native .NET users can now also leverage the powerful LINQ syntax.”


*Published on DataCenter Post