How can you add real-time capabilities to your big data application stack? How can you leverage in-memory computing while maintaining consistency, scalability, transactionality, reliability, and high availability? Answers here.

Speed Saves Lives!

GigaSpaces will be participating in the Data360 Conference in Mountain View, CA.

Paul Kudrle, Lead Developer at Pharmacy OneSource (a Wolters Kluwer Health company), will be presenting on behalf of GigaSpaces. Paul will be speaking about Improving Patient Outcomes through Big Data Real-Time Surveillance: “Physicians today must sift through a tremendous amount of data in order to make timely decisions that enhance patient outcomes while simultaneously reducing costs. Get a high-level overview of how one SaaS company built a big data real-time patient surveillance engine using an innovative blend of solutions.”

Visit the GigaSpaces booth and get the latest technological updates.

More Details on Data360 Conference

Replication as a Service (Part One)

Cloud computing vendors provide a variety of APIs for deploying and interacting with the services they offer. These APIs have grown in number along with the variety of services being offered. Initially, the ability to request and manipulate cloud compute resources was provided. This has grown to include many other infrastructure-level services such as load balancing, storage, identity, DNS, images, and networking. From there, APIs have moved into platform-level services including blob storage, backup, queuing, caching, databases, big data, and many more. In a sense, Cloudify unifies several of these services behind a declarative (recipe) facade. This allows users to interact with the myriad APIs by interacting with Cloudify’s generic API (in the case of block storage) or with recipes in a portable way.

The goal of creating a replication-as-a-service capability goes in a different direction: rather than standardizing existing APIs, it creates a standardized, cloud-neutral (and potentially inter-cloud) service that is not provided by the cloud vendors themselves. Replication as a service is built upon recent work that brought GigaSpaces XAP to the cloud. Using the existing xap-gateway recipe, a replication network can already be constructed by hand. This post, and the ones to follow, describe the process of turning this basic capability into a true cloud service, complete with multi-cloud support and a REST API.

http://bit.ly/1cSut7k

A New Version of the In-Memory Computing Platform

From the press:

"We have recently released the latest version of our in-memory computing platform − XAP 9.7. The XAP in- memory computing platform helps users quickly build and deploy high-performance applications geared for blazing fast event processing and real-time analytics. Two main features of version 9.7 are built-in MongoDB integration, and LINQ support.,

XAP’s MongoDB integration enables users to use MongoDB as a backing store for the XAP in-memory data grid, combining XAP’s high performance and full-blown transaction support with MongoDB’s scalability and flexibility. The newly introduced LINQ support allows XAP.NET users to submit complex queries to the in-memory data grid using the built-in LINQ syntax, making it feel more native and easier to use for .NET developers.

XAP 9.7 also contains a number of other enhancements. These include improvements to the query projection mechanism, which now allows projecting properties of nested objects, and the ability to prioritize specific racks or even data centers when deploying the data grid. Other enhancements include improved allocation of primaries and backups, unique-constraint enforcement for attributes of objects written to the grid, and a variety of bug fixes.

XAP’s existing built-in Cassandra integration, coupled with the newly released MongoDB integration and enhanced integration interfaces that support metadata propagation to backing data stores, allows users to enjoy the full power of the in-memory data grid and NoSQL databases in the same application. Native .NET users can now also leverage the powerful LINQ syntax.”


*Published on DataCenter Post
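To make the quoted features a bit more concrete, here is a minimal Java sketch of writing to and querying the XAP in-memory data grid with a query projection, based on the OpenSpaces API. The Order class, the space name, and the property names are hypothetical examples, and configuring MongoDB (or Cassandra) as the write-behind backing store is a separate persistency configuration that is not shown.

    import org.openspaces.core.GigaSpace;
    import org.openspaces.core.GigaSpaceConfigurer;
    import org.openspaces.core.space.UrlSpaceConfigurer;
    import com.gigaspaces.annotation.pojo.SpaceClass;
    import com.gigaspaces.annotation.pojo.SpaceId;
    import com.j_spaces.core.client.SQLQuery;

    // Hypothetical entry type stored in the grid.
    @SpaceClass
    class Order {
        private String id;
        private String customerName;
        private Double amount;

        public Order() {}
        public Order(String id, String customerName, Double amount) {
            this.id = id;
            this.customerName = customerName;
            this.amount = amount;
        }

        @SpaceId(autoGenerate = false)
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getCustomerName() { return customerName; }
        public void setCustomerName(String customerName) { this.customerName = customerName; }
        public Double getAmount() { return amount; }
        public void setAmount(Double amount) { this.amount = amount; }
    }

    public class GridProjectionExample {
        public static void main(String[] args) {
            // Start an embedded space for demonstration purposes.
            GigaSpace gigaSpace = new GigaSpaceConfigurer(
                    new UrlSpaceConfigurer("/./demoSpace")).gigaSpace();

            // Write a few entries to the in-memory data grid.
            gigaSpace.write(new Order("1", "alice", 120.0));
            gigaSpace.write(new Order("2", "bob", 80.0));

            // Read with a projection so that only the listed properties are
            // returned; the 9.7 release extends projections to properties of
            // nested objects (e.g. "customer.name" on an embedded object).
            SQLQuery<Order> query = new SQLQuery<Order>(Order.class, "amount > ?");
            query.setParameter(1, 100.0);
            query.setProjections("id", "customerName");

            for (Order match : gigaSpace.readMultiple(query)) {
                System.out.println(match.getId() + " -> " + match.getCustomerName());
            }
        }
    }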

Use XAP with Kafka: continuous replication of in-memory data to big data stores

Apache Kafka is a distributed publish-subscribe messaging system. It is designed to support persistent messaging with an O(1) disk structure that delivers constant-time performance even with many terabytes of stored messages. Kafka provides high throughput: even on very modest hardware it can support hundreds of thousands of messages per second. It partitions messages over Kafka servers and distributes consumption over a cluster of consumer machines while maintaining per-partition ordering semantics. Apache Kafka is often used to perform parallel data loads into Hadoop.
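As a simple illustration of the publish-subscribe model, here is a minimal producer sketch using the modern Kafka Java client (which postdates the integration described here); the broker address and topic name are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SimpleKafkaProducer {
        public static void main(String[] args) {
            // Placeholder broker address; string key/value serializers keep the example simple.
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
            try {
                // Messages with the same key go to the same partition, which is
                // how Kafka preserves per-partition ordering for consumers.
                producer.send(new ProducerRecord<String, String>("events", "order-1", "{\"amount\":120}"));
            } finally {
                producer.close();
            }
        }
    }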

The best practice integrates GigaSpaces with Apache Kafka: GigaSpaces pushes IMDG operations to Kafka via write-behind, making them available to subscribers such as Hadoop or other data warehousing systems that use the data for reporting and processing.

http://docs.gigaspaces.com/sbp/kafka-integration.html
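Conceptually, the integration hangs a synchronization endpoint off the grid's write-behind (mirror) path and forwards each batch of operations to a Kafka topic. The sketch below illustrates that pattern and is not the code from the linked best practice; the synchronization-endpoint method names follow my reading of the XAP sync API and should be checked against the documentation above, while the topic name, keying, and serialization are simplified placeholders.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import com.gigaspaces.sync.DataSyncOperation;
    import com.gigaspaces.sync.OperationsBatchData;
    import com.gigaspaces.sync.SpaceSynchronizationEndpoint;

    // Illustrative sketch: forward write-behind batches from the data grid to Kafka.
    public class KafkaSynchronizationEndpoint extends SpaceSynchronizationEndpoint {

        private final KafkaProducer<String, String> producer;

        public KafkaSynchronizationEndpoint(KafkaProducer<String, String> producer) {
            this.producer = producer;
        }

        @Override
        public void onOperationsBatchSynchronization(OperationsBatchData batchData) {
            // Each batch holds the grid operations accumulated since the last flush.
            for (DataSyncOperation operation : batchData.getBatchDataItems()) {
                if (!operation.supportsDataAsObject()) {
                    continue; // keep the sketch simple: only forward object-backed entries
                }
                String typeName = operation.supportsGetTypeDescriptor()
                        ? operation.getTypeDescriptor().getTypeName()
                        : "unknown";
                String payload = operation.getDataSyncOperationType() + ":"
                        + operation.getDataAsObject();
                // Key by type name so all operations on a type share a partition
                // and keep their relative order for downstream consumers.
                producer.send(new ProducerRecord<String, String>("grid-updates", typeName, payload));
            }
        }
    }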


Posted on TechTarget: Avanza Bank relaunches its entire system based on GigaSpaces XAP

One benefit of running enterprise applications on in-memory databases is the ability to perform real-time analytics on transactions. A Swedish online bank is among the few companies that have invested in cloud-based architecture and in-house development to make it happen.

In early June, Avanza Bank, headquartered in Stockholm, went live on the cloud-enabled, in-memory eXtreme Application Platform (XAP) from New York-based GigaSpaces Technologies Inc.

"It’s a complete relaunch of the entire bank based on GigaSpaces," said Avanza Bank CIO Ronnie Bodinger. "We have closed down the old system."

Though it may be considered visionary now, the Avanza architecture is destined to become a mainstream best practice at companies with similar needs for scalability and real-time analytics, analyst Massimo Pezzini wrote in a report for Gartner Inc., a Stamford, Conn.-based research company. But he cautioned that “following Avanza Bank’s leading-edge approach requires advanced IT skills, technical prowess, and the willingness to take business and technical risks by adopting radically innovative design patterns and technologies.”

The banking platform’s radical redesign also provides an early view of where most enterprise applications, including ERP, are probably headed, according to Gartner analyst Christian Hestermann. Mission-critical functions will increasingly be transferred to in-memory technology that runs in the cloud, making real-time analytics possible on transactional data that historically could only be extracted in daily batches from slower on-premises systems.

Full story on TechTarget - http://bit.ly/KuEEHi