Earlier this year, Information Insights announced a partnership with jSonar, enhancing data security capabilities with the SonarG solution for further optimizing Guardium environments. Led by Ron Ben Natan, former CTO and founder of Guardium, this additional layer of protection takes advantage of next-generation Big Data technology to enhance and expand the platform's resources in a number of key areas.
Since that announcement, I've met with many of jSonar's Guardium clients to discuss SonarG, receiving feedback on our approach along with insight into their current challenges and goals for expanding the functionality and value they get from Guardium.
We were quickly able to pinpoint the source of SonarG's enthusiastic adoption: it provides a powerful set of capabilities that directly address the challenges and goals facing the majority of Guardium deployments.
Over the next few weeks, we will outline the findings from these conversations, addressing common customer concerns and showing how SonarG solves them. We will focus on three major areas of enterprise data management: infrastructure optimization, improving data access, and enabling security analytics.
The majority of clients I spoke with are continuing to expand their Guardium footprint, driven by increasing database counts and a need to open up their policies to capture and monitor more sensitive data. Knowing that this growth translates directly into increased infrastructure and operational costs, they are looking for complementary technology that minimizes the "care and feeding" overhead of Guardium while also reducing costs. The overwhelming message was: "How can we…?"
- …reduce infrastructure and storage costs while collecting more data?
- …simplify the collection architecture to enable better use of the data?
- …eliminate Aggregators and their processes, latency, and instability?
- …increase retention periods from 15-45 days to a year or more?
We designed SonarG to simplify the Guardium architecture and help customers focus more on the data output and less on the challenges of collecting and managing what is often many terabytes of activity data distributed across many appliances.
Looking at the architectural diagram, a key change is clear: Aggregators are eliminated. All Collectors send their data to a common SonarG warehouse, which is typically a single commodity server or instance optimized for low cost, large storage, and high-performance queries.
Nothing changes on the S-TAP, Collector, and Central Manager fronts, other than enabling the Collectors to push data to the warehouse using an extraction method that was jointly developed with IBM using data mart technology. With the right patch level and a couple of simple scripts, data extraction is up and running. You can even test this in parallel with your Aggregators, since the SonarG system can run concurrently.
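If you want a concrete sense of what a parallel test can look like, here is a minimal sketch that tallies a day of extracted activity per Collector so you can compare the totals against your existing aggregator reports. It assumes the SonarG warehouse is reachable through jSonar's MongoDB-compatible API; the database name, collection name, field names, and connection details below are illustrative assumptions, not the documented SonarG schema.

```python
# Minimal sketch: count yesterday's extracted session records per Collector,
# assuming the SonarG warehouse exposes a MongoDB-compatible API.
# NOTE: the database ("sonargd"), collection ("session"), and field names
# ("Timestamp", "Collector") are illustrative assumptions; consult your
# SonarG documentation for the schema your extraction actually produces.
from datetime import datetime, timedelta

from pymongo import MongoClient

# Connection details are examples only.
client = MongoClient("mongodb://guardium_reader:secret@sonarg.example.com:27117/")
db = client["sonargd"]

# Yesterday, midnight to midnight (UTC).
start = datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0) - timedelta(days=1)
end = start + timedelta(days=1)

pipeline = [
    {"$match": {"Timestamp": {"$gte": start, "$lt": end}}},
    {"$group": {"_id": "$Collector", "sessions": {"$sum": 1}}},
    {"$sort": {"sessions": -1}},
]

for row in db["session"].aggregate(pipeline):
    print(f"{row['_id']}: {row['sessions']} sessions")
```

If the per-Collector counts line up with what your Aggregators report for the same day, you have good evidence that the parallel extraction is capturing the full activity stream.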
With the SonarG approach, you can embrace broader policies and collect much more data without fear of negatively impacting your Guardium environment.
This simple change in architecture provides a number of key benefits that tie directly into the challenges highlighted earlier:
- We've eliminated Aggregators completely while consolidating data into a central data warehouse and cost-effectively extending retention periods
- We've reduced Collector storage from 600GB to 60GB and eliminated the challenge of managing long-term data on the Collectors
- We've reduced collection latency from 24 hours to 1 hour
- We've increased Collector throughput by 10-20%
The next blog in the series will look into how SonarG can greatly improve access to increasingly large volumes of data, securely, across teams. Think self-service reporting and the hundreds of different tools your team can use to access relevant data securely.
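As a small preview of that flexibility: because the warehouse speaks a MongoDB-compatible API, general-purpose data tools can read from it directly. The sketch below pulls a slice of activity into a pandas DataFrame for ad hoc analysis; as before, the collection and field names are hypothetical placeholders rather than the documented SonarG schema.

```python
# Hypothetical sketch: load a slice of activity data into pandas for ad hoc
# analysis over the same MongoDB-compatible API. The collection ("full_sql")
# and field names are placeholder assumptions, not the documented SonarG schema.
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://analyst:secret@sonarg.example.com:27117/")

cursor = client["sonargd"]["full_sql"].find(
    {"DB User": "APP_SVC"},                # filter on a database account
    {"_id": 0, "Timestamp": 1, "Sql": 1},  # project only the fields we need
).limit(10_000)

df = pd.DataFrame(list(cursor))
print(df.head())
```

The same style of connection works from any MongoDB-aware reporting tool, which is what enables the self-service access the next post will cover.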
Guest Author: Chris Brown, jSonar